
    Rethinking identity in the age of AI impersonation



    For decades, trust in business hinged on simple human instincts: when we saw a familiar face or heard a trusted voice, we instinctively believed we were dealing with the real person. That assumption is now dangerous.

    In the past 18 months, deepfakes have moved from novelty to weapon. What started as clumsy internet pranks has become a mature cybercriminal toolset. Finance teams have been duped into wiring millions after video calls with “executives” who never logged on. The secretary of state in Florida was impersonated to contact foreign ministers. Even the CEO of Ferrari was impersonated in a fraud attempt. These are not edge cases; they’re a glimpse of what’s to come.

    The cost is measured not only in money, but in the erosion of confidence. When we can no longer believe what we see or hear, the very foundation of digital trust begins to crumble.

    Why now?

    What’s changed is not intent; fraudsters have always been inventive. What’s changed is accessibility. Generative AI (GenAI) has democratised deception. What once required specialist labs and heavy computing power can now be done with an app and a laptop. A single audio clip scraped from a webinar, or a handful of selfies on social media, is enough to create a credible voice or face.

    We are already seeing the fallout. Gartner research found that 43% of cyber security leaders had experienced at least one deepfake-enabled audio call, and 37% had encountered deepfakes in video calls. The quality is improving, the volume is accelerating, and the barrier to entry has collapsed.

    Technology alone can’t save us

    Vendors have not stood still. Voice recognition providers are embedding deepfake detection into their platforms, using neural networks to score the likelihood that a caller is synthetic. Face recognition systems are layering in liveness checks, metadata inspection and device telemetry to spot signs of manipulation. These are necessary developments, but they are not sufficient.

    Detection is always reactive. Accuracy against last month’s fakes does not guarantee protection against this week’s. And outcomes are probabilistic: systems return risk scores, not certainties. That leaves organisations making difficult decisions at scale, deciding whom to trust and whom to challenge on the basis of signals that can never be perfect.
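    In practice, acting on a probabilistic score usually means routing the grey zone to a step-up challenge rather than forcing a binary trust decision. A minimal sketch of that tiering logic in Python, with purely illustrative thresholds (not any vendor's recommended values):

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    CHALLENGE = "challenge"  # step up: out-of-band check, extra MFA
    DENY = "deny"

def decide(risk_score: float, low: float = 0.3, high: float = 0.8) -> Decision:
    """Map a [0, 1] synthetic-media risk score to an action.

    Scores are probabilistic, so the middle band routes to a step-up
    challenge or human review instead of a hard allow/deny call.
    Thresholds here are illustrative assumptions.
    """
    if risk_score < low:
        return Decision.ALLOW
    if risk_score < high:
        return Decision.CHALLENGE
    return Decision.DENY

print(decide(0.15))  # Decision.ALLOW
print(decide(0.55))  # Decision.CHALLENGE
print(decide(0.90))  # Decision.DENY
```

    The point of the middle band is operational: it turns an imperfect signal into a workflow decision, so a borderline score triggers verification rather than silent acceptance or a false alarm.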

    The truth is that no detection tool can carry the weight of defence on its own. The deepfake problem is as much about people and processes as it is about algorithms.

    The human weak point

    Technology is only half the battle. The most costly deepfake incidents to date haven’t bypassed machines; they’ve tricked people. Employees, often under pressure, are asked to act fast: “Transfer the funds,” “Reset my MFA,” “Join this unscheduled video call.” Add a credible face or familiar voice, and hesitation evaporates.

    This is where CISOs and security and risk management leaders need to get pragmatic. Employees should never be placed in a position where a single phone call or video chat can trigger a catastrophic action. If a request feels urgent, if it involves money or access, it must be backed by additional proof.

    This isn’t about slowing business down. It’s about building resilience. Asking a question whose answer only the real person would know, escalating sensitive requests through independent channels, or mandating phishing-resistant multi-factor authentication before approvals: these are the guardrails that stop a fake from becoming a fraud. Sometimes the simplest techniques are the most effective.
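    The "independent channels" guardrail can be expressed as a simple policy check: no sensitive action proceeds on the strength of a single interaction, however convincing. A sketch, with hypothetical action and channel names chosen for illustration:

```python
# Sensitive actions that must never rest on a single point of trust.
# Names are illustrative, not from any specific product or policy.
SENSITIVE_ACTIONS = {"wire_transfer", "mfa_reset", "access_grant"}

def may_proceed(action: str, confirmations: set[str], required: int = 2) -> bool:
    """Allow a sensitive action only with confirmations from at least
    `required` independent channels, e.g. a callback to a known number,
    a ticketing-system approval, or phishing-resistant MFA."""
    if action not in SENSITIVE_ACTIONS:
        return True
    return len(confirmations) >= required

# A convincing "CEO" on a video call is not enough on its own:
print(may_proceed("wire_transfer", {"video_call"}))                      # False
print(may_proceed("wire_transfer", {"callback_known_number", "fido2"}))  # True
```

    The design choice is that the policy counts channels, not how persuasive any one of them was; a deepfake can defeat one channel, but forcing the attacker to compromise two independent ones raises the cost sharply.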

    The battle for trust

    The implications extend beyond corporate losses. Deepfakes are now fuelling disinformation campaigns, spreading political falsehoods, and eroding trust in public institutions. In some cases, genuine footage is dismissed as “fake news”. Even authenticity is under suspicion.

    Governments are beginning to respond. Denmark and the UK have introduced or are considering new laws to criminalise the creation and sharing of sexually explicit deepfakes. In the United States, new legislation makes non-consensual deepfake media explicitly illegal. These are important steps, but the law alone cannot keep pace with the speed of generative AI.

    For businesses, the responsibility is immediate and unavoidable. CISOs cannot wait for a perfect regulatory solution. They need to assume that deception will be part of every interaction and design their organisation accordingly.

    Designing with deception in mind

    So how should organisations act? The answer lies in combining layered technical safeguards with hardened business processes and a culture of healthy scepticism. CISOs should:

    • Use deepfake detection tools, but don’t rely on them in isolation.
    • Ensure that critical workflows such as money transfers, identity recovery, and executive approvals are never reliant on a single point of trust.
    • Equip employees with the training and confidence to challenge even a familiar face on screen if something feels off.

    Take biometric systems as an example. A layered approach builds real resilience: presentation attack detection (catching artefacts shown to a camera), injection attack detection (spotting synthetic video streams), and context signals from devices or user behaviour. In practice, it may not be the deepfake itself that is detected, but the unusual patterns that come with its use.
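    As an illustration of why layering helps, the per-layer signals can be fused into one overall risk figure. The sketch below assumes the layers are independent and uses invented detector names and scores; real systems weight and correlate signals far more carefully:

```python
def combined_risk(signals: dict[str, float]) -> float:
    """Combine per-layer risk scores (each in [0, 1]) under a naive
    independence assumption: overall risk = 1 - product of (1 - risk_i).
    Any single strong signal, or several weak ones, drives risk up."""
    survival = 1.0
    for score in signals.values():
        survival *= (1.0 - score)
    return 1.0 - survival

session = {
    "presentation_attack": 0.10,  # artefacts shown to the camera
    "injection_attack": 0.05,     # synthetic video stream indicators
    "device_context": 0.40,       # unusual device or behaviour pattern
}

print(round(combined_risk(session), 3))  # 0.487
```

    Note that in this example no single detector is confident, yet the combined figure is high enough to warrant a challenge, which mirrors the point above: it is often the surrounding anomalies, not the deepfake itself, that give the game away.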

    At the end of the day, CISOs and security and risk management leaders need to shift how they think about identity. It’s no longer something that can be assumed from sight or sound; it has to be proven. 

    The bigger picture

    We are in an era where seeing is no longer believing. Identity, the cornerstone of digital trust, is being redefined by adversaries who can fabricate it at will. The organisations that adapt quickly by layering technical safeguards with resilient business processes will blunt the threat. Those that don’t risk not just fraud losses but a collapse in trust, both inside and outside their walls.

    Deepfakes won’t be solved by one clever tool or a procurement decision. They demand a shift in mindset: assume the face or voice in front of you could be fake and design your security accordingly.

    The attackers are moving fast. The question is whether defenders can move faster.

    Gartner analysts are exploring digital identity and trust at the Gartner Security & Risk Management Summit taking place this week in London (22–24 September).

    Akif Khan is a VP analyst at Gartner
