Gartner predicts that by 2026, 30% of enterprises will no longer consider identity verification and authentication solutions reliable in isolation, owing to attacks on face biometrics using AI-generated deepfakes.
“In the past decade, several inflection points in fields of AI have occurred that allow for the creation of synthetic images. These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient,” said Akif Khan, VP Analyst at Gartner.
Mitigating deepfake threats
Khan explained that the current standards and testing processes used to define and assess presentation attack detection (PAD) mechanisms do not cover digital injection attacks that use AI-generated deepfakes. With injection attacks having increased by 200% in 2023, a combination of PAD, injection attack detection (IAD) and image inspection is required.
“Organisations should start defining a minimum baseline of controls by working with vendors that have specifically invested in mitigating the latest deepfake-based threats using IAD coupled with image inspection,” said Khan.
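Purely as an illustration, the layered approach Khan describes can be modelled as a conjunctive check in which no single signal is trusted on its own: a deepfake injected downstream of the camera can defeat PAD alone, so IAD and image inspection must also pass. The sketch below is a minimal, assumption-laden example; the signal names, score ranges and threshold are invented for illustration and do not correspond to any vendor API or Gartner specification.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Hypothetical per-check scores in [0, 1]; higher means more likely genuine."""
    pad_score: float               # presentation attack detection (physical spoofs)
    iad_score: float               # injection attack detection (virtual cameras, feed tampering)
    image_inspection_score: float  # forensic analysis for deepfake artefacts

def verification_passes(signals: VerificationSignals, threshold: float = 0.8) -> bool:
    # Conjunctive policy: every layer must clear the threshold, so an injected
    # deepfake that fools PAD alone still fails on IAD or image inspection.
    return min(signals.pad_score,
               signals.iad_score,
               signals.image_inspection_score) >= threshold

# Example: a convincing deepfake delivered via a virtual camera may produce a
# high-quality image yet score poorly on injection detection, and is rejected.
print(verification_passes(VerificationSignals(0.9, 0.2, 0.85)))  # False
```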
Security officers and risk management leaders should also deploy device identification and behavioural analytics to help establish genuine human presence and to detect attacks on identity verification processes, as sketched below.
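One way to picture this recommendation, again only as a sketch: fuse device identification and behavioural analytics with the biometric result into a single risk score, so that a strong face match coming from an unrecognised device with bot-like input patterns still triggers step-up verification. The weights, cut-off and function names below are assumptions for illustration, not part of Gartner's guidance.

```python
def presence_risk_score(device_reputation: float,
                        behavioural_score: float,
                        biometric_score: float) -> float:
    """Hypothetical fusion of risk signals: a weighted blend of device
    identification, behavioural analytics and the biometric check itself.
    The weights are illustrative assumptions."""
    weights = {"device": 0.3, "behaviour": 0.3, "biometric": 0.4}
    return (weights["device"] * device_reputation
            + weights["behaviour"] * behavioural_score
            + weights["biometric"] * biometric_score)

# Example: a strong face match (0.95) from an unknown device (0.2) with
# bot-like input timing (0.3) yields a low overall score and is escalated.
score = presence_risk_score(device_reputation=0.2,
                            behavioural_score=0.3,
                            biometric_score=0.95)
if score < 0.7:  # illustrative cut-off
    print("Escalate: require additional verification")
```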