Gartner predicts that by 2026, attacks using AI-generated deepfakes on face biometrics will lead 30% of enterprises to conclude that such identity verification and authentication solutions are no longer reliable in isolation.
“In the past decade, several inflexion points in fields of AI have occurred that allow for the creation of synthetic images. These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient,” said Akif Khan, VP analyst at Gartner.
“As a result, organisations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake.”
Observations like these call into question whether organisations are ready for a world where AI is embedded in almost every aspect of our lives.
The proliferation of synthetic identity fraud (SIF), in which fabricated identities are created by combining real and fictitious personal information, has fuelled a surge in fraud across various industries. These synthetic identities are often used to open fraudulent accounts, obtain credit or loans, and engage in other illicit activities.
The rise of artificial intelligence (AI) technology has exacerbated this issue, enabling criminals to generate highly realistic fake identities with ease, including fabricated names, addresses, and even biometric data like facial images and voice recordings.
Organisations must adopt advanced techniques, such as behavioural analytics and biometric authentication, to detect and prevent synthetic identity fraud effectively. Failure to address this threat could result in substantial financial losses, damage to reputation, and erosion of customer trust.
Johan Fantenberg, principal solutions architect for APJ at Ping Identity, acknowledges that what makes synthetic identities worrisome is the purposeful combination of real and made-up data to create an account.
“For instance, combining a real social security number with a fake phone number, name, and email address increases the chance of creating a successful account,” he elaborates. Such an approach, he notes, can enable threat actors to claim credit and benefits such as unemployment payments and tax refunds, and to divert services from real beneficiaries. “It also has severe implications for the victim’s reputation,” he adds.
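To make the pattern concrete, here is a minimal, hypothetical Python sketch of how a synthetic identity pairs a real, stolen attribute with fabricated contact details, and why checking each attribute in isolation can pass while a cross-attribute check against a system of record fails. The records, names and lookup table are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class IdentityClaim:
    ssn: str    # real, stolen attribute
    name: str   # fabricated
    phone: str  # fabricated
    email: str  # fabricated

# Hypothetical system of record keyed by SSN (e.g. a credit bureau file).
SYSTEM_OF_RECORD = {
    "123-45-6789": {"name": "Maria Lopez", "phone": "+1-555-0100"},
}

def attribute_checks(claim: IdentityClaim) -> bool:
    """Naive per-attribute validation: each field looks plausible in isolation."""
    return (claim.ssn in SYSTEM_OF_RECORD
            and "@" in claim.email
            and claim.phone.startswith("+"))

def consistency_check(claim: IdentityClaim) -> bool:
    """Cross-attribute check: do the claimed details match the system of record?"""
    record = SYSTEM_OF_RECORD.get(claim.ssn)
    return record is not None and record["name"] == claim.name

# A synthetic identity: real SSN, fabricated everything else.
claim = IdentityClaim(ssn="123-45-6789", name="John Fake",
                      phone="+1-555-0199", email="jfake@example.com")
print(attribute_checks(claim))   # True  - every field passes on its own
print(consistency_check(claim))  # False - the name does not match the SSN's record
```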
He goes on to explain that one of the key reasons synthetic identities are so worrisome is their broad application.
“As much as we on the defence side are using good identity verification and behavioural analysis to fight against attacks, perpetrators are also refining synthetic identity attributes so that they can circumvent some of the known controls.”
“It is crucial to look into synthetic identities across different points in a user interaction, from account creation to transacting with a service,” he continues.
Synthetic identity detection challenges
Identity verification and authentication processes using face biometrics today rely on presentation attack detection (PAD) to assess the user’s liveness. “Current standards and testing processes to define and assess PAD mechanisms do not cover digital injection attacks using the AI-generated deepfakes that can be created today,” said Khan.
Fantenberg clarifies that individual pieces of information within a synthetic identity, such as a licence or a photo, may be valid or genuinely exist on their own, so a control that checks attributes in isolation may not achieve deeper identity verification. “We need to have multilayer processes on how we trust those roots of trust, and not just check the attributes but check the documents as a whole,” he suggests. “That is very important.”
He suggests that the first step is to ensure strong identity verification by conducting a detailed check against the system of record to determine whether the presented information is active and genuine. It is also essential to match details, such as photos, with the person presenting them, he continues.
“The second part is stopping automated attacks by distinguishing a bot from a human user. The other important part is to look at signals related to the device, context, and network that are associated with the request,” elaborates Fantenberg.
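A simplified Python sketch of that layering might combine a bot-detection verdict with device, network and contextual signals into one risk decision. The signal names, weights and thresholds below are hypothetical, not drawn from any particular product; a real deployment would source these signals from dedicated bot-management and fraud tooling.

```python
def assess_request(signals: dict) -> str:
    """Combine layered signals into a coarse risk decision (illustrative only)."""
    score = 0.0

    # Layer 1: bot detection - filter out automated clients first.
    if signals.get("headless_browser") or signals.get("impossible_typing_speed"):
        score += 0.5

    # Layer 2: device, network and context associated with the request.
    if signals.get("device_first_seen"):        # device never seen for this account
        score += 0.2
    if signals.get("ip_from_anonymizing_proxy"):
        score += 0.2
    if signals.get("geo_velocity_violation"):   # e.g. two countries in one hour
        score += 0.3

    # Thresholds are arbitrary, chosen only to illustrate the tiered response.
    if score >= 0.7:
        return "deny"
    if score >= 0.3:
        return "step-up"   # require additional verification before proceeding
    return "allow"

print(assess_request({"headless_browser": True, "ip_from_anonymizing_proxy": True}))
# -> "deny"
```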
Enhancing synthetic identity countermeasures
Gartner’s Khan says: “Organisations should start defining a minimum baseline of controls by working with vendors that have specifically invested in mitigating the latest deepfake-based threats using injection attack detection (IAD) coupled with image inspection.”
Experts suggest that machine learning (ML) and data science give defenders three advantages over bad actors using SIF: intelligent automation, time savings, and the compound benefits of leveraging existing IT and banking systems.
“All the defences should be embedded in a user journey or flow,” he begins. “For instance, the first line of defence in a registration or authentication flow is to detect whether it is a human or a machine, whether it is an automated attack or a genuine interactive user.”
As a result, he concludes, it is critical to check the consistency of user behaviour, including location, device information and transactional behaviour, and then make a further assessment of whether this is a genuine and qualified user.
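As a rough illustration of such a consistency check, the hypothetical Python sketch below compares a session against a stored behavioural baseline for the user. The baseline fields, session attributes and tolerance factor are invented for the example; production systems would learn these from historical data.

```python
from statistics import mean

# Hypothetical per-user baseline built from past sessions.
baseline = {
    "usual_countries": {"SG", "MY"},
    "known_devices": {"device-abc"},
    "avg_txn_amount": mean([120.0, 95.0, 150.0]),
}

def behaviour_flags(session: dict, baseline: dict) -> list[str]:
    """Return the behavioural inconsistencies observed in this session."""
    flags = []
    if session["country"] not in baseline["usual_countries"]:
        flags.append("unusual location")
    if session["device_id"] not in baseline["known_devices"]:
        flags.append("new device")
    if session["txn_amount"] > 5 * baseline["avg_txn_amount"]:
        flags.append("atypical transaction size")
    return flags

session = {"country": "RO", "device_id": "device-xyz", "txn_amount": 2400.0}
# Any flag would trigger a further assessment before treating the user as genuine.
print(behaviour_flags(session, baseline))
# -> ['unusual location', 'new device', 'atypical transaction size']
```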
Responsibility
Gartner says once the strategy is defined and the baseline is set, CISOs and risk management leaders must include additional risk and recognition signals, such as device identification and behavioural analytics, to increase the chances of detecting attacks on their identity verification processes.
Fantenberg says the responsibility does not belong to one person or one role. He says it is a collaborative approach. “Various departments are involved as stakeholders, but ultimately, a risk assessment needs to be done or agreed on regarding the risks the business can tolerate,” he clarifies.
“Another angle is that AI can be used to analyse an interaction through the received data. Data analysts and machine learning practitioners can add their knowledge to these mitigation approaches to detect synthetic identities,” he adds.
He suggests that CISOs consider adding specific capabilities to their existing landscape, for example layering identity and fraud verification and bot detection on top of current defences, rather than treating it as having to start from scratch.
“This should be seen as a complement you can apply to the most relevant transactions and interactions rather than a replacement for the capabilities that the industry has built up over the years,” concludes Fantenberg.
Click on the PodChat player and hear Fantenberg deep dive into synthetic identities, how these are evolving with AI assistance, and measures CISOs can implement to counter the rising threat.
- What are synthetic identities, how are they created, and which businesses or industries are most threatened by them?
- Given enterprises’ experience in fighting fraud, the technologies available today, including AI-driven identity management solutions, and measures like AML/KYC, why do synthetic identities remain a concern?
- Why is it challenging to detect synthetic identities during the account creation process?
- How can an organisation protect against synthetic identities created through the combination of legitimate and fake data?
- How can behavioural assessment and bot detection techniques be used to identify and prevent synthetic identity fraud?
- Who should be in charge of strategies to combat the threat of synthetic identity?