Fraud, stock-price manipulation, damage to reputation and the brand, sextortion scams that sabotage employee morale, and misinformation and disinformation. These are five deepfake scams that Forrester VP and principal analyst Jeff Pollard warns organisations must take seriously now, before they are weaponised against enterprises.
According to Jonas Walker, director of Threat Intelligence, FortiGuard Labs at Fortinet, a deepfake is a fabricated video or voice recording that attackers use to impersonate someone else for a range of purposes: financially motivated attacks, scams, or politically motivated campaigns.
LLMs facilitate deepfakes
A Reuters article quoted Deep Media as forecasting that up to 500,000 video and voice deepfakes would be shared on social media around the world. The article goes on to note that “cloning a voice used to cost US$10,000 in server and AI-training costs up until late last year, but now startups offer it for a few dollars.”
“For example, if you are a public figure, it's very easy to create deepfakes of you, because they are created by a large language model (LLM). Cyber attackers can feed your recordings into the model so it can create a fake recording in which you speak languages, such as Italian or Polish, that you do not actually speak.”
Jonas Walker
Vigilance remains the name of the game
As with all things related to cybersecurity, vigilance is the best practice for most organisations. Walker acknowledges that deepfakes are not new, and that enterprises are (finally) catching up, because threat actors have realised they can use deepfake technology to scam victims for monetary returns.
He cited the Singapore Police Force (SPF) as reporting SG$330 million lost to scammers in the first six months of 2023. “Attackers are now using deepfakes to trick employees by pretending to be their manager or their boss, especially when that person has video recordings of themselves on YouTube,” added Walker.
The victims aren’t necessarily those who are unfamiliar with technology. The SPF reported that more than 50% of victims were between 20 and 40 years old.
“These are digital natives, Gen Z, who use social media and are tech-savvy. Deepfakes create a sense of urgency for the call to action based on emotions. If someone sends you an email demanding a payment that needs to be made right now, that should raise red flags,” said Walker.
Weaponising deepfakes is easy
Walker concurs that deepfake technology is easily accessible. “Cyber attackers understand that the more information is publicly available on a specific individual, the easier it is to create a deepfake of them,” he continued.
He observes that attackers do this because all they have to do is use open-source intelligence to find the information available to them. “If I want to attack a certain organisation, I will look for a high-profile individual’s recording on YouTube, download that recording, and upload it to one of these free apps available online. Cyber attackers can then write their script, pretending to be a CEO sending a message to an employee, which may increase the chances of the scam succeeding,” he continued.
What CISOs and CIOs can do
Walker says there are solutions that focus on identifying whether a video is real or a deepfake. But a deepfake usually serves a larger purpose. “For example, someone receives a deepfake video from an attacker who's merely financially motivated. His goal is to make money; he uses that video message to scam someone, rather than the deepfake being an end in itself,” he added.
He suggests people need to look at the whole process and keep top of mind that, with vigilance, these attacks will not be as successful as they seem.
Options for mitigating the threat of deepfakes
Asked if regulations to clamp down on deepfakes are the only way to protect enterprises, Walker acknowledged the role of regulation in mitigating such risks, but cautioned against expecting regulation to be a silver bullet.
He is quick to remind us that threat actors care little for regulation and do not recognise boundaries. “One doesn’t need to be in the same country as one's potential victim. For example, if I live in Jamaica, I can pretty much ignore these regulations,” he posits. “These tools are easily available online for me to use. Regulations will help to reduce the noise if people follow them. But here, we are talking about criminals, who won’t follow regulations.”
A new generation of threats
Increased cyber incidents during the pandemic suggest that cybercriminals are taking on more aggressive postures, including becoming co-ordinated and creating communities on the dark web.
Walker believes that security is paramount wherever software is involved. “If a system is connected online, unpatched, and not properly secured, it will be attacked and hacked quickly,” he warned with certainty. “Thus, security has to be top of mind. CIOs and CISOs have to understand how innovating and creating new technology affects their attack surface, especially for anything online. No matter how new the technology, it still boils down to security,” he continued.
Click on the PodChat player to hear in greater detail Walker’s recommendations for securing enterprises from deepfakes.
- In corporate security lingo, what is a deepfake?
- Is it easy to create a deepfake version of someone?
- We understand deepfakes are used to scam individuals and financial institutions. Are there any new deepfake scams one should be vigilant about?
- Is it foreseeable that someone could authorise the use of his or her deepfake persona for various purposes, such as making campaign calls, recording messages, or reproducing his or her signature?
- What kind of education, in your opinion, can help both digitally savvy and less tech-savvy citizens spot deepfake scams, given their level of sophistication?
- You mentioned the police. Will regulations to clamp down on deepfake apps be the only way to go in the future?
- What’s in store for enterprises in 2024?