FutureCISO

User education key in combating deepfakes as Singapore heads to election polls

By Eileen Yu
April 24, 2025

With deepfakes increasingly difficult to identify and regulations still playing catchup, end-users are urged to assume more responsibility in protecting themselves against online scammers.

More concerted efforts to fight such cybercrime will be critical as images and videos generated with artificial intelligence (AI), particularly deepfakes, grow in scale and sophistication.

This is fuelled by increased online access to the tools and resources needed to produce quality deepfakes, which have seen exponential growth over the last 18 months, iProov CTO Dominic Forrest told FutureCISO.

He noted that a considerable amount of compute power was previously needed to deploy such content, which limited its access and use. Thanks to technological advancements, it is now possible to run high-quality, credible deepfakes on any gaming PC or even a mobile phone.

“You can [digitally] generate a person from scratch with a single image,” said Forrest, who cautioned that the implications of this were significant. Reputable journalists, for instance, can potentially be scammed into interviewing deepfakes of politicians on a live broadcast and unknowingly spread disinformation.

The advancements also meant that deepfakes were no longer easily detectable by the human eye, he said.

At the same time, the pace of change remains rapid, with tools proliferating and becoming easier to use, he added.

He noted a significant increase in native virtual camera attacks, which climbed 2,665% last year, fuelled partly by their infiltration of mainstream app stores. These malicious applications run directly on mobile phones, allowing attackers to bypass the device’s camera and inject pre-recorded footage or deepfakes to spoof biometric authentication systems.

With easier and more cost-effective ways to run deepfakes, such attacks will only continue to grow in volume and quality, he said.

Aaron Bugal, Sophos’ field CTO for Asia-Pacific and Japan, concurred, noting that it is increasingly difficult to detect AI-generated content and deepfakes.

It underscores the need for everyone to be discerning when they consume information, Bugal said in a video interview with FutureCISO.

He noted that while most legitimate platforms, such as news sites, would label and watermark content that had been generated with the help of AI, many others, including social media forums or chat groups, might not adopt such practices. The latter can be riddled with deepfakes and AI-generated images and videos. Users may not want to regard such sites as trusted sources of global news, he suggested.

Employee education vital as deepfakes pass human detection

“It goes back to the users to validate the integrity of messages and, at the very least, if they’re unsure, to engage in conversations or go to trusted sites to validate [the content],” Bugal said.

“It’s no different from [how we manage] spam in our inbox or phishing email. That’s the level of awareness we need to have moving into [the deepfake] environment,” he said.

As it is, tests have revealed that just 0.1% of participants were able to correctly identify deepfake and AI-generated images and videos, according to a February 2025 report by iProov, which assessed 2,000 consumers in the UK and US.

Deepfake videos, in particular, were more difficult to identify than deepfake images, the study found. Respondents were 36% less likely to correctly identify a synthetic video than a deepfake image.

Despite the finding that 99.9% were unable to identify deepfakes, more than 60% of respondents were confident in their ability to detect deepfake content, the study noted.

A further 22% had never heard of deepfakes before sitting through the iProov test.

Consumers’ misplaced sense of security can be a concern when scammers now are able to tap AI, deepfakes, and advanced social engineering tactics to launch attacks that are harder to detect, according to a Trend Micro report released in April 2025.

The security vendor cited its study that revealed 50% of consumers in Singapore expressed confidence in their ability to identify scams by looking for grammar or spelling errors, while 70% believed they were safe from text message scams if they avoided clicking on suspicious links.

“As cybercriminals adopt AI and other advanced technologies, many consumers remain misinformed about the full extent of the risks they face,” Ashley Millar, Trend Micro’s consumer education director, said in the report.

And the use of deepfakes is growing, according to iProov’s 2024 Threat Intelligence Report, which found a 704% spike in face swaps, a type of deepfake.

The iProov deepfake study also noted that social media platforms were deemed “breeding grounds for deepfakes”, with 49% and 47% of respondents viewing Meta and TikTok, respectively, as the most prevalent locations for deepfakes found online.

The findings indicate organisations can no longer rely on human judgment to spot deepfakes and must look to alternative means of authenticating the users of their systems and services, said Edgar Whitley, a digital identity expert and professor at the London School of Economics and Political Science, in the report.

“Just 0.1% of people could accurately identify the deepfakes, underlining how vulnerable both organisations and consumers are to the threat of identity fraud in the age of deepfakes,” noted Andrew Bud, iProov’s founder and CEO. “Criminals are exploiting consumers’ inability to distinguish real from fake imagery, putting our personal information and financial security at risk.”

User education is crucial, Forrest reiterated, adding that employees should be trained to deal with deepfakes in their work environment, similar to how they are guided to spot phishing emails.

Corporate culture and mindset also need to adapt, Bugal said, alongside the right processes and policies. For instance, employees should not fear reprisal or be apprehensive about questioning a video of their CFO requesting a large funds transfer, if they suspect a deepfake was used in the clip.

They also should know to question such high-level instructions if they are sent over text message, since this likely would be in breach of corporate policy, he said.

Consumers, too, should be encouraged to question content they encounter online and decide if they trust the source of such information, Forrest said.

“Education is the first place to start,” he stressed. “If people understand the risks, then they can do something about it.”

This is especially essential for countries such as Singapore, which are prime targets for cybercriminals because they have strong digital economies and have built digital infrastructures trusted by their general population, Forrest said. He cited Singapore’s national authentication system, Singpass, as an example.

He added that the Asian nation, like many others worldwide, has seen growing cases of online scams and scam-related losses.

Singapore last year saw a 10.8% increase in the number of scam and cybercrime cases, with SG$1.1 billion lost to scams, according to the Singapore Police Force. More than SG$182 million was successfully recovered by the country’s Anti-Scam Command unit.

The growing trend is a global one, noted Forrest, with scammers often sharing techniques and exchanging information via online forums.

He described communication networks used by cybercriminals as “superb” and well-functioning.

In contrast, it is rare to see C-level executives of different organisations talking to each other and sharing intel on cybersecurity incidents and tools, he noted. “So the criminals have a head start here,” he said.

Laws need to catch up

The advanced state of deepfakes further underscores the need for laws to change and governments to step forward, according to Bugal.

Singapore, for instance, is among those that have explicitly said the use of deepfakes during elections is punishable by law, he said.

Singapore last October announced new measures to safeguard against the use of digitally manipulated content or deepfakes, including AI-generated audio, images, and videos, during elections.

The added rules under the Online Election Advertising Act aim to address deepfakes that misrepresent candidates and to protect the integrity of the country’s electoral process, Singapore’s Ministry of Digital Development and Information (MDDI) said at the time.

The legislation prohibits the publication of digitally generated or manipulated online election advertising that realistically depicts candidates saying or doing something they did not say or do. Such content covers both AI and non-AI techniques, such as editing via dubbing or splicing.

There is a noticeable increase in deepfake incidents in countries where elections have taken place or are planned, said Minister for Digital Development and Information Josephine Teo in her parliamentary comments last October on the new legislative measures.

Citing research from Sumsub, Teo said countries such as Indonesia and South Korea saw 15 and 16 times more deepfake incidents, respectively, during their recent elections. “AI-generated misinformation can seriously threaten our democratic foundations and demands an equally serious response,” she said.

Under the new rules, applicable for the first time as Singapore heads to the polls on May 3, the publication of offending content -- whether favourable or unfavourable to any candidate -- is prohibited during the election period. This includes the sharing or reposting of such content.

Teo added that the Singapore government will use detection tools to assess if content has been generated or manipulated digitally.

Corrective directions can be issued to relevant people or organisations, including social media platforms, to remove or disable access in Singapore to prohibited online election advertising. Failure to comply with such instructions is deemed an offence, which carries fines of up to SG$1 million for a social media services provider.

Such legislative efforts set a precedent that the use of deepfakes is not acceptable and will act as a deterrent, at least for the general public, Bugal said.

“It gets people to rethink and content creators to think twice if they want to use the tool to do something,” he said.

Regulatory reform should extend beyond elections so the rules apply to general content and users are not defrauded on any given day, he added.

Organisations, including banks and major businesses, also would want to mitigate such risks and keep customers and their assets safe, he said.

Asked how defence systems should be enhanced to address the rise of more sophisticated deepfakes, Forrest highlighted the need to ascertain the “liveness” of a person and establish a multilayered defence in depth.

For example, iProov uses liveness detection to analyse various signals from the imagery and device to determine if the interaction is with a real person and identify signs of spoofs. The security vendor specialises in biometric authentication.
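
As a rough illustration of that layered approach (this is not iProov’s actual method — the signal names and thresholds below are invented), scores from independent checks can be combined so that a spoof must defeat every layer at once rather than any single one:

```python
# Hypothetical "defence in depth" sketch: each independent layer scores
# the interaction, and the session is accepted only if every layer
# clears its threshold. Layer names and thresholds are illustrative.

def combine_liveness_signals(signals: dict[str, float],
                             threshold: float = 0.8) -> bool:
    """Return True only if every required layer's score clears the threshold."""
    layers = ("image_texture", "device_integrity", "challenge_response")
    return all(signals.get(layer, 0.0) >= threshold for layer in layers)

# A genuine interaction scores well on every layer...
genuine = {"image_texture": 0.95, "device_integrity": 0.9, "challenge_response": 0.88}
# ...while a virtual-camera injection may pass image checks but fail device ones.
injected = {"image_texture": 0.92, "device_integrity": 0.1, "challenge_response": 0.85}

print(combine_liveness_signals(genuine))   # True
print(combine_liveness_signals(injected))  # False
```

The point of the layering is visible in the second case: a convincing deepfake image is not enough if the device-integrity layer flags the injected camera feed.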

It regularly updates its products to ensure they keep up with changes in the tools and techniques used by scammers and remain accurate in detecting potential deepfakes, Forrest said. This includes testing to ensure they work effectively regardless of users’ physical traits, such as skin colour or face shape, which may vary across regions such as Asia-Pacific.

iProov works with several governments in the region on their respective national identification and authentication platforms, including Singapore, Sarawak in Malaysia, and Australia.

Both Sophos and iProov are also among the cybersecurity vendors incorporating AI and leveraging machine learning across their operations and products. These encompass, among others, the use of AI models to analyse malicious code and network traffic, as well as to detect anomalies.

With the emergence of AI agents, Bugal also is keeping a close eye on new forms of attacks that may surface.

He noted that agentic AI creates pockets of AI assigned to specific tasks, each passing its output on to the next agent in the workflow. This potentially opens up new areas for cybercriminals to target, he said.
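
The handoff pattern Bugal describes can be sketched in a few lines (a toy example; the agent functions here are invented stand-ins, not any real framework). Each handoff between agents is a boundary an attacker could try to tamper with, which is why each stage validates what it receives:

```python
# Toy agentic workflow: each "agent" is a function that performs one
# task and hands its output to the next agent in line.

def summarise(text: str) -> str:
    return text[:50]  # stand-in for a summarisation agent

def draft_reply(summary: str) -> str:
    return f"Re: {summary}"  # stand-in for a drafting agent

def run_pipeline(text: str) -> str:
    for agent in (summarise, draft_reply):
        # Validate every handoff: a tampered or injected payload
        # between agents should halt the workflow, not propagate.
        if not isinstance(text, str) or not text:
            raise ValueError("tampered or empty handoff")
        text = agent(text)
    return text

print(run_pipeline("Quarterly results look strong"))
# prints "Re: Quarterly results look strong"
```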

Tags: Artificial Intelligence, cybersecurity, deepfake, generative AI, risk management
Eileen Yu

Eileen is currently an independent tech journalist and content specialist, providing analysis of key market developments across the Asian region and helping enterprises craft their communications plan. She also moderates panel discussions and roundtables, as well as provides media training to help senior executives better manage press interviews. Eileen has worked with corporate clients in markets, such as cybersecurity and enterprise software, and non-tech including financial services and logistics. She also has planned high-level panel and roundtable discussions and has been an invited speaker on online media. On CXOCIETY, she contributes articles across the four CXOCIETY brands -- FutureCIO, FutureCISO, FutureIoT, and FutureCFO -- covering key industry developments impacting the Asia-Pacific region, including cybersecurity, AI, data management, governance, workforce modernisation, and supply chain. Eileen has more than 25 years of industry experience at established media platforms, including ZDNET in Singapore, where she led the tech site's Asian editorial team and blogger network. Before her stint at ZDNET, she was assistant editor at Computer Times for Singapore Press Holdings and deputy editor of Computerworld Singapore. With her extensive industry experience, Eileen has navigated discussions on key trending topics including cybersecurity, artificial intelligence, quantum computing, edge/cloud computing, and regulatory policies. Eileen trained under the Journalism department at The University of Queensland, Australia. There, she earned a Bachelor of Arts (Honours) degree in Journalism, with a thesis titled, To Censor or Not: The Great Singapore Dilemma.

Copyright © 2024 Cxociety Pte Ltd | Designed by Pixl