FutureCISO

Mitigating AI-powered fraud in APAC

By Melinda Baylon
March 19, 2024

A study by Sumsub revealed that the Asia Pacific (APAC) region experienced a 1530% surge in deepfake cases from 2022 to 2023, making it the region with the second-highest number of deepfake cases in the world.

Meanwhile, Hong Kong ranked among the top five markets globally most vulnerable to identity fraud. As artificial intelligence becomes more accessible, deepfake fraudsters are exploiting the technology to attack vulnerabilities in identity verification.


Andrew Novoselsky, chief product officer at Sumsub, explains the types of artificial intelligence (AI)-powered fraud emerging in APAC, the top factors contributing to the surge, the risks and dangers to organisations, and practical security measures organisations can implement to mitigate them.

AI-powered fraud 

Novoselsky says the most common types of AI-powered fraud in the region are deepfake fraud, identity fraud, and account takeover fraud. Among them, deepfake fraud has become the most concerning due to its alarming rise in incidents across APAC in recent years.

The Philippines, Vietnam, Japan, Sri Lanka, and Australia emerged as the top five countries for deepfake growth, according to a Sumsub report. Vietnam (25.3%) and Japan (23.4%) accounted for the majority of APAC deepfake fraud in 2023.

Novoselsky says, “The crypto sector (87.7%) stands out as the absolute leader in deepfake cases, followed by fintech (7.7%). The sector’s digitalised nature, potential for significant financial gain, and ongoing regulatory challenges create vulnerabilities that fraudsters exploit.”

The Sumsub executive says that malicious players can use AI voice cloning technology to create realistic deepfake videos or audio recordings for malicious activities, such as “opening fraudulent accounts, sending phishing emails, applying for loans, or conducting fraudulent transactions”. 

“They can even pass verification checks, making it easier to bypass fraud prevention measures. The recent deepfake video conference scam involving the Hong Kong branch of a multinational firm, which lost US$25.6 million, reflects the evolving sophistication of deepfake scams,” he says.

Factors contributing to AI-powered fraud

Novoselsky attributes the skyrocketing surge of deepfakes in APAC to the advancement and increasing accessibility of AI technologies.

“In particular, we observed that audio channels are becoming a popular platform for deepfakes. Fraudsters can even use AI-powered algorithms to learn from existing fraud prevention systems, enabling them to develop more advanced evasion techniques,” he says. 


Novoselsky also observed that increasing digital financial transactions in the emerging APAC market make it a target for deepfakes. “ASEAN is one of the fastest-growing regions, with average real GDP growth forecast to reach 4.8% this year and it is expected to be the fourth-largest economy in the world by 2030. Since a high volume of instant cross-border transactions takes place in the region, especially in Hong Kong and Singapore, two international financial hubs, deepfake scammers can leverage the complexity and volume of financial dealings to carry out fraudulent activities, such as fake invoices, false investment advice, or payment diversion schemes.”

He also says that inadequate or outdated regulations contribute to the surge of deepfake incidents in APAC.

“Like many regions, APAC is still developing comprehensive regulations specifically targeting AI-powered fraud, and thus fraudsters can exploit the regulatory gap to carry out illicit activities. What is more, verifying identities across different jurisdictions and complying with varying regulations further complicates the verification process and foments deepfake fraudulent activities.”


He says China has pioneered deepfake regulation with its “Regulations on the Administration of Deep Synthesis of Internet Information Services”, in force since August 2023, and hopes more APAC countries will enact specific laws targeting AI-powered fraud.

Risks and dangers 

An organisation vulnerable to AI-powered fraud faces a range of risks, most notably significant financial losses.

Scammers can use deepfake technology to impersonate parties the victim trusts and carry out account takeovers, identity theft, or unauthorised transactions, leading to direct financial losses.


Aside from financial losses, Novoselsky posits that organisations falling victim to AI-powered fraud can suffer reputational damage and an erosion of customer trust.

“If customers perceive an organisation as unable to protect their data or prevent fraudulent activities, they may switch to using their competitors’ services, resulting in a loss of customers and market share,” he shares. 

Finally, organisations dealing with AI-powered fraud face operational disruptions such as system downtime, loss of productivity, and interrupted business processes.

Policies CIOs can initiate

Novoselsky agrees that the rise of AI is double-edged. “While it can be wielded for malicious purposes, it also stands as an ally for anti-fraud solution providers. Chief information officers (CIOs) are advised to establish rigorous validation processes for AI models used in fraud detection to ensure their accuracy, reliability, and resistance to manipulation,” he says. 


He also reminds CIOs to establish a dedicated team to monitor AI-powered fraud and to regularly update fraud detection rules and algorithms. “CIOs and their teams from different organisations can also foster collaboration and information sharing within the industry to address emerging AI-powered fraud threats,” he shares.

He also underscores the importance of employee training to enhance awareness of AI-powered fraud. Novoselsky suggests CIOs initiate training programs covering the safe use of generative AI and AI-powered fraud risks such as social engineering tactics, synthetic identities, and deepfake scams.

Practical security measures 

“To stay ahead of the evolving AI-powered fraud landscape, organisations are suggested to implement a multistage verification approach and embed deepfake detection solutions throughout all Know Your Customer (KYC) processes,” Novoselsky adds. 

Before onboarding, he adds, organisations can conduct pre-checks using predictive and specialised deepfake detection tools to identify potential malicious actors. Novoselsky also encourages organisations to ensure such tools are always up to date.

“During the KYC onboarding process, organisations are advised to incorporate deepfake detection into liveness checks. This ensures that the individuals interacting with the organisation's systems or involved in financial transactions are legitimate. Use advanced technologies, such as biometric verification and liveness detection, to facilitate document-free verification or full-cycle verification solutions, enabling faster and easier identity confirmation and mitigating the risk of fraudulent activities,” he adds. 
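The multistage approach Novoselsky describes, pre-onboarding risk checks followed by document verification and liveness checks with embedded deepfake detection, could be sketched as a simple fail-fast pipeline. This is purely illustrative: the `Applicant` fields, stage order, and thresholds are assumptions for the sketch, not Sumsub's implementation.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    user_id: str
    risk_score: float        # from a hypothetical pre-onboarding predictive check
    document_valid: bool     # result of document verification
    liveness_passed: bool    # result of a liveness check
    deepfake_score: float    # 0.0 (likely genuine) .. 1.0 (likely synthetic)

def verify(applicant: Applicant,
           risk_threshold: float = 0.7,
           deepfake_threshold: float = 0.5) -> str:
    """Run the verification stages in order; stop at the first red flag."""
    # Stage 1: pre-onboarding risk screen
    if applicant.risk_score > risk_threshold:
        return "rejected: pre-check risk"
    # Stage 2: document verification
    if not applicant.document_valid:
        return "rejected: document check"
    # Stage 3: liveness check with embedded deepfake detection
    if not applicant.liveness_passed or applicant.deepfake_score > deepfake_threshold:
        return "rejected: liveness/deepfake check"
    return "approved"

print(verify(Applicant("u1", 0.2, True, True, 0.1)))  # approved
print(verify(Applicant("u2", 0.2, True, True, 0.9)))  # rejected: liveness/deepfake check
```

The fail-fast ordering mirrors the article's sequence: cheap predictive pre-checks filter obvious risk before the more expensive liveness and deepfake analysis runs.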

He says safeguarding against AI-powered fraud requires continuous transaction monitoring: early detection of suspicious activities, real-time behavioural analysis, and risk assessment.

“Ongoing monitoring for anomalies related to deepfake scams, such as sudden changes in communication patterns or financial requests, can be flagged and investigated promptly by utilising professional and advanced verification tools,” he says. 
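Flagging "sudden changes in financial requests" of the kind Novoselsky mentions can be as simple as comparing a new transaction against the account's historical pattern. The z-score rule and threshold below are assumptions for a minimal sketch, not a description of Sumsub's tooling.

```python
import statistics

def is_anomalous(history: list[float], new_amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates sharply from the account's history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No variation in history: anything different is suspicious.
        return new_amount != mean
    z = abs(new_amount - mean) / stdev
    return z > z_threshold

history = [120.0, 95.0, 110.0, 130.0, 105.0]
print(is_anomalous(history, 115.0))    # False: typical amount
print(is_anomalous(history, 5000.0))   # True: flag for investigation
```

In practice such a rule would be one signal among many (device, location, communication patterns) feeding a risk score, rather than a standalone decision.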

Mitigating the risks associated with AI-powered fraud requires a team effort across the organisation, from the top down.

“By adopting these anti-fraud measures, organisations can protect themselves and mitigate risks caused by AI-powered fraud,” Novoselsky concludes.

Tags: AI-powered fraud, Artificial Intelligence, Customer experience, cybersecurity, Sumsub
Melinda Baylon

Melinda Baylon joins Cxociety as editor for FutureCIO and FutureIoT. As editor, she is the main editorial contact for communications professionals looking to engage with these media titles.

Melinda has a decade-long career in the media industry and served as a TV reporter for ABS-CBN and IBC 13. She also worked as a researcher for GMA-7 and a news reader for Far East Broadcasting Company Philippines.

Prior to working for Cxociety, she worked for a local government unit as a public information officer. She now ventures into the world of finance and technology writing while pursuing her passions in poetry, public speaking and content creation. 

Based in the Philippines, she can be reached at [email protected]

Copyright © 2024 Cxociety Pte Ltd | Designed by Pixl
