The impact of deepfake fraud on businesses continues to grow. In 2023, two in five enterprises experienced deepfake-related fraud, with average losses close to half a million dollars. One might assume that businesses are less susceptible, given the safeguards already in place. Yet despite multiple, complex layers of cybersecurity measures and compliance protocols, companies continue to fall prey to sophisticated social engineering. Last year, a worker at a Hong Kong-based company was duped into wiring US$25 million to fraudsters. The orders appeared to come from the CFO during a live video call, seemingly attended by colleagues, but every participant was a deepfake.
In the Asia-Pacific region, deepfake fraud surged by 194% year over year. Experts now predict that by 2027, fraud enabled by generative AI (GenAI) will cost companies US$40 billion in the US alone. The past year has been a turning point for businesses and fraudsters alike, with developments in GenAI moving at breakneck speed: fuelling innovation on one side while making deception more convincing on the other. AI-infused attacks are scaling fast, hitting financial services, political campaigns, and even healthcare. Fraudsters are impersonating CEOs, customer service reps, celebrities, and even government officials with unsettling realism.
For CIOs and CX leaders, this isn’t just an IT issue. They must rethink GenAI’s role inside and outside the corporate walls, where an executive’s voice, facial expressions, and even writing style can be “cloned” with extreme realism. We’ll see more businesses and cybercriminals engaging in an arms race over who can wield these innovations more effectively.
From threats to trust
AI in cybersecurity is hardly new and has long been on the CIO and CISO agenda. As early as 2020, 86% of cybersecurity decision-makers sounded the alarm over AI-infused attacks. Businesses have relied on layered security tools, refined policies, and employee training to combat threats, but AI is now changing the game for defenders, too.
A recent report, for instance, found that 82% of cybersecurity professionals believe AI will improve their job efficiency, reducing false alarms and allowing teams to focus on more urgent threats. Instead of chasing alerts, I see more of them managing AI systems, refining their detection models, and building adaptive defences.
AI in cybersecurity is increasingly shaping customer trust, too. It’s making authentication seamless, verifying identities without adding friction. It’s also strengthening threat detection, flagging anomalies and stopping fraudulent activity before it reaches customers. When security works smoothly in the background, customers benefit without ever noticing; when it fails, they notice that, too.
This is also why companies can’t afford to treat privacy as a checkbox.
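To make the threat-detection point concrete, here is a minimal, hypothetical sketch of the kind of background anomaly scoring described above, using scikit-learn’s IsolationForest. The event features, training data, and review threshold are illustrative assumptions, not a production design.

```python
# Minimal sketch: unsupervised anomaly scoring for customer events.
# Feature names, sample data, and the review threshold are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-event features: amount, hour of day, days since
# last login, and count of devices seen on the account.
historical_events = np.array([
    [120.0, 14, 1, 2],
    [80.0, 10, 2, 2],
    [95.0, 16, 1, 1],
    [110.0, 9, 3, 2],
    [60.0, 13, 1, 1],
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(historical_events)  # learn what "normal" looks like

new_event = np.array([[25_000.0, 3, 0, 6]])  # large, off-hours, new devices
score = model.score_samples(new_event)[0]    # lower = more anomalous

REVIEW_THRESHOLD = -0.55  # assumed cut-off, tuned on historical data
if score < REVIEW_THRESHOLD:
    print(f"score={score:.2f}: route to step-up verification")
else:
    print(f"score={score:.2f}: allow without friction")
```

The design choice worth noting is the routing decision: a suspicious score triggers step-up verification rather than a hard block, preserving the frictionless experience for the vast majority of customers.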
It is important to note that while AI toughens defences, it can also be used to sharpen the very threats it’s meant to stop. On the defensive side, it has the potential to predict attacks, detect threats proactively, and automate security at scale. For leaders, the question isn’t whether AI will shape cybersecurity; it already is. The real challenge: Who will wield it more effectively?
Turning AI experiments into measurable business wins
Just as fraudsters are making deepfakes more convincing, businesses must wield AI with even greater strategy, precision, and intent. The same disciplined, holistic approach that strengthens cybersecurity is also what separates digital, AI-driven transformation from a patchwork of disconnected experiments.
Last year, GenAI projects were everywhere: proofs of concept, pilot programs, and AI tools jammed into workflows, all in the rush to see results. Now, the era of experimentation is over. This year, AI’s worth won’t be judged by the number of projects launched, but by the business value they create. Moving from hype to measurable impact means aligning AI with business objectives, operational realities, and customer expectations.
More often than not, I’ve seen companies fall into the trap of treating AI as an add-on rather than a core business enabler. Without a clear strategy, even the most sophisticated solutions can end up as wasted investments.
Companies need a clear road map, paved with a solid foundation of data, to turn AI’s potential into real results. That’s what a global travel company did. With thoughtful implementation, seamless integration, and continuous refinement, they boosted productivity by 39%, increased daily case closures by 33%, and helped their CX frontliners exceed weekly resolution targets by 17.5%.
These aren’t isolated wins. They are proof that AI can deliver results when it’s woven seamlessly into the fabric of the organisation. AI won’t transform your business just because you have it. The difference between success and failure is not the technology; it’s how well your people wield it.
The answer to deepfake scams: education, process, and technology
The rise of deepfake scams coincides with many businesses’ struggles to implement AI strategies effectively. This is largely because the same gaps that hinder successful AI adoption, such as unclear goals, poor data quality, and misaligned priorities, also create vulnerabilities that malicious actors can exploit.
A staggering 80% failure rate for AI projects highlights fundamental issues in AI implementation. When organisations fail to integrate AI effectively, the result is often incomplete safeguards and inefficient systems, leaving them ill-equipped to counter evolving threats like deepfakes. These challenges underscore the need for a more comprehensive approach to AI, one that prioritises not just technological capabilities but also strategic alignment, process rigour, and employee readiness.
To combat deepfake threats and ensure successful AI adoption, businesses must focus on three key areas: education, process, and technology. Comprehensive training programs should help employees recognise deepfakes and follow proper response protocols. Organisations need robust verification procedures and decision-making processes, so that no single request, however convincing, can authorise a high-value action on its own (a sketch of such a rule follows below). And investing in AI and machine learning tools for anomaly detection, guided by a clear AI strategy aligned with business objectives, can significantly enhance an organisation’s resilience against deepfake scams and improve overall AI implementation success.
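On the process side, the defence against a Hong Kong-style scam is less about models and more about policy. The sketch below is a hypothetical illustration of such a verification rule: the channel names, threshold, and function are assumptions for the example, not a prescribed control framework.

```python
# Minimal sketch of a verification policy for payment requests, assuming
# a rule that any request above a threshold, or arriving outside a trusted
# channel, requires out-of-band confirmation. Names and limits are illustrative.
from dataclasses import dataclass

OUT_OF_BAND_LIMIT = 10_000.0  # assumed threshold requiring a callback
TRUSTED_CHANNELS = {"signed_email", "approved_workflow"}

@dataclass
class PaymentRequest:
    amount: float
    channel: str          # e.g. "video_call", "signed_email"
    requester: str
    confirmed_out_of_band: bool = False  # callback on a known-good number

def verify(request: PaymentRequest) -> str:
    # Video and voice alone are never sufficient: both can be deepfaked.
    if request.channel not in TRUSTED_CHANNELS and not request.confirmed_out_of_band:
        return "HOLD: confirm via a second, independent channel"
    if request.amount > OUT_OF_BAND_LIMIT and not request.confirmed_out_of_band:
        return "HOLD: amount exceeds limit, out-of-band confirmation required"
    return "PROCEED"

# A US$25 million order placed on a live video call stalls at the first
# check, however convincing the participants appear.
print(verify(PaymentRequest(25_000_000.0, "video_call", "cfo")))
```

The principle matters more than the code: no single channel, however realistic, should be able to authorise a high-value transfer on its own.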