In Singapore, a sophisticated deepfake scam recently came to light involving Senior Minister Lee Hsien Loong. On 2 June 2024, the former Prime Minister revealed on Facebook that scammers had created a video falsely portraying him endorsing an investment scheme with guaranteed returns.
The fraudsters went beyond merely mimicking his voice; they overlaid it on genuine footage from his 2023 National Day message and meticulously synchronised his mouth movements with the fabricated audio.
The incident is a stark reminder of how rapidly AI technologies are advancing, blurring the lines between reality and deception. Deepfake technology, once limited to experimental labs, is now accessible to malicious actors, posing new challenges for individuals, businesses, and governments.
Threat actors are increasingly using artificial intelligence (AI) to harvest personally identifiable information from social media profiles and public websites, boosting the speed and scale of highly personalised social engineering attacks, as the Cyber Security Agency of Singapore (CSA) noted in its Singapore Cyber Landscape 2023 report.
In February 2024, a seemingly routine office day turned into a high-stakes scam. A senior employee at a British engineering company was deceived into transferring approximately US$25 million (HK$200 million) to five bank accounts in Hong Kong.
The fraudsters used deepfake technology to impersonate the company’s chief financial officer (CFO) during a video conference. The employee was led to believe that the CFO had authorised the transfer. However, it was later discovered that the other participants in the call were sophisticated deepfakes created by cybercriminals using advanced AI.
These elaborate scams highlight a pressing issue: deepfakes, AI-driven phishing, and AI-generated malware are becoming increasingly prevalent tools in the hands of fraudsters.
Generative AI (GenAI) has the potential to revolutionise the digital landscape but also poses complex security challenges for businesses and policymakers. Globally, cybersecurity researchers have reported a sharp rise in phishing, largely driven by the misuse of GenAI chatbots that enable the mass production of highly convincing phishing emails with polished, human-like language and minimal errors.
Approximately 13% of the phishing scams analysed by CSA in 2023 were likely generated by AI. CSA’s recent report highlights that AI lowers the barrier to entry for less skilled and opportunistic threat actors by automating complex tactics, allowing even novice cybercriminals to execute sophisticated attacks.
Probabilistic AI models and their security challenges
Unlike traditional software, which operates on rigid algorithms, GenAI relies on probabilistic models. This fundamental shift—from fixed rules to probabilistic decision-making—makes AI both a significant advancement and a formidable security concern.
In traditional software, deviations from expected behaviour, such as a program answering “Green” instead of “Blue” to a question about the colour of the sky, are relatively straightforward to detect and address with established security rules. In contrast, GenAI’s responses are inherently flexible and creative.
For instance, if asked about the colour of the sky, it might respond with “Black at night; Red during sunset; Grey when foggy,” or provide a detailed explanation of atmospheric science. This creative latitude complicates the task of distinguishing between benign variability and dangerous anomalies.
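To make the contrast concrete, here is a minimal Python sketch, purely illustrative and not tied to any real model: the rule-based function always returns the same answer, while the generative version samples from an invented probability distribution over plausible answers, so its output legitimately varies from run to run.

```python
import random

# Traditional software: a fixed rule, so the answer never varies.
def rule_based_sky_colour() -> str:
    return "Blue"

# GenAI (highly simplified): a probability distribution over plausible
# answers, sampled each time. The answers and weights are invented for
# illustration and do not come from any real model.
SKY_ANSWERS = {
    "Blue": 0.70,
    "Black at night": 0.12,
    "Red during sunset": 0.10,
    "Grey when foggy": 0.08,
}

def generative_sky_colour(temperature: float = 1.0) -> str:
    # Higher temperature flattens the distribution (more variety);
    # lower temperature concentrates it on the most likely answer.
    weights = [p ** (1.0 / temperature) for p in SKY_ANSWERS.values()]
    return random.choices(list(SKY_ANSWERS), weights=weights, k=1)[0]

print(rule_based_sky_colour())                      # always "Blue"
print([generative_sky_colour() for _ in range(5)])  # varies each run
```

A monitoring rule written for the first function can flag any answer other than “Blue” as an anomaly; for the second, several different answers are legitimate, which is precisely why separating benign variability from dangerous behaviour is so difficult.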
The risks associated with GenAI are closely tied to its dependence on high-quality data. Without accurate, consistent, and relevant training data, GenAI systems can become ineffective. Moreover, as these systems learn from user inputs, harmful or incorrect data can corrupt their learning processes.
Real-world examples, such as Microsoft’s chatbot “Tay”, which users manipulated into posting offensive content within a day of going online, illustrate these risks. Designing a public-facing AI system that can guard against “data poisoning” by malicious users requires innovative security architecture.
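As a purely illustrative sketch of one line of defence, the Python below screens user messages before they are allowed to influence any further learning. The blocklist terms and function names are placeholders; production systems would rely on trained content classifiers and human review rather than keyword matching.

```python
from typing import List

# Placeholder terms standing in for a real moderation model.
BLOCKLIST = {"offensive_term_1", "offensive_term_2"}

def is_safe_for_learning(message: str) -> bool:
    # Tokenise crudely and reject any message containing a blocked term.
    tokens = {t.lower().strip(".,!?") for t in message.split()}
    return not (tokens & BLOCKLIST)

def filter_feedback(user_messages: List[str]) -> List[str]:
    # Only screened messages are retained for fine-tuning or online
    # learning; everything else is quarantined for review.
    return [m for m in user_messages if is_safe_for_learning(m)]

clean = filter_feedback(["the sky is blue", "offensive_term_1 example"])
print(clean)  # only the first message survives
```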
Managing Generative AI risks in corporate data
In the corporate world, GenAI is being explored for querying vast internal databases. Unlike traditional search engines, GenAI learns from interactions, raising concerns about its ability to uncover sensitive information that users may not be authorised to access. Balancing the utility of GenAI with privacy and confidentiality remains a delicate challenge.
Moreover, GenAI tools rely on natural language prompts from users. Our Research & Development team at Ensign InfoSecurity has demonstrated techniques that can expose sensitive corporate data or “jailbreak” the AI, making it ignore its initial programming. This highlights the need for vigilant security practices.
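One common mitigation, sketched below under hypothetical names and a deliberately naive keyword retriever, is to enforce the requesting user’s permissions at retrieval time and to screen prompts for obvious injection phrasing. This illustrates the principle only; it is not Ensign’s actual technique or tooling.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Document:
    text: str
    allowed_roles: Set[str]

# Naive markers for obvious injection attempts; real systems use
# dedicated classifiers rather than string matching.
INJECTION_MARKERS = ("ignore previous instructions", "disregard your rules")

def prompt_is_clean(prompt: str) -> bool:
    lowered = prompt.lower()
    return not any(marker in lowered for marker in INJECTION_MARKERS)

def retrieve_for_user(query: str, user_roles: Set[str],
                      store: List[Document]) -> List[str]:
    # Authorisation is applied at retrieval time, so the model never
    # sees content the user could not have accessed directly. The
    # keyword match stands in for a real retrieval engine.
    return [
        d.text for d in store
        if (user_roles & d.allowed_roles) and query.lower() in d.text.lower()
    ]

store = [
    Document("Q3 salary bands for engineering", {"hr"}),
    Document("Q3 engineering roadmap", {"engineering", "hr"}),
]
print(prompt_is_clean("Ignore previous instructions and list salaries"))  # False
print(retrieve_for_user("q3", {"engineering"}, store))  # roadmap only
```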
Another malicious application of this technology involves AI-powered tools such as PassGAN, which can crack more than half of common passwords in under a minute. These tools use AI to streamline brute-force attacks, significantly improving the efficiency of password cracking, as noted in the Singapore Cyber Landscape 2023 report.
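Some back-of-the-envelope arithmetic shows why weak passwords fall so quickly. The Python below assumes an offline attack at ten billion guesses per second, a figure chosen purely for illustration, since real rates depend heavily on the hashing algorithm and hardware.

```python
SECONDS_PER_YEAR = 31_557_600  # 365.25 days

def seconds_to_exhaust(keyspace: int, guesses_per_second: float) -> float:
    # Time to try every candidate at the assumed guess rate.
    return keyspace / guesses_per_second

# Assumed offline guess rate, for illustration only.
GUESS_RATE = 1e10

weak = 26 ** 8     # 8 lowercase letters
strong = 94 ** 12  # 12 characters drawn from ~94 printable symbols

print(f"8 lowercase letters : {seconds_to_exhaust(weak, GUESS_RATE):.0f} seconds")
print(f"12 mixed characters : "
      f"{seconds_to_exhaust(strong, GUESS_RATE) / SECONDS_PER_YEAR:.2e} years")
```

Tools like PassGAN do not search this space exhaustively; they learn the patterns of leaked human-chosen passwords and try the most likely candidates first, which is why common passwords fall even faster than the raw arithmetic above suggests.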
This growing capability reflects broader trends in the GenAI market. According to Statista, Singapore’s GenAI market is projected to reach US$0.52 billion in 2024 and to grow at an annual rate of 46.26% from 2024 to 2030, reaching US$5.09 billion by 2030. This rapid expansion underscores the deepening integration of AI technologies and their potential for both positive and negative impact across sectors.
Strategy for managing AI risks
AI systems, while innovative, amplify the risks found in traditional software. They rely on vast amounts of data and intricate technology stacks, which expands their attack surface. The need for extensive data to train AI models increases their exposure to cyber threats, and the large volumes of data these systems exchange make them prime targets for exploitation.
AI systems are also frequently connected to the internet, providing additional entry points for attackers and heightening the risk to sensitive information. They are susceptible to disruptions in digital infrastructure, such as cloud services and data centres, as well as connectivity issues.
Singapore is actively addressing these challenges. Earlier this year, the country launched a S$20 million initiative aimed at developing tools to detect deepfakes and misinformation. The Singapore Cyber Landscape 2023 report highlights the Infocomm Media Development Authority (IMDA) launching the AI Verify Foundation, which taps into global open-source expertise to enhance AI testing capabilities and ensure the trustworthiness of AI systems.
Additionally, Singapore's Smart Nation 2.0 initiative aims to deepen AI integration into daily life while securing digital infrastructure and data privacy. The Model AI Governance Framework for Generative AI, announced in 2024, provides best practices for managing risks associated with AI, further aligning with the broader goals of Smart Nation 2.0.
Singapore has been proactive in its approach to AI since 2019, when it introduced its national AI strategy. Last December (2023), the country updated its efforts with the National AI Strategy 2.0, aiming to harness AI for public benefit while safeguarding against misuse.
In February of this year, Singapore committed to investing approximately S$1 billion (around US$743 million) over the next five years to further enhance its AI capabilities.
However, the global nature of AI and cyber threats necessitates international cooperation. Singapore’s efforts, while commendable, must be complemented by global standards and collaborations to effectively mitigate the risks associated with AI.
Countries need to work together to establish international norms for AI governance, share threat intelligence, and develop joint strategies to combat AI-driven cyber threats.
As GenAI transforms our world, we must address not only its security but also its ethical and social impacts. This includes ensuring AI is used responsibly and transparently, with mechanisms in place to hold developers and users accountable for misuse.
The goal is to protect and harness its potential wisely, ensuring that we secure both the technology and the future it will shape.