As we enter a new era defined by rapid advancements in generative AI, the landscape of cybersecurity is evolving at an unprecedented pace. These powerful tools, capable of creating content and automating tasks, present both opportunities and significant risks for organisations across Asia.
With the potential for misuse in generating deepfakes, phishing attacks, and automated malware, cybersecurity leaders must prioritise robust strategies to safeguard their digital environments. A proactive approach built on continuous monitoring, employee training, and AI-driven security solutions will be crucial. Equally important is collaboration between technology developers and cybersecurity professionals, so that the benefits of generative AI are harnessed responsibly and organisations can thrive while minimising vulnerabilities.
Asked if any one region in the world is more prone to the potential risks associated with the use of generative AI (GenAI), Terry Ray, Data Security CTO and fellow at Imperva, believes all regions are on an equal footing. He reasons that the technology is still new, that users globally are all learning and growing with it, and that all will likely experience some of its negative consequences along the way.
When prodded for best practices for assessing the effectiveness of existing security measures against AI-driven attacks, he opined that for most current uses of AI, such as content creation with ChatGPT, the most important aspect is securing interactions with the AI, especially when handling sensitive organisational data.
For him, a best practice is to treat AI systems as critical applications, where robust security is necessary. He goes on to suggest that traditional security practices must be applied to AI to safeguard sensitive data effectively, acknowledging potential unknown vulnerabilities in AI systems. “Key considerations include understanding the data the AI can access, its vulnerabilities (like SQL injection), and ensuring it only accesses the necessary information for individual users,” he continues.
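To make the least-privilege point concrete, here is a minimal sketch in Python of how an AI assistant’s database tool might be locked down. The `records` table, its columns and the function name are hypothetical; the sketch simply illustrates the two controls Ray names: parameterised queries to keep injection attempts inert, and a per-user filter so the model only retrieves what the requesting user is entitled to see.

```python
import sqlite3

def fetch_records_for_ai(conn: sqlite3.Connection, user_id: int, term: str) -> list[tuple]:
    """Return only rows the requesting user is entitled to see.

    Placeholders (never string concatenation) keep a crafted search
    term inert, and the owner_id filter enforces least-privilege
    access before anything reaches the model.
    """
    cur = conn.execute(
        "SELECT title, body FROM records WHERE owner_id = ? AND title LIKE ?",
        (user_id, f"%{term}%"),
    )
    return cur.fetchall()

# Demo with an in-memory database and a hostile search term.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (owner_id INTEGER, title TEXT, body TEXT)")
conn.execute("INSERT INTO records VALUES (1, 'payroll', 'confidential'), (2, 'memo', 'public')")
print(fetch_records_for_ai(conn, 1, "'; DROP TABLE records; --"))  # [] -- injection stays inert
print(fetch_records_for_ai(conn, 1, "pay"))                        # [('payroll', 'confidential')]
```

The same pattern generalises: whatever backend the AI is wired to, the access decision is made in conventional application code, not left to the model.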
Regulations and compliance requirements in Asia
It is understood that some of the more mature markets in Asia, including Hong Kong and Singapore, are formulating guidelines and frameworks on the use of AI technologies and the protection of personal data as organisations and consumers start adopting these technologies.
In addition to releasing the Advisory Guidelines on the Use of Personal Data for AI in mid-2023, Singapore published in February 2024 a new draft framework specifically targeted at generative AI. The concerns the framework seeks to address include hallucinations, copyright infringement and value alignment, wrote law firm RPC.
In August 2023, Hong Kong’s Office of the Government Chief Information Officer published the Ethical Artificial Intelligence Framework (Ethical AI Framework).
“I believe that upcoming regulations and compliance requirements will primarily focus on intellectual property issues, particularly regarding the misuse of information, such as creating deepfakes. This change will come first, as the potential for IP abuse like deepfakes to impact individuals politically, financially and religiously is immense.” Terry Ray
Key challenges with using AI as a cybersecurity tool
Ray points out that many organisations are navigating a complex relationship with AI. He posits that while they implement strict policies on the use of public AI models and data, they're also exploring how to harness AI for innovation and efficiency.
“This pits caution about widespread AI use against the desire to leverage its benefits,” he opines. “In mid-sized and large enterprises, there is typically a dedicated approval team that encourages employees to discuss their intended AI use. Rather than an outright ban, organisations ask employees to communicate their plans, fostering a collaborative approach to determine the best path forward. This helps balance innovation with oversight.”
Options, options, options
As with all emerging technologies, there is optimism that ways will be found to give organisations the ability to ensure the integrity of the data used to train generative AI models.
Ray says there are quarters that believe all data on the internet is public and should be freely usable. However, the real challenge lies in verifying the accuracy and quality of that data.
“If businesses cannot trust the data generated by AI — leading to inaccuracies or hallucinations — they risk losing its value entirely. I’ve used AI to create images, but I find it often struggles with the details. While it may be ‘good enough’ for casual use, businesses require a much higher level of accuracy than ‘good enough’ to function effectively.” Terry Ray
Role of GenAI in incident response (IR) planning
According to Ray, AI excels at analysing data and can play an important role in incident response, but organisations must first have robust datasets to leverage its capabilities. The catch-22, however, is that many have not collected enough information, and mistakenly think they can simply use AI to speed up their response time.
“Going forward, we can expect all security tools to incorporate AI features. This could streamline processes significantly, but organisations must remember that the right data needs to be in place before they can utilise AI effectively for incident response,” he advises.
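As a hedged illustration of that prerequisite, the sketch below (using hypothetical log records and field names) shows the kind of collection and normalisation work that must happen before an AI assistant, or anyone else, can triage an incident: raw events gathered, structured and condensed into a summary worth reasoning over.

```python
from collections import Counter

# Hypothetical structured authentication events; the point is that this
# data must already be collected and normalised before AI can help.
events = [
    {"ts": "2025-01-15T09:02:11", "user": "j.tan", "event": "login_failure", "src_ip": "203.0.113.7"},
    {"ts": "2025-01-15T09:02:14", "user": "j.tan", "event": "login_failure", "src_ip": "203.0.113.7"},
    {"ts": "2025-01-15T09:02:20", "user": "j.tan", "event": "login_success", "src_ip": "203.0.113.7"},
]

def summarise_for_triage(events: list[dict]) -> str:
    """Condense raw events into the structured summary an AI assistant
    (or a human responder) needs as a starting point for triage."""
    counts = Counter(e["event"] for e in events)
    ips = ", ".join(sorted({e["src_ip"] for e in events}))
    return (f"{events[0]['ts']} to {events[-1]['ts']}: "
            f"{counts['login_failure']} failed logins then "
            f"{counts['login_success']} success for user "
            f"{events[0]['user']} from {ips}")

print(summarise_for_triage(events))
```

An organisation that never captured these events in the first place has nothing for an AI-assisted response tool to accelerate, which is precisely Ray’s catch-22.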
IR case study
Asked what incident response case studies exist that highlight GenAI threats, and what can be learned from them, Ray says the most significant threats from GenAI include its use in enhancing phishing attacks and modifying malware.
“A few years ago, phishing emails were often poorly crafted and easily identifiable. Now, with AI, grammatically correct phishing messages in multiple languages can bypass existing detection tools, accelerating what was once a manual process,” he comments.
“Bad bot traffic is another issue. Anti-bot measures are being forced to evolve today, as AI-driven bots can engage with websites in ways that resemble genuine human interactions, complicating the task of safeguarding online environments,” he continues.
Advice when deploying generative AI in security operations
Ray says the issue ultimately revolves around data security. “Ethically, CIOs and CISOs should have a clear map of what the AI connects to in the backend,” he starts. He also thinks organisations need to remember that AI is essentially just another application and can be vulnerable, particularly since it often includes third-party code with its dependencies.
“Organisations must ensure they have both front-end controls and backend guardrails in place to manage how the AI interacts with sensitive data. Trusting AI to use data ethically is vital, as it will ultimately have access to that information.” Terry Ray
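What a backend guardrail might look like in its simplest form is sketched below, assuming a redaction step that sits between sensitive records and the model. The patterns here are illustrative placeholders; a production deployment would rely on a proper data-classification or DLP service rather than two regular expressions.

```python
import re

# Illustrative patterns for two common classes of sensitive data;
# real deployments would use a dedicated classification/DLP service.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def guardrail_redact(text: str) -> str:
    """Backend guardrail: mask sensitive values before the AI sees them."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# The model is handed the redacted form, never the raw record.
print(guardrail_redact("Contact alice@example.com, card 4111 1111 1111 1111"))
# -> Contact [EMAIL REDACTED], card [CARD REDACTED]
```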
Impact of GenAI on budgets and resource allocation
As the current technology favourite among both business and technology leaders, AI inevitably raises concerns about the threats it introduces. Globally, Gartner expects information security end-user spending to reach US$212 billion in 2025, up 15.1% from 2024’s US$183.9 billion.
The analyst firm says a key factor in this growth is the continued adoption of GenAI tools, which is boosting investment in security software markets.
Ray believes that a savvy CISO or CIO can leverage the need for AI controls to secure more budget. “While AI presents unknowns, the same controls applicable to AI can also be extended to other areas of data and applications. This broader application means that investments in AI-related security can enhance overall cybersecurity measures.
“Ultimately, AI can act as a catalyst for increased funding in cybersecurity, like the financial responses seen after a security breach. Organisations should seize this window of opportunity to secure the necessary resources to strengthen their cybersecurity posture,” he continues.
Predictions in 2025
Ray predicts that 2025 will see a significant increase in the adoption of AI by cybersecurity vendors. Over the past six months to a year, many vendors have been discussing potential AI applications, but by 2025, we’ll have clearer answers on how AI will be used, its effectiveness, and its impact on incident response.
“This year will be pivotal as products with AI capabilities become available, allowing us to assess their true utility. We’ll find out if these AI solutions genuinely enhance efficiency and save costs, or if they become just another underutilised tool.” Terry Ray
Click on the PodChats player and listen to Ray’s perspectives on how to secure the new frontier with generative AI in 2025 and beyond.
- We’ve heard snippets of warnings. Perhaps you can elaborate more on the potential risks associated with the misuse of generative AI among organisations in Asia?
- Can you share one or two best practices for assessing the effectiveness of existing security measures against AI-driven attacks?
- In Asia, how do you see regulations and compliance requirements evolving concerning generative AI and data security?
- For organisations in Asia, regardless of size, what are the key challenges with the rise of AI as a cybersecurity tool to protect against cyberattacks?
- What are available options for organisations to ensure the integrity of data used to train generative AI models?
- Specific to data protection strategies, what role will generative AI play in incident response plans?
- What incident response case studies exist that highlight generative AI threats, and what can we learn from them?
- Speaking of phishing, malware and hyper-targeting, what ethical considerations should CIOs and CISOs account for when deploying generative AI in security operations?
- How will the adoption of generative AI affect organisations’ overall cybersecurity budget and resource allocation?
- Any final thoughts for AI in cybersecurity in 2025?