In 2025 and 2026, Chief Information Security Officers (CISOs) across Asia are at the crossroads of innovation and risk. As generative AI reshapes the cybersecurity landscape, strategies must evolve to address emerging risks such as prompt injection and data poisoning.
By prioritising transparency, compliance, and a security-first culture, CISOs can navigate the complexities of AI integration and enhance organisational resilience.
GenAI’s unique vulnerabilities
Generative AI is no longer a novelty; it has become a business imperative. However, as Reinhart Hansen, director of Technology and field CTO APJ at Thales, explains, “There are many vulnerabilities that cover the end-to-end AI stack. Some of the most prominent ones are all the initial ones that many people are aware of, which comes because of their usage of the prompting applications.”
Prompt injection—where attackers craft inputs to manipulate model outputs or bypass guardrails—has become a sophisticated threat. Hansen elaborates:

“Prompt injection is really using a prompt to manipulate how you want the large language model behind it to respond...using prompt engineering and crafting that prompt specifically to bypass guard rails can sometimes also return results about the training data that’s behind training the AI model that you’re interacting with.” Reinhart Hansen
“That can be very dangerous because obviously it can raise a whole lot of regulatory and privacy concerns,” warns Hansen.
These risks are compounded by classic application security issues, such as those detailed in the OWASP API Top Ten, but now require AI-specific countermeasures.
Agentic AI and data poisoning trends
According to Gartner’s 2025 predictions, the rise of “agentic AI”—autonomous agents powered by large language models—will increase the number of attack surfaces. Data poisoning, where training data is subtly corrupted to influence AI behaviour, is also on the rise. These developments underscore the need for layered adaptive defences.
Shifting the focus to data
Traditional security tools—such as network monitoring, firewalls, and endpoint protection—are no longer sufficient. Hansen emphasises a critical gap: “We’re just not monitoring very closely enough or granularly enough access to data...with AI that just becomes a tenfold more important thing to do because AI is all about the data.”
He advocates for data-centric monitoring, ensuring visibility into every data access event within the AI pipeline. This aligns with IDC’s 2025 guidance, which highlights the need for “AI-specific security gateways” that inspect both prompts and responses, acting as dynamic guardrails.
Navigating a moving target
Regulatory frameworks are evolving rapidly, particularly in Asia. The ASEAN AI Governance and Ethics Guidelines set a baseline, but the pace of technological change often outstrips regulatory agility. Hansen cautions:
“It’s a difficult one because you don’t want to have too much regulation in place so that it stifles the usage and innovation around AI...the landscape is changing that quickly when it comes to the progress of AI, that what they’re suggesting may be irrelevant or just not effective anymore.”
Instead, he recommends a focus on education and ethical usage policies tailored to each organisation’s risk tolerance. This mirrors global best practices, such as Dubai’s AI adoption strategy, which prioritises responsible innovation over rigid controls.
Third-Party AI risks
Reliance on third-party AI services, especially hosted large language models, introduces supply chain risks. Hansen notes:
“If you’re relying on that from a business perspective, you can’t afford to be down for 10 hours without these systems being effective...a multi-vendor approach, like everything we do, when we want to be more resilient. We need to look at not relying on the one vendor, the one technology provider.” Reinhart Hansen
He urges organisations to sanitise data before sending it to cloud-based AI platforms and to deploy security gateways that enforce organisational guardrails, not just those offered by vendors. This approach is echoed in Accenture’s 2025 Cyber Resilience Report.
Balancing innovation and risk
The pressure to innovate is immense, but speed must not come at the expense of security. Hansen warns: “Some organisations sacrifice security for speed. That’s dangerous.”
A risk-based approach is essential, embedding security and privacy by design into every AI initiative. McKinsey’s 2025 CISO priorities reinforce this.
Building a security-first culture
Technology alone is insufficient. Hansen stresses: “Security needs to be part of every developer’s mindset. AI brings new threats, and we must adapt our third-party risk programmes accordingly.”
Continuous training, cross-functional collaboration, and clear policies are vital to ensure that innovation does not outpace risk management.
Investment priorities
For proactive AI security, Hansen recommends a shift in investment: “We spend too much on perimeter security and not enough on securing data itself.”
Organisations must map their data footprint, apply encryption and tokenisation, and implement robust monitoring throughout the data lifecycle. Deloitte’s 2025 Security Investment Outlook supports this.
Top 3 recommendations for CISOs in Asia
- Establish an AI centre of excellence: Create dedicated teams to drive responsible AI adoption and integrate security into development from the outset.
- Monitor data access granularly: Implement robust monitoring to detect anomalous or inappropriate data access in AI workflows.
- Sanitise data before cloud processing: Ensure sensitive data is protected before being sent to third-party AI platforms.
In the future
As Asia accelerates its AI journey, CISOs must lead with foresight and agility. The convergence of AI and cybersecurity presents unprecedented risks—but also immense opportunities for those who embrace a proactive, security-first strategy.
As Reinhart Hansen concludes: “Don’t fear AI. Use it wisely, securely, and ethically. It has the power to transform us—if we manage it well.”
Click on the PodChats player and hear Hansen’s recommendations on how to navigate the AI imperative from the security perspective.
- How can we identify and mitigate the unique vulnerabilities introduced by generative AI, such as prompt injection and data poisoning?
- What specialised AI security monitoring and detection tools should we deploy to reduce breach detection and containment times?
- How can we ensure transparency and compliance with evolving AI regulations, such as the ASEAN Guide on AI Governance and Ethics?
- How can we secure AI systems (LLM factories) provided by third-party vendors and manage supply chain risks related to AI data access and processing?
- How do we balance the business imperative to adopt AI-driven innovation with the need to defend against increasingly sophisticated AI-powered cyber threats?
- How can we foster a security-first culture that promotes responsible AI use and equips security teams with the necessary AI expertise?
- What investment priorities and cross-functional collaborations are essential to build a comprehensive and proactive AI security programme that aligns with business objectives?
- Name the top three recommendations that CISOs and their organisations should take as they lead their organisations into a more AI-immersed operating environment?