As the adoption of generative AI (genAI) platforms surges, cybersecurity risks are escalating, particularly for Chief Information Security Officers (CISOs) in Asia. In 2026, the focus on cybersecurity must align with the increasing reliance on AI, ensuring that security measures evolve in tandem with technological advancements.
Recent research from Netskope suggests a 50% increase in genAI platform usage among enterprises in just three months, highlighting the urgent need for enhanced security measures. With over half of current app adoption classified as “shadow AI”—unsanctioned AI applications used by employees—CISOs face a complex landscape of vulnerabilities.
The rapid integration of genAI into business processes is creating new cybersecurity challenges. Shadow AI, while enabling innovation, exposes organisations to significant risks, particularly concerning data loss and breach potential. Network traffic linked to genAI platforms has surged by 73% over the same period, indicating that as usage rises, so does the potential for exploitation.
CISOs must prioritise a comprehensive assessment of the genAI landscape within their organisations. This involves identifying which tools are in use, understanding who is leveraging them, and determining how they are being applied.
With 41% of organisations already utilising at least one genAI platform, including Azure OpenAI and Amazon Bedrock, the pressure is on security leaders to implement robust controls.
To effectively manage the risks associated with genAI, organisations must bolster their app governance policies. Establishing a framework that only permits the use of company-approved genAI applications is crucial. Implementing real-time user coaching and robust blocking mechanisms can significantly mitigate the risks associated with shadow AI.
Furthermore, as many organisations are experimenting with on-premises AI solutions, the need for local security frameworks becomes paramount. Adopting guidelines such as the OWASP Top 10 for Large Language Model Applications can provide a structured approach to securing these technologies.
Continuous monitoring is another critical component of an effective strategy. By keeping a vigilant eye on genAI usage, organisations can detect emerging shadow AI instances and adapt to new threats swiftly. This includes staying informed about regulatory changes and ethical considerations surrounding AI deployment.
AI adoption is not inevitable - it is an ongoing journey for many. To minimise the cybersecurity risks associated with adoption of emerging technologies including AI will require collaboration with employees. The goal is to foster an environment of responsible innovation.
Developing actionable policies that address the challenges posed by shadow AI will be essential in safeguarding sensitive data and maintaining trust within organisations.