The rapid proliferation of generative artificial intelligence (AI) tools across Asia-Pacific has transformed organisational workflows, but it has also unleashed a new class of security and compliance risks.
As enterprises in the region race to harness AI's productivity gains, the phenomenon of "shadow AI"—the unsanctioned use of AI tools by employees—has become a top concern for CISOs and CIOs.
In 2025 and beyond, securing shadow AI will require a strategic, unified approach, with Secure Access Service Edge (SASE) platforms emerging as a linchpin for risk mitigation, visibility, and regulatory compliance.
The rise of shadow AI in Asia
The Asia-Pacific region is experiencing an unprecedented surge in AI adoption, with IDC forecasting that AI spending in Asia-Pacific will reach US$78.4 billion by 2027, growing at a CAGR of 25.5%. This growth is not limited to sanctioned, enterprise-approved solutions.
Employees, driven by the promise of efficiency, are increasingly turning to public generative AI platforms—such as ChatGPT, DeepSeek, and Gemini—for tasks ranging from code generation to document summarisation.
Aditya K Sood, VP of security engineering and AI strategy at Aryaka, observes:
"The explosive growth of Shadow AI in Asia-Pacific over the past year has significantly elevated organisational risk profiles, primarily by data leakage, compliance exposure, and an expanded attack surface."
Shadow AI introduces risks that are both technical and regulatory in nature. Employees may inadvertently expose proprietary data, intellectual property, or sensitive personal information to external AI platforms, creating uncontrollable data exfiltration channels.
As more countries in Asia enforce data localisation and cross-border transfer restrictions, the use of AI tools hosted outside national jurisdictions is becoming a compliance minefield.
Analysts predict that by 2026, 70% of organisations will be subject to multiple, often conflicting, data sovereignty requirements, up from 30% in 2023.
This trend is driving demand for AI governance frameworks and solutions that can enforce data residency and provide auditability across borders.
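The sketch below, a minimal Python illustration, shows one way such residency enforcement could be encoded: an AI tool's hosting region is checked against per-jurisdiction rules before traffic is allowed. The tool registry, regions, and rules are invented placeholders, not any regulator's actual requirements.

```python
# Hypothetical illustration: check an AI tool's hosting region against
# per-jurisdiction data-residency rules before allowing traffic to it.
# Tool names, regions, and rules are placeholders, not real policy.

# Where each AI service processes data (assumed values for illustration).
TOOL_HOSTING_REGION = {
    "chatgpt.com": "US",
    "gemini.google.com": "US",
    "internal-llm.corp.example": "SG",
}

# Regions each jurisdiction permits data to be processed in
# (purely illustrative; real rules require legal review per country).
ALLOWED_REGIONS = {
    "SG": {"SG", "US"},  # e.g., transfer permitted with safeguards
    "ID": {"ID"},        # e.g., strict localisation
    "AU": {"AU", "US"},
}

def residency_compliant(tool_domain: str, user_jurisdiction: str) -> bool:
    """Return True if the tool's hosting region is permitted for this user."""
    region = TOOL_HOSTING_REGION.get(tool_domain)
    if region is None:
        return False  # unknown tools are blocked by default (deny-by-default)
    return region in ALLOWED_REGIONS.get(user_jurisdiction, set())

print(residency_compliant("chatgpt.com", "ID"))                 # False: requires local processing
print(residency_compliant("internal-llm.corp.example", "SG"))   # True
```

The deny-by-default branch matters most here: a tool whose processing location cannot be established is treated as non-compliant rather than waved through.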
"This unauthorised data transfer frequently breaches stringent and evolving data residency and privacy laws… leading to significant fines and legal liabilities as organisations lose visibility over where their data is processed and stored," reminds Sood.
Evolving threats: Deepfakes and AI-driven attacks
The attack surface is expanding as employees introduce unapproved AI tools, including open-source models and browser extensions, into the enterprise environment. These tools often operate below the radar of traditional security controls.
In 2025, deepfake-driven social engineering and targeted attacks are expected to surge, with AI-generated phishing campaigns becoming more sophisticated and harder to detect.
Sood warns that the adoption of unapproved AI tools introduces new, unmonitored API endpoints and data flows, thereby expanding the attack surface and increasing the risk of deepfake-driven social engineering and targeted attacks.
"The need for visibility and control strategies to mitigate these risks cannot be overstated, as malicious actors can exploit the very AI tools meant for productivity," he stresses.
The regulatory landscape compounds the problem: organisations must assess each AI deployment against local rules before it goes live.

"National variations mean a patchwork of requirements, forcing organisations to adopt flexible, 'governance-by-design' approaches and often conduct jurisdiction-specific risk assessments for every AI deployment to avoid severe penalties and reputational damage," says Sood.
Endpoint security and shadow AI detection
While endpoint detection and response (EDR/XDR) tools are improving, they often cannot inspect the actual content being fed into AI models.
According to Sood, current endpoint security tools (EDR/XDR) are becoming increasingly effective at detecting unauthorised AI activity through behavioural analysis, flagging suspicious data flows or process anomalies associated with AI tool usage.
"However, they often lack a deep understanding of the actual content or prompts being fed into AI models, which limits their ability to assess data leakage risks from sensitive inputs fully," he warns.
He adds: "These solutions might detect a user accessing ChatGPT but not what sensitive data the user is inputting, nor can they inherently assess the risk profile of the AI model itself… The rapid emergence of new AI tools and local open-source models also creates evolving blind spots."
The push for AI literacy and responsible use
To avoid stifling innovation, leading organisations are investing in AI literacy and fostering a culture of responsible AI use. This includes establishing cross-functional AI committees, promoting transparent and explainable AI, and providing secure internal sandboxes for experimentation and testing.
As Sood highlights, "Organisations must establish a clear, adaptive AI governance framework that emphasises transparency and enablement over blanket bans.
"This involves creating a centralised 'AI App Store' or an approved list of vetted tools, providing secure internal sandboxes for experimentation with non-sensitive data, and implementing data loss prevention (DLP) and network monitoring solutions to detect unauthorised AI usage and prevent the leakage of sensitive information," he asserts.
He continues: "Crucially, fostering a culture of AI literacy through training, encouraging open communication, and involving employees in policy development helps drive responsible adoption of AI."
Addressing shadow AI with a unified SASE approach
Secure Access Service Edge (SASE) platforms consolidate network and security functions into a unified, cloud-delivered service.
For shadow AI, SASE offers centralised visibility and granular control over all network and cloud activity, eliminating blind spots and enabling real-time enforcement of security policies.
Sood highlights the advantages: "Unified SASE tackles Shadow AI's unique risks by offering centralised visibility and control over all network and cloud activity.
"Unlike scattered traditional security tools, SASE eliminates blind spots, instantly detecting unauthorised AI tool usage. Its integrated Zero Trust Network Access (ZTNA) and Cloud Access Security Broker (CASB) capabilities enforce granular policies, preventing sensitive data from reaching unapproved AI, thus stopping data leakage and ensuring compliance.
"By consolidating security functions, SASE simplifies management, enabling quicker responses to emerging AI threats," he pontificates.
Key SASE features for shadow AI defence
When evaluating SASE platforms, Sood recommends that CISOs and CIOs prioritise three critical capabilities.
First, he suggests integrating CASB and DLP with real-time content inspection. This, he explains, provides the visibility needed to discover and classify unauthorised AI tools in use across the network and cloud, coupled with real-time content inspection to prevent sensitive data (e.g., proprietary code or customer PII) from being input into, processed by, or exfiltrated via unauthorised generative AI services.
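Pairing discovery with inline enforcement might look like the sketch below, where a flagged prompt is blocked or redacted before it leaves the network rather than merely logged afterwards. The email matcher is a toy stand-in for a real DLP engine.

```python
# Toy stand-in for inline CASB/DLP enforcement: redact or block a prompt
# before it is forwarded to an unapproved generative AI service.
import re

PII_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")  # naive email matcher

def enforce(prompt: str, tool_approved: bool) -> tuple[str, str | None]:
    """Return (action, payload). Blocked payloads are never forwarded."""
    if PII_PATTERN.search(prompt):
        if not tool_approved:
            return ("block", None)  # sensitive data plus unapproved tool
        return ("redact", PII_PATTERN.sub("[REDACTED]", prompt))
    return ("allow", prompt)

print(enforce("Email jane.doe@example.com the Q3 forecast", tool_approved=True))
# ('redact', 'Email [REDACTED] the Q3 forecast')
```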
Second, he stresses the importance of AI-powered behavioural analytics, observability, and anomaly detection. Beyond simple signature matching, he notes, the platform must leverage advanced AI/ML to identify anomalous user or application activity patterns specific to shadow AI.
"This includes flagging unusual data uploads to previously unknown or unapproved AI applications, unexpected access to large datasets followed by external connections, or deviations from typical employee usage of AI tools, enabling proactive threat identification," he explains.
Third, Sood advocates for context-aware ZTNA and micro-segmentation. ZTNA, he explains, ensures that all access requests, even from seemingly legitimate users or devices, are continuously verified against comprehensive context (user identity, device posture, location, and risk score) before access to internal resources is granted.
"This, combined with micro-segmentation, drastically limits the blast radius if any system running critical AI services is compromised and used for lateral movement or if deepfake-driven impersonation attempts aim to gain unauthorised access to critical systems," he asserts.
Implementing best practices
To effectively govern shadow AI while supporting innovation, organisations should adopt a balanced strategy. Establishing a centralised "AI App Store" with pre-vetted tools ensures employees have safe, compliant alternatives.
Secure sandboxes enable experimentation with non-sensitive data, thereby reducing risk while fostering innovation. Continuous monitoring, supported by DLP and network inspection, allows early detection of unauthorised AI use.
As Sood concludes: "CISOs and CIOs [can] foster responsible AI by unifying governance and creating cross-functional AI committees with clear policies on ethics, security, and compliance from the outset.

"They [can] push for transparent, explainable AI (XAI), robust audit trails, and continuous monitoring to build trust and accountability. Investing in AI literacy and specialised security training across all teams is crucial for understanding risks associated with 'shadow AI.'"
The future of shadow AI security
By 2026, the convergence of AI adoption, regulatory pressure, and evolving threats will make unified SASE platforms indispensable for enterprises in Asia. The ability to balance innovation with security, compliance, and employee empowerment will define the region's digital leaders.
As Sood affirms, the path forward lies not in restriction but in intelligent, integrated, and inclusive governance, where security and innovation advance in tandem.