A Kiteworks report warns of alarming gaps in data security governance as organisations in the Asia-Pacific (APAC) region rush to adopt artificial intelligence (AI) technologies. The findings, based on responses from 461 cybersecurity and compliance professionals, indicate that 27% of data ingested by AI tools is private, yet most firms lack the necessary visibility and enforceable safeguards.
* editor's note: As of this posting, Kiteworks has yet to provide a link to the report.
The survey highlights that only 17% of organisations have implemented technical controls that block access to public AI tools and include data loss prevention (DLP) scanning. Alarmingly, 26% of respondents report that over 30% of the data employees feed into public AI tools is private.
This lack of control leaves many organisations vulnerable to data breaches, as evidenced by Stanford's 2025 AI Index Report, which recorded a 56.4% increase in AI privacy incidents year-over-year.
Kiteworks chief marketing officer Tim Freestone pointed to the disconnect between AI adoption and security implementation: “When only 17% have technical blocking controls with DLP scanning, we’re witnessing systemic governance failure.”
Google’s research reinforces this concern, showing that 44% of zero-day attacks target data exchange systems, which are crucial for protecting sensitive information.
The survey also uncovers a dangerous overconfidence in AI governance readiness. While 40% of respondents claim to have a fully implemented AI governance framework, Gartner’s findings suggest that only 12% of organisations possess dedicated AI governance structures. This creates a significant risk exposure for companies that believe they are adequately prepared.
Deloitte’s research adds further context, revealing that only 9% of organisations achieve a “Ready” level of AI governance maturity despite 23% reporting they are “highly prepared.” This discrepancy indicates a severe misalignment between perceived capabilities and actual maturity, leaving organisations open to emerging threats.
In the legal sector, where data leakage concerns are highest at 31%, implementation remains weak. The survey shows that 15% of legal firms have no specific policies on public AI tool usage, while 19% rely on unmonitored warnings. With 95% of law firms expecting AI to be central to their operations within five years, this gap poses significant risks.
These findings underscore the urgent need for organisations to reassess their AI governance frameworks. Kiteworks recommends that businesses acknowledge the reality of their security posture, deploy verifiable controls, and prepare for regulatory scrutiny in an increasingly complex landscape. “The data reveals organisations significantly overestimate their AI governance maturity,” concluded Freestone.
The report serves as a wake-up call for CISOs in APAC, highlighting that without stronger governance controls, the rapid adoption of AI technologies could lead to severe data security risks, jeopardising both compliance and trust in the digital economy.