A recent survey by Kiteworks reveals a troubling disconnect between the rapid adoption of artificial intelligence (AI) in the Asia-Pacific (APAC) region and the governance controls needed to protect sensitive data. Conducted among 461 cybersecurity and IT professionals, the survey finds that 27% of respondents report over 30% of their AI-ingested data is private, yet many organisations lack sufficient visibility and enforceable safeguards.
Despite being at the forefront of AI innovation, the technology sector exhibits a paradoxical approach to AI security. While these companies build AI platforms and sell security solutions, they demonstrate practices that are often no better—and sometimes worse—than those in less advanced industries.
Key findings from the survey include:
- Security control implementation: A staggering 83% of tech companies lack automated controls, mirroring trends in other sectors. Reliance on employee training (40%) and warnings (20%) underscores a "cobbler’s children have no shoes" scenario.
- Data exposure patterns: 27% of tech firms report that over 30% of AI-ingested data is private, with 17% unsure of their exposure levels. This is especially concerning given their responsibility for critical assets like source code and customer data.
- Governance frameworks: Only 40% of respondents claim full implementation of AI governance, reflecting a disconnect between their advisory roles and internal practices.
- Risk prioritisation: Concerns about data leakage (28%), system vulnerabilities (23%), and compliance (12%) mirror industry norms, indicating a lack of sector-specific threat modelling.
- Privacy vs. innovation: Alarmingly, 23% of tech firms operate without formal privacy controls, eroding customer trust and credibility.
The credibility gap
This disconnect raises serious questions about the credibility of technology companies. AI security vendors lack visibility and automated controls, while privacy tech firms neglect essential privacy measures. Governance advisors exhibit no better implementation than their clients, threatening to erode customer trust and reputational capital.
Innovation paradox
The sector's failure to exceed average security benchmarks indicates systemic cultural and organisational issues rather than mere technical incapacity. With 27% reporting extreme data exposure, tech firms face significant risks, including intellectual property theft and customer data loss.
Key takeaway
The technology sector’s AI security posture reflects a troubling internal security culture. Despite possessing unparalleled expertise and resources, tech companies often mirror the baseline practices of less sophisticated sectors.
"The data reveals organizations significantly overestimate their AI governance maturity," said Tim Freestone, chief strategy officer at Kiteworks.
"With incidents surging, zero-day attacks targeting the security infrastructure itself, and the vast majority lacking real visibility or control, the window for implementing meaningful protections is rapidly closing."
To regain industry credibility and ensure robust AI protection, technology firms must lead by example, adopting the very governance frameworks they advocate for others. As AI adoption surges, the imperative for effective security measures has never been clearer.