A new global study from OpenText and the Ponemon Institute warns that enterprises are rapidly deploying generative AI and agentic systems without the governance, data controls and security posture required to scale safely — a finding that should resonate strongly with CISOs across Asia.
“AI maturity isn’t just about adopting AI tools — it’s about doing it responsibly,” said Muhi Majzoub, EVP, product & engineering at OpenText. “Security and governance are foundational to getting real value from AI. When they’re built into AI systems from the start, organisations can operate with greater transparency, monitor systems continuously, and trust the outcomes AI delivers.”
Key findings underline the gap: 52% of organisations have fully or partially deployed GenAI, yet only 20% report having reached AI maturity, defined as AI fully embedded in cybersecurity operations with related risks routinely assessed. Nearly eight in ten respondents (79%) say they have not achieved full AI maturity in security; only 41% have AI‑specific data privacy policies; and just 43% have adopted a risk‑based governance approach.
For CISOs in Asia — where enterprises juggle hybrid cloud estates, complex supply chains and diverse regulatory regimes — the research highlights immediate priorities.
The study flags persistent difficulties in minimising prompt‑ and model‑related risks: 58% say prompt/input risks are very or extremely difficult to control, and 62% report strong challenges in reducing model bias.
Those issues directly affect detection fidelity and decision reliability: only 51% of respondents rate AI as effective at reducing anomaly‑detection time, and fewer than half (48%) consider AI effective for deep threat hunting.
Operationally, the report finds errors in AI decision rules (45%) and faulty input data (40%) are top barriers to effectiveness. Consequently, fewer than half (47%) believe their models can learn robust norms and make safe autonomous decisions, and 51% say human oversight remains necessary because attackers adapt quickly.
The study’s practical implications for CISOs in Asia heading into 2026:
- Prioritise AI governance now: formalise risk‑based frameworks addressing bias, privacy, security and explainability before scaling models.
- Enforce AI‑specific data policies and data‑quality programmes to reduce “AI data debt” and input errors.
- Build continuous monitoring and human‑in‑the‑loop controls where autonomy cannot yet be trusted.
- Align procurement and vendor risk processes to include AI safety and compliance clauses, particularly for cross‑border data flows.
- Integrate explainability and incident playbooks into SOC workflows to accelerate triage of AI‑related incidents.
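To illustrate the human‑in‑the‑loop recommendation above, here is a minimal sketch of a confidence‑gated decision flow, in which autonomous AI actions below a set confidence threshold are escalated to an analyst instead of being executed. All names, thresholds and the escalation mechanism are hypothetical and are not drawn from the study:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AIDecision:
    action: str          # e.g. a proposed SOC response such as "quarantine_host"
    confidence: float    # model-reported confidence, 0.0-1.0

def gate_decision(decision: AIDecision,
                  human_review: Callable[[AIDecision], bool],
                  threshold: float = 0.9) -> bool:
    """Auto-approve only high-confidence actions; route the rest to a human."""
    if decision.confidence >= threshold:
        return True                      # autonomous execution permitted
    return human_review(decision)        # fall back to analyst approval

# Usage: a low-confidence action is escalated rather than executed.
escalated = []
def reviewer(d: AIDecision) -> bool:
    escalated.append(d.action)           # record the escalation for audit
    return False                         # analyst withholds approval here

allowed = gate_decision(AIDecision("quarantine_host", 0.62), reviewer)
```

The design choice here is that the threshold, not the model, decides where autonomy ends, which keeps the oversight boundary auditable and easy to tighten as attacker behaviour shifts.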
Majzoub emphasised that leaders will be those who “build transparency and control into AI from the start,” tying governance to information management, policy‑based controls and continuous monitoring.
For CISOs tasked with securing AI at scale, the message is clear: rapid adoption without foundational controls will increase operational risk, regulatory exposure and the burden on security teams across the region.
