As AI adoption accelerates across Singapore, organisations are struggling to manage and govern the associated risks, according to a recent survey conducted by Okta.
An Okta poll in November 2025 highlights a disconcerting gap between the deployment of AI technologies and the maturity of the governance and identity controls necessary for effective risk management.
The poll revealed that while 53% of respondents believe the responsibility for AI security risk lies with the Chief Information Security Officer (CISO) or security function, a worrying 25% reported that no single individual or department currently owns AI risk within their organisation. This lack of accountability can lead to heightened vulnerabilities as AI becomes increasingly integrated into operations.

“The speed at which Singapore organisations are adopting AI reflects a growing maturity in how the technology is leveraged,” stated Stephanie Barnett, vice president for Asia Pacific & Japan at Okta. “However, the next step is to evolve governance and security measures to keep pace with this transformative technology.”
The survey also revealed limited visibility into AI behaviour. Only 31% of respondents expressed confidence in their ability to detect whether an AI agent is operating outside its intended scope, while 33% do not monitor AI agent activity at all. These figures highlight the urgent need for improved monitoring and accountability within organisations.
Data security gaps are pronounced: data leakage through integrations was cited as the top risk (36%), followed by the challenge of Shadow AI (unapproved and unmonitored tools), identified by 33% of respondents.
Alarmingly, only 8% reported that their identity systems are fully equipped to secure non-human identities, such as AI agents and bots, with 58% describing their capabilities as only partially adequate.
Moreover, while 50% of boards acknowledge the existence of AI-related risks, only 31% reported full board engagement in oversight, underscoring a critical disconnect in governance.
“As AI becomes more embedded across workflows, organisations must approach AI security with the same rigour applied to securing human identities,” Barnett added. “When identity systems are robust, trust is established, enabling innovation to scale safely and sustainably.”
These findings emphasise the urgent need for better governance frameworks, enhanced accountability, and modern identity systems capable of managing both human and non-human identities in an increasingly digital landscape.
