Delinea research highlights an intriguing paradox in Singapore's approach to AI security. While 52% of organisations report being fully equipped to secure AI technologies, surpassing the global average of 44%, a significant 62% grapple with Shadow AI at least once a month, the highest rate of concern reported by business leaders anywhere in the world.
The report, "AI in Identity Security Demands a New Playbook," reveals a striking disconnect between organisations’ confidence in their ability to secure AI tools and the actual capabilities they possess.
For example, although 97% of Singaporean organisations believe their machine identity security can keep pace with AI-driven threats, only 70% claim to have comprehensive visibility of machine identities, and just 56% have established governance policies specifically for AI identities.
The pervasive use of agentic and generative AI in Singaporean businesses—91% report daily use—adds to the urgency of these concerns. Limited visibility and inadequate governance leave these AI agents subject to unchecked autonomous decisions and potential external compromise.
“Agentic AI demands agentic security,” said Art Gilliland, CEO of Delinea. He urged organisations to rethink how they approach identity, building adaptive, risk-aware systems that verify and secure every action, whether human- or machine-driven.
“AI agents, in particular, require more granular and dynamic identity access controls than traditional role-based approaches. More broadly, every organisation must build out a comprehensive AI governance model to ensure that it’s being used securely and as intended,” Gilliland added.
Key findings from the report indicate that AI-generated phishing and deepfakes are the foremost concerns for 54% of respondents, followed closely by agentic AI systems with unchecked access (52%) and AI-driven credential theft (47%). Governance gaps persist as well: only 63% of organisations have an acceptable use policy for AI tools, and just 64% have access controls in place for AI agents.
The findings underscore a clear imperative for organisations to prioritise identity governance and adopt adaptive security controls to mitigate the risks associated with AI technologies. As AI continues to reshape IT and security operations, the need for vigilance and proactive governance has never been more critical.