Rohit Dhawan, group executive director of Artificial Intelligence at Lloyds Banking Group in the UK, wrote: "Agentic AI goes beyond GenAI, enabling autonomous action, workflow orchestration, and real-time decision-making at scale."
He goes on to predict that 2026 marks a turning point as agentic AI moves from experimentation to enterprise-wide deployment across financial services.
In this context, CISOs and CIOs in Asia should consider prioritising AI-driven identity governance for autonomous environments, treating AI agents as first-class identities that require least-privilege enforcement, continuous behavioural monitoring, lifecycle visibility, and human-in-the-loop controls.
A maturing understanding of regulations will drive compliance efforts to mitigate shadow agents, rogue actions, excessive privileges, and accountability gaps across enterprise IT infrastructure.
Evaluating AI readiness
Across Asia-Pacific and Japan (APJ), AI adoption is surging, yet identity and access management (IAM) systems lag dangerously behind. Okta's October-November 2025 poll of 435 senior IT and cybersecurity professionals in Australia, Singapore, and Japan revealed that fewer than 10% of organisations consider their IAM fully equipped to secure AI agents, bots, and service accounts. Shadow AI emerged as the top security concern in Australia (35%) and Singapore (33%).
In an exclusive with FutureCISO, Matthew Graham, chief security officer for Asia Pacific at Okta, highlights the core vulnerability: "The test for a CISO is whether their system distinguishes between a user or an agent acting on behalf of that user? And if the logs show only that user, then there's an identity spoofing vulnerability, for example. Or can you revoke an agent's access without revoking the human's access that made that?"
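What that distinction can look like in practice: under the OAuth 2.0 token exchange standard (RFC 8693), a token issued to an agent acting on behalf of a user carries an "act" (actor) claim alongside the user's "sub" claim. The sketch below is illustrative rather than any vendor's implementation, and assumes token signatures have already been verified:

```python
# Sketch: distinguish "user" from "agent acting on behalf of a user" in logs.
# Assumes OAuth 2.0 token exchange (RFC 8693), where a delegated token carries
# an "act" (actor) claim. Signature verification is assumed to have happened.

def audit_entry(claims: dict) -> dict:
    """Build a log record that never collapses agent and human into one actor."""
    entry = {"subject": claims["sub"]}       # the human the action is for
    actor = claims.get("act")                # present only on delegated tokens
    if actor:
        entry["actor"] = actor["sub"]        # the agent that actually acted
        entry["delegated"] = True
    else:
        entry["actor"] = claims["sub"]       # a human acting directly
        entry["delegated"] = False
    return entry

# A delegated call logs both identities, so the agent's client can be revoked
# without touching the human's account.
print(audit_entry({"sub": "alice@example.com", "act": {"sub": "agent:expense-bot"}}))
# {'subject': 'alice@example.com', 'actor': 'agent:expense-bot', 'delegated': True}
```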
Traditional IAM, designed for nine-to-five human logins, fails in autonomous environments where agents spin up, act in milliseconds, and tear down.
The World Economic Forum notes that by the end of 2025, more than 45 billion non-human and agentic identities—over 12 times the global human workforce—will operate in organisational workflows, with only 10% of organisations possessing a well-developed management strategy.
In APJ, where IDC forecasts that 70% of organisations expect agentic AI to disrupt business models within 18 months, this gap risks catastrophic exposure.
Singapore: Adopting accountability and transparency first
Singapore's Model AI Governance Framework for Agentic AI (MGF), launched by the Infocomm Media Development Authority (IMDA) on 22 January 2026, offers the region's most pragmatic blueprint. It organises governance around four dimensions: assessing and bounding risks upfront, ensuring meaningful human accountability, implementing technical controls, and enabling end-user responsibility.
Commenting on the city-state's effort, Graham praises its practicality:

"It's not a theoretical type of framework. And there are two big pillars that I see: One, you've got your accountability, and two, you've got your transparency." Matthew Graham
From an identity perspective, the framework demands unique, traceable agent identities linked to a human owner or system manager; fine-grained, dynamic permissions that adhere to the principle of least privilege; and continuous logging for auditability.
"You can't really afford to have any sort of anonymous intelligence or an agent running in your environment in the event of a hallucination," Graham warns, noting agents' access to sensitive data in general ledgers or HR systems.
Enterprises in Singapore and early adopters across APJ are prioritising these principles to align with data-residency rules in India, Indonesia, China, and Australia, where sovereign AI infrastructure demands localised processing.
Zero trust at machine speed
Shadow AI mirrors the shadow IT of decades past, but at unprecedented speed. Developers, pressured by board expectations for rapid AI-driven efficiencies, often hard-code credentials, bypassing reviews.
Graham observes: "Shadow AI is a new, very fast form of shadow IT." He notes that under pressure from business, "developers are going to be hard-coding in credentials into an agent to get it working quickly."
The solution lies in zero-trust principles applied at machine speed. Agents receive zero access by default and gain just-in-time, short-lived tokens via centralised identity providers.
"An agent should have zero access by default. It effectively doesn't exist in your environment until it actually has that job to do," Graham explains, drawing parallels to just-in-time access models.
Okta's poll underscores the urgency: in Japan, data leakage ranks highest (36%), while detection confidence for out-of-scope agent actions remains low (only 8% in Japan feel confident).
Treat agents like employees
Basic steps begin with a mindset shift. Graham urges, "We need to be treating these agents just like they're employees. By policy and technical controls, we don't allow five employees to share a single ID badge. We shouldn't be letting agents share things like API keys."
Four foundational steps: unique registration via standardised protocols; explicit ownership linked to a human manager, with agents paused if that owner departs; ruthless application of OAuth scopes; and full audit trails explaining deployment, boundaries, and actions in plain English.
This aligns directly with Singapore's MGF requirement for traceability and the IMDA's call for agents to hold identities "tied to a human/supervising agent/organisation for traceability."
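A hypothetical registry sketch tying the four steps together: unique registration, an explicit human owner, scoped permissions, and an automatic pause when that owner departs. All names here are illustrative:

```python
import uuid

# Hypothetical agent registry implementing the four steps: unique
# registration, an explicit human owner, scoped permissions, and an
# automatic pause when that owner departs.
class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, dict] = {}

    def register(self, name: str, owner: str, scopes: list[str]) -> str:
        agent_id = f"agent:{name}:{uuid.uuid4()}"  # no shared "ID badges"
        self._agents[agent_id] = {"owner": owner, "scopes": scopes,
                                  "status": "active"}
        return agent_id

    def on_owner_departure(self, owner: str) -> list[str]:
        """Pause every agent whose accountable human has left."""
        paused = []
        for agent_id, record in self._agents.items():
            if record["owner"] == owner and record["status"] == "active":
                record["status"] = "paused"    # no ownerless agents
                paused.append(agent_id)
        return paused

registry = AgentRegistry()
registry.register("hr-summariser", "jsmith@example.com", ["hr:read"])
print(registry.on_owner_departure("jsmith@example.com"))  # paused agent IDs
```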
Short-lived identities with automated expiry
Many organisations practise poor hygiene in information technology and cybersecurity. In data management, as much as 33% of the data an organisation stores is ROT (redundant, obsolete, and trivial): data that serves no purpose and should have been deleted. That includes records of access rights for both humans and machines.
In the context of agentic AI, Graham advocates: "Killing the idea of having permanent access for an agent." He suggests, for example, applying an automated lifecycle management system within DevOps' build cycle, "where an agent is built, it's deployed, it's doing the things that it needs to be able to do. It only has the access that it needs to have, but then it has that hard-coded expiration date."
He further advocates governance sitting above DevOps automation. This ensures identities exist only for the required duration. When features are deprecated or code is updated, revocation follows automatically. Centralised policy-as-code enforces this across on-premises, AWS, Azure, and hybrid environments.
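A simplified sketch of that lifecycle, assuming credentials are minted at deploy time with a hard expiry and a governance-layer job sweeps and revokes anything past it; the revocation hook is left abstract:

```python
from datetime import datetime, timedelta, timezone

# "No permanent access": every credential minted at deploy time carries a
# hard expiry, and a scheduled governance job revokes anything past it.
credentials: list[dict] = []

def mint_at_deploy(agent_id: str, ttl_hours: int = 24) -> dict:
    cred = {"agent_id": agent_id,
            "expires_at": datetime.now(timezone.utc) + timedelta(hours=ttl_hours)}
    credentials.append(cred)
    return cred

def revoke_expired() -> list[str]:
    """Run from the governance layer above DevOps, e.g. a scheduled job."""
    now = datetime.now(timezone.utc)
    expired = [c["agent_id"] for c in credentials if c["expires_at"] <= now]
    credentials[:] = [c for c in credentials if c["expires_at"] > now]
    return expired  # feed these IDs into the identity provider's revocation API
```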
Centralised control planes versus headcount
Many organisations lack the maturity for these controls. Graham cautions against simply adding security engineers: "You can't scale governance by adding more and more people. You've got to be scaling more with smart architectural decisions."
A centralised automated control plane—defining policy once as code and enforcing globally—allows small teams to manage vast ecosystems. "That's the true way to scale… they're managing the logic… they write the code that approves that access."
This approach keeps costs predictable and governance manageable, as highlighted in Okta's identity maturity guidance.
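To illustrate the "define once, enforce everywhere" idea, here is a deliberately minimal policy-as-code sketch; a real deployment would use a policy engine rather than a Python dictionary, and the agent names and scopes are invented:

```python
# One declarative policy, evaluated by every enforcement point (on-premises,
# AWS, Azure) instead of per-environment rules maintained by hand.
POLICY = {
    "agent:support-bot": {"allowed_scopes": {"tickets:read", "tickets:comment"},
                          "max_ttl_minutes": 30},
}

def approve(agent_id: str, requested_scope: str, ttl_minutes: int) -> bool:
    """The small team 'writes the code that approves that access'."""
    rule = POLICY.get(agent_id)
    if rule is None:
        return False                       # zero access by default
    return (requested_scope in rule["allowed_scopes"]
            and ttl_minutes <= rule["max_ttl_minutes"])

assert approve("agent:support-bot", "tickets:read", 15)
assert not approve("agent:support-bot", "tickets:delete", 15)  # out of scope
assert not approve("agent:unknown", "tickets:read", 5)         # unregistered
```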
Circuit breakers without operational drag
Predictability is the lifeblood of many business processes. It ensures consistency, builds trust and enables effective planning for long-term success. This is also true in engineering and information technology.
Agentic AI's predictability enables effective monitoring. "You know what it's going to be doing, and you know what it should be doing," Graham states.
He explains that baselines trigger circuit breakers: an agent querying 50 records per minute in customer support should not suddenly hit 5,000 records per minute. Policy-driven emergency stops contain blast radius while empowering developers with a safety net.
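A toy version of such a circuit breaker, using a one-minute sliding window against a declared baseline; the threshold and trip factor are illustrative:

```python
import time
from collections import deque

# Rate-baseline circuit breaker: trip when an agent's query rate blows past a
# multiple of its declared baseline, e.g. a 50-per-minute support agent
# suddenly running at thousands per minute.
class CircuitBreaker:
    def __init__(self, baseline_per_min: float, trip_factor: float = 10.0):
        self.limit = baseline_per_min * trip_factor
        self.events: deque[float] = deque()
        self.tripped = False

    def record_query(self) -> bool:
        """Return False once the breaker trips; callers must stop the agent."""
        now = time.monotonic()
        self.events.append(now)
        while self.events and self.events[0] < now - 60:
            self.events.popleft()          # keep a one-minute sliding window
        if len(self.events) > self.limit:
            self.tripped = True            # emergency stop: contain blast radius
        return not self.tripped

breaker = CircuitBreaker(baseline_per_min=50)  # the support agent's normal pace
```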
Singapore's MGF reinforces continuous monitoring, anomaly detection, and real-time escalation. Hardcoded into the framework is the importance of human accountability.
Identity's regulatory role in sovereign AI and standards
An article in The Straits Times poses the question: "Is Asia's sovereign AI push an exercise in futility?"
Graham suggests that Asia's sovereign AI push, with national infrastructure in India, Indonesia, China, and Australia, introduces data-residency complexities. He foresees: "Identity is going to have a strong piece to play there." He posits that if a user is identified as being in Singapore, their request will be routed to a Singapore-hosted model. "This keeps the data all within Singapore."
New standards will demand adaptive routing and culturally attuned behaviours, with identity as the orchestrator.
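A toy sketch of that identity-driven routing, in which a region claim on the caller's identity selects a locally hosted model endpoint; the claims and URLs are placeholders, not any vendor's API:

```python
# Identity-driven data residency: a region claim on the caller's identity
# selects a model endpoint hosted in the same jurisdiction.
REGIONAL_MODELS = {
    "SG": "https://models.sg.example.com/v1",  # Singapore-hosted model
    "IN": "https://models.in.example.com/v1",
    "AU": "https://models.au.example.com/v1",
}

def route_request(identity_claims: dict) -> str:
    region = identity_claims.get("region")
    endpoint = REGIONAL_MODELS.get(region)
    if endpoint is None:
        raise ValueError(f"no sovereign model endpoint for region {region!r}")
    return endpoint  # the request, and its data, stays in-jurisdiction

print(route_request({"sub": "alice@example.com", "region": "SG"}))
```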
Resilience recommendations for 2026's IT project of the year
Gartner predicts up to 40% of enterprise applications will include integrated task-specific agents by 2026, up from less than 5% in 2025. This suggests that agentic AI is poised to be 2026's defining IT project.
In this regard, Graham identifies three organisational types: rapid adopters, cautious observers, and unaware users. Normalisation will occur as governance matures, mimicking cloud security's evolution.
For deployers: audit environments immediately, treat agents as first-class identities, enforce least privilege and short lifecycles, centralise policy, and implement behavioural monitoring. Commenting on the urgency of acting now, he warns:
"If they don't do it now, like they're going to be answering questions to their board essentially." Matthew Graham
CISOs must act in months, not years, without burnout—leveraging automation to stay ahead.
Click on the PodChats player to listen in detail to Graham's reflection on AI-driven identity governance in autonomous environments.
