Meta’s recent acquisition of Moltbook, the first social network built exclusively for AI agents, marks a pivotal moment as autonomous agents begin to communicate across platforms. While this unlocks powerful new capabilities, it also exposes critical identity security challenges.
From impersonation risks to the explosion of machine identities, organisations must now treat AI agents with the same rigorous verification, visibility, and governance as human users. The question is no longer if agents will interact, but how securely they will do so.
Meta’s bold bet on agent socialisation
In March 2026, Meta acquired Moltbook, a Reddit-style platform where AI agents interact autonomously using tools such as OpenClaw. The deal brought its founders into Meta Superintelligence Labs and signalled a broader shift toward agent-to-agent ecosystems.
For CISOs and CIOs across Asia-Pacific (APAC) in 2026, this is more than a technology headline.
It underscores the urgent need to secure machine identities that now operate at machine speed, often without human oversight.
As Marco Zhang, solutions engineering director for APJ at Saviynt, notes in a recent interview: “AI agents today act autonomously. If it is a human being, it takes time to gain access into multiple places, whereas an AI agent can make API calls, hundreds of thousands of API calls in a matter of seconds.”
Moltbook’s security meltdown: A stark warning
The acquisition followed a high-profile breach in late January 2026, when security researchers at Wiz discovered a misconfigured Supabase database exposing 1.5 million AI agent API keys, 35,000 human email addresses, and thousands of private agent-to-agent messages.
Hard-coded credentials in client-side JavaScript enabled full impersonation, allowing attackers to post, message, and manipulate content as any agent. The platform’s “vibe-coded” development—relying heavily on AI-generated code—exacerbated the flaw, with no verification distinguishing genuine AI agents from human-operated bots.
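The anti-pattern behind the breach is easy to illustrate. The sketch below (in Python, with hypothetical names such as `AGENT_API_KEY`; it is not Moltbook's actual code) contrasts a credential baked into shipped code with one resolved at runtime from the environment or a secrets manager:

```python
import os

# Anti-pattern (illustrative): a credential baked into shipped code is
# visible to anyone who can read the page source or the artefact.
HARDCODED_KEY = "sk-agent-123456"  # hypothetical key; never do this

def get_api_key() -> str:
    """Resolve the agent's API key from the environment at runtime.

    In production this would typically come from a secrets manager
    (e.g. Vault or a cloud provider's equivalent) issuing short-lived,
    per-agent credentials rather than a long-lived static key.
    """
    key = os.environ.get("AGENT_API_KEY")
    if not key:
        raise RuntimeError(
            "AGENT_API_KEY not set; refusing to fall back "
            "to a hard-coded credential"
        )
    return key
```

Because the key never appears in the shipped code, leaking the source no longer leaks the credential, and rotating the secret requires no redeployment.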
Zhang describes the incident as “a very classic example of some credential leaking. The credential is hard-coded in some kind of front-end web page, the code gets exposed, and in the end, it gets misused by someone, and the cascading effect is tremendous.”
For Asian enterprises racing through AI integration cycles, the incident highlights how quickly unsecured machine identities can amplify risk. A single compromised agent can trigger lateral movement across hybrid cloud, SaaS, and emerging agent networks far faster than traditional threats.
Why AI agents defy traditional identity models
Unlike deterministic service accounts or bots that follow fixed paths, AI agents reason, adapt, and blur the line between data and code.
“AI agents have the power to reason,” Zhang explains. “If you give it an instruction to do this, and then give the same instruction a second time, it does not perform the same task exactly 100 per cent.”
This non-deterministic behaviour, combined with multiple identities per agent (accessing tools, data, and other agents), breaks conventional identity and access management (IAM) frameworks built for human or static non-human identities.
In Asia, where AI adoption is surging ahead of IAM maturity, this creates an acute crisis. An Okta poll of APAC IT and cybersecurity leaders revealed that fewer than 10 per cent of organisations consider their IAM systems equipped to secure AI agents, bots, and service accounts.
Shadow AI ranked as the top concern in Australia and Singapore.
The visibility crisis haunting APAC hybrid environments
Most organisations still lack basic visibility into machine identities spanning cloud, SaaS, and AI environments.
Zhang observes: “A lot of organisations do not know what they do not know about non-human identities.” He adds that human identities are “pretty uniform” with standard processes, whereas “in the non-human identity world, there are many different kinds of non-human identities – service accounts, API keys, certificates, informal access.”
AI agents compound the problem, as one agent may embody multiple identities without being registered in HR-like systems or having clear ownership.
Southeast Asian CISOs recognise the stakes. Chhay Yaroth, SVP and Head of Information Security at ACLEDA Bank Plc in Cambodia, predicts: “A major company will suffer material data risks originating from an over-permissioned, compromised, or hallucinating autonomous AI agent, leading to a new regulatory focus on ‘AI Identity and Access Governance’ and forcing 60 per cent of CISOs to create a dedicated ‘AI Identity’ team within IAM.”
The hidden costs include undetected privilege escalation, data exfiltration, and compliance failures—particularly burdensome in regulated sectors such as finance and healthcare across APAC.
Exploiting valid credentials: New attack vectors
Attackers no longer need to steal credentials outright. Prompt injection, tool misuse, or broken trust chains between agents can redirect autonomous workflows toward an attacker's goals.

Zhang warns: “You do not have to gain credentials. You need to know how to prompt it and understand the weakness of the prompt, the system prompt, and the user prompt, and give the wrong instructions to the agents. If the agent is not controlled, governed, or does not have the right guardrails, that is a huge risk to the whole chain of actions.”
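The kind of guardrail Zhang describes can be sketched as a deny-by-default tool allowlist: every action an agent proposes is checked against an explicit policy before it executes, so a manipulated prompt cannot reach tools the agent was never granted. The tool names and limits below are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical policy: only explicitly listed tools may run, each with
# its own constraints. Anything not listed is denied by default.
ALLOWED_TOOLS = {
    "search_docs": {"max_args": 1},
    "send_summary": {"max_args": 2},
}

@dataclass
class ToolCall:
    tool: str
    args: list

def authorise(call: ToolCall) -> bool:
    """Return True only if the proposed call is on the allowlist and
    within its declared limits; unknown tools are always refused."""
    policy = ALLOWED_TOOLS.get(call.tool)
    if policy is None:
        return False  # unknown tool: deny, regardless of what the prompt says
    return len(call.args) <= policy["max_args"]
```

A real deployment would add argument validation, rate limits, and audit logging, but the design choice is the same: the policy lives outside the model, so a prompt injection cannot rewrite it.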
In agent-to-agent ecosystems, the blast radius expands rapidly due to autonomous collaboration.
Mathew Graham, chief security officer for APAC at Okta, frames the challenge clearly: “The new perimeter is identity. Proving who the individual is—or, increasingly, what the AI agent is—has become the core of business security… Every one of these agents is a potential pathway into your environment. If we’re giving bots access to our data, we must secure them with the same governance standards we apply to our human workforce.”
Matt Caulfield, vice president of identity and Duo Security at Cisco, emphasises the need for evolution: organisations must treat agents as “a distinct identity class within IAM systems. Each agent should link to a human owner and operate with constrained permissions.” This approach ensures zero trust extends to action-level controls in Asia’s fast-moving digital economies.
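Treating agents as a distinct identity class, as Caulfield suggests, implies giving each one a first-class record with an accountable human owner, scoped permissions, and an expiry. The schema below is an illustrative sketch, not any vendor's actual model; all field names are assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """Illustrative identity record for an AI agent as a first-class
    identity class: owned, scoped, and short-lived by default."""
    agent_id: str
    human_owner: str      # every agent links to an accountable person
    scopes: frozenset     # constrained, explicitly granted permissions
    expires_at: datetime  # credentials expire rather than live forever

    def can(self, scope: str, now: datetime) -> bool:
        """Permit an action only if the identity is unexpired and the
        scope was explicitly granted."""
        return now < self.expires_at and scope in self.scopes

# Example: a support agent that may only read tickets for one hour.
agent = AgentIdentity(
    agent_id="agent-7",
    human_owner="owner@example.com",
    scopes=frozenset({"read:tickets"}),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
```

Because permissions are enumerated rather than inherited, the record doubles as an audit artefact: who owns the agent, what it may do, and when its access ends.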
Agent-to-agent trust: Rethinking governance
CISOs must redesign identity fabrics to support persistent, autonomous sessions. Zhang advocates starting with visibility: “Start to inventory the agents, AI agents, including non-human identities and human identities, because we all know that you cannot control, secure, or govern what you do not know.” Assigning dedicated identities, monitoring inter-agent traffic, and enforcing least privilege and just-in-time access are non-negotiable.
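The inventory step Zhang describes can start very simply: sweep known machine identities and flag those with no registered human owner or long dormancy, the "ghost agents" that governance later builds on. The data shape and thresholds below are hypothetical:

```python
# Minimal inventory sweep (illustrative): each entry records an
# identity, its accountable owner (if any), and days since last use.
inventory = [
    {"id": "agent-1", "owner": "alice@example.com", "last_seen_days": 2},
    {"id": "agent-2", "owner": None, "last_seen_days": 180},
    {"id": "svc-db-backup", "owner": "ops@example.com", "last_seen_days": 1},
]

def find_ghost_agents(identities, dormancy_limit_days=90):
    """Flag identities with no accountable owner, or dormant beyond
    the limit: the candidates for review, re-scoping, or removal."""
    return [
        i["id"]
        for i in identities
        if i["owner"] is None or i["last_seen_days"] > dormancy_limit_days
    ]
```

Even this crude pass surfaces the unknowns Zhang warns about; a production discovery tool would pull the same signals continuously from cloud, SaaS, and directory sources.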
The OWASP Top 10 for Agentic Applications 2026 highlights risks such as Agent Identity & Privilege Abuse and Insecure Inter-Agent Communication, urging organisations to treat agents as first-class citizens with verifiable intent, scoped permissions, and behavioural monitoring.
Turning AI into an identity security advantage
Saviynt and similar platforms demonstrate how AI can secure rather than undermine identity. Zhang notes there are “two sides of the coin here” in identity management for AI agents, with opportunities to leverage AI to improve application onboarding, replace traditional RPA bots that fail on minor changes, and generate clearer business descriptions for entitlements.
AI agents reason through issues and adapt, improving governance efficiency in dynamic Asian markets.
The investment that matters: Visibility now
For APAC leaders in 2026, the single most important step is gaining full visibility into agent inventories, ownership, and access. Zhang concludes that the priority for CISOs is clear: “The number one thing every CISO should think about is to act now. Start now in gaining visibility. That is the first step. If you do not start with the first step, then you may not be able to take the rest of the steps or follow-up actions.”
Without immediate action on inventory and ownership, shadow and ghost agents will proliferate unchecked.
As autonomous agents reshape business processes in Asia in 2026, identity governance has become the central control layer for safe interaction with data, applications, and other agents.
CISOs and CIOs who implement rigorous verification, visibility, and governance—guided by frameworks such as OWASP and supported by AI-powered platforms—will turn potential liabilities into competitive strengths.
Those who delay face the risk of uncontrolled proliferation and incidents on the scale of the Moltbook incident. Proactive agent IAM is no longer optional; it is the foundation for secure AI-driven growth in the region.
Click on the link to listen to Zhang elaborate on why agent IAM is the next identity crisis.
- What new identity security expectations should enterprises set when their own AI agents begin participating in always-on directories or cross-platform agent socialisation?
- Why will AI agents soon require the same robust identity verification frameworks as human users as they begin autonomously interacting across systems, platforms, and even external agent networks?
- What does the high-profile impersonation incident on Moltbook reveal about the immediate risks of unsecured machine identities in emerging agent-to-agent ecosystems, and how quickly could similar vulnerabilities scale in enterprise environments?
- Why do most organisations still lack basic visibility into the machine identities operating across their cloud, SaaS, and AI environments, even as agent adoption accelerates — and what are the hidden costs of this blind spot?
- How could attackers exploit AI agents that possess valid credentials to manipulate automated systems, exfiltrate data, or move laterally through infrastructure without triggering traditional security alerts?
- As the ratio of machine identities to human identities continues to explode (already exceeding 80:1 in many enterprises), how should CISOs rethink their entire identity fabric to accommodate persistent, autonomous agent sessions?
- Why is identity governance rapidly becoming the central control layer that will determine how safely AI systems can interact with sensitive data, applications, and other agents?
- What lessons from Moltbook’s rapid rise and security shortcomings should inform how organisations design least-privilege and just-in-time access policies specifically for agent-to-agent communication?
- How can AI-powered identity security platforms (like Saviynt) turn the very technology driving agent proliferation — discovery, continuous posture monitoring, and automated governance — into a competitive advantage rather than a liability?
- Looking ahead to 2027, when projections suggest AI agents may outnumber human users in many organisations, what single identity security investment will separate the leaders from those facing uncontrolled “ghost agent” risk?
