Agentic artificial intelligence (AI) is garnering interest amongst enterprises eager to apply more advanced automation into their workflow, but it also comes with added risks that require a new approach to cybersecurity.
Specifically, a zero trust framework will be critical to ensure AI agents are working for -- and not against -- the organisation within which they are deployed.
Unlike traditional AI models, which depend on predefined rules and data, agentic AI operates with a degree of autonomy to solve complex multi-step problems. AI agents understand context and can, in real-time, autonomously analyse challenges, initiate actions, and make decisions based on pre-defined goals and data insights.
However, left to run without proper security safeguards, AI agents can be led astray by malicious threat actors and introduce vulnerabilities in corporate networks.
If compromised, these agents can perform unauthorised trades and wire transfers or schedule wrong surgeries and alter medical prescriptions, said Jay Chaudhry, CEO and founder of Zscaler.
These risks need to be managed as more businesses turn to agentic AI for productivity gains and higher efficiencies, including augmenting their cybersecurity operations, according to Chaudhry, who was speaking at the security vendor’s Zenith Live conference held this week in Prague, Czech Republic.
Autonomous agents can more quickly respond to threats than humans and proactively stave off intrusion attempts, but they also can be tapped to launch targeted attacks, in real-time, wrote Nataly Kremer, chief product officer for Check Point Software Technologies, in a recent post for the World Economic Forum.

“Using agents, [AI-powered adversaries] could execute without human input and bypass traditional defences entirely. When both attackers and defenders operate at microsecond intervals, the nature of cyber conflict transforms,” Kremer said. “The line between shield and sword has never been thinner.”
She noted that securing AI agents is fundamentally more difficult than securing traditional systems, because the former do not operate on static logic. The AI systems learn and evolve, taking actions based on dynamic data inputs, she said.
AI agents carry out tasks continuously and autonomously, hence, resulting in the need to manage and protect “new populations of non-human” identities and transactions, Kremer wrote.
“The volume, velocity, and variety of this activity demands new security models built for real-time orchestration and adaptability,” she said. “Prompt injection, LLM (large language model) jailbreaking, model integrity manipulation, and unpredictable agent behaviours are fundamentally shifting how we prepare for, monitor, and detect and respond to attacks."
As it is, less than half (45%) of organisations, say their cyber resilience strategy is updated to fend off modern attacks amidst the rise of AI, according to a Zscaler study, which polled 1,700 IT decision-makers across 12 global markets, including Singapore, Japan, India, Australia, and Germany.
This is despite 49% who believe their IT infrastructure is highly resilient and 94% who describe their cyber resilience measures as effective.
The good news is, almost 80% of IT security leaders in Asia-Pacific acknowledge their security practices need transformation, as both AI adoption and cyber threats climb.
Just 57% of IT security leaders in Asia-Pacific believe their security and compliance practices are ready for the deployment of AI agents, according to Salesforce’s latest State of IT report, released this week. The study surveyed more than 2,000 IT security leaders worldwide, including 588 from eight Asia-Pacific markets, which include Singapore, Indonesia, Thailand, India, and Australia.
While all respondents from this region believe AI agents can help improve at least one security concern, 57% do not think their organisation has the proper guardrails to deploy these agents. Half are concerned their data infrastructure is not set up to generate the most value out of agentic AI, the study reveals.
And while 82% believe AI agents provide compliance opportunities, such as helping organisations better adhere to global privacy laws, the respondents believe these agents also present compliance challenges. Such concerns are largely due to a complex and still-changing global regulatory landscape and compliance processes that remain prone to error and unautomated, according to Salesforce.
While 52% believe they can deploy AI agents in compliance with regulations, 85% say they have yet to fully automate their compliance processes.
Nonetheless, 45% say they already use agents in their daily operations, the Salesforce study reveals. The cloud software vendor predicts that this figure will grow by more than half in the next couple of years, with 74% expected to use AI agents by then.
This adoption will be fuelled by various use cases, including threat detection and auditing of AI model performance.
Salesforce is stressing the importance of a strong data infrastructure as well as good governance to facilitate agentic AI initiatives.
“Organisations can only trust AI agents as much as they trust their data,” said Gavin Barfield, the software vendor’s vice president and CTO of solutions for Asean. “Robust data governance isn’t optional, but essential. IT teams that establish strong data governance frameworks will find themselves uniquely positioned to harness AI agents for their security operations, all while ensuring data protection and compliance standards are met.”

Zero trust the way forward to an agentic era
So organisations will need to transform their cybersecurity strategy for an agentic AI era, and vendors, such as Zscaler, are advocating zero trust architectures as the way forward.
Zscaler, for instance, is touting its Zero Trust Exchange platform as a way to secure connections between users, devices, and applications from anywhere, with AI agents to be added to this mix.
It is akin to treating AI agents like individual users, where they each have an identity, said Phil Tee, Zscaler’s executive vice president and head of AI innovation, in an interview with FutureCISO, held on the sidelines of the Zenith Live conference.
Various security policies including communication and data access, then can be applied to these identities as the AI agents move across the corporate network, passing off one task to another, Tee explained.
These operate the same way they would in a zero trust infrastructure, he said, pointing to Zscaler’s Zero Trust Exchange, which serves as a “switchboard” to determine who -- including AI agents -- can access what based on the policies.
The platform helps minimise attack surface and prevent initial compromise, he noted. He added that Zscaler’s large data assets, fuelled by 500 trillion signals that it collects each day, provide valuable threat intel and attack patterns.
These insights help the vendor build more advanced detection methods and enhance its ability to identity more attack vectors, he said.
It also is working to integrate agentic AI capabilities into its platform and tools, and further automate work processes. This will help its customers speed up threat mitigation and response, Tee said.
Asked how companies should transform their cybersecurity approach, he stressed again the importance of a zero trust architecture.
It is not effective to simply secure the network perimeter, he said, adding that the castle-moat approach to security also is costly and complex to manage. Companies would have to constantly worry about keeping their firewalls updated and ensure their systems are configured correctly.
A zero trust architecture can reduce attack surface by shielding applications from the internet, where unauthorised users and devices cannot discover data or services they do not have permission to access, Tee said.
Microsoft, too, has pitched zero trust as the key foundation to facilitate an agentic workforce. The software vendor last month unveiled its Entra Agent ID, which applies identity and access management to AI agents.
This enables agents created within Microsoft Copilot Studio and Azure AI Foundry to automatically be assigned identities in a Microsoft Entra directory, it said in a statemebt. It added that partnerships with ServiceNow and Workday also allow automated provisioning of identities for these third-party agentic AI platforms on Microsoft Entra.
Agentic AI is gaining momentum due to its ability to marry LLMs with reasoning and drive outcomes, IDC’s group vice president of security and trust, Frank Dickson, said in the Microsoft statement.
“As we scale autonomous capabilities, identity becomes critical,” Dickson said, stressing the importance of robust authentication, access provisioning, “fine-grained" authorisation, and governance.
Good agents can bolster cybersecurity posture
Like Zscaler, tech players such as Cisco also have unveiled plans to include agentic AI capabilities in their products as well as help customers address potential risks brought about by the adoption of AI agents.
Cisco earlier this month announced said it was expanding its zero trust security solutions to include the ability to verify AI agents.
Agentic AI ushers in a fundamental shift and a new security paradigm is necessary to ensure companies can tap its potential safely, Cisco’s senior vice president and chief product officer, Raj Chopra, added in a blog post.
“The biggest hurdle to adoption will be how agents are given safe and secure access to enterprise resources…We are doing this by extending the same principles of zero trust to agentic AI,” Chopra wrote.
He also pointed to an "identity-first" approach and applying zero-trust policies to users, machines, services, and AI agents. “With this foundation, the system can continuously monitor behaviours to distinguish ‘normal’ from ‘abnormal’ in near real time, updating policies accordingly,” he added. “The future of AI is agentic, and with the right safeguards in place, it can also be secure.”
Used in cyberdefence, AI can reduce time to detect, respond, and recover from attacks, as well as help organisations stay ahead of cybercriminals, according to a May 2025 post by McKinsey. Automating lower-risk tasks with AI agents, such as routine system monitoring and compliance checks, also frees up security teams to focus on high priority threats.
The consulting firm added that “targeted automation” improves efficiency and enhances overall risk management. “In parallel, agentic AI is expected to accelerate security operations centre (SOC) automation, where AI agents could soon work alongside humans in a semi-autonomous manner to identify, think through, and dynamically execute tasks, such as alert triage, investigation, response actions, or threat research.”
The consultancy expects more than 90% of AI capabilities in cybersecurity to come from third-party providers, which would make it easier for businesses to adopt such tools as they upgrade their security applications.
It added that AI is being integrated into product offerings, such as zero trust capabilities and identity management.
A central platform to manage AI security
Check Point’s Kremer also suggested the need for an AI operating system for cybersecurity, which would provide real-time situational awareness of users, applications, data, and threats across the organisation.
This platform could operate as a virtual administrator that understands every employee’s intent, behavioural history, and risk profile. It can detect and anticipate change, acting with context and accuracy, she said.
The operating system also would need to study external threat intelligence, in additional to internal data, and be able to adapt to evolving risks and tweak defence measures based on external events, she noted.
Tee said Zscaler provides this through its Zero Trust Exchange, or what he describes as an "agentic OS”. He added that the platform works as a multi-agent framework, where users can plug in their own agents and these are centrally managed based on policies and guardrails set by their organisation.
With the agentic AI space still rapidly evolving, Tee said Zscaler will continue to add new features and tools, including support for industry protocols, such as MCP (Model Context Protocol).
An open source framework developed by Anthropic, MCP offers a standardised way for AI agents or assistants to communicate with other systems, including data repositories and business tools.
Kremer, too, pointed to emerging AI-native protocols, including MCP and A2A (agent-to-agent communication), as key first steps towards building a more robust security posture for an AI era.
“Agentic AI can learn from every attack, adapt in real-time and prevent threats before they spread,” she noted. “It has the potential to establish a new era of cyber resilience, but only if we seize this moment and shape the future of cybersecurity together.”