Artificial intelligence (AI) agents can lighten the load of cybersecurity teams facing fatigue as attacks increase, but they should not be left to their own devices to handle all tasks.
The right guardrails must be put in place to keep agentic AI safe for use, said Steve Wilson, chief AI and product officer at Exabeam.
Organisations need to be mindful about the decisions and tasks they want to delegate to AI agents, Wilson said in a video interview with FutureCISO.
For instance, AI agents that execute tasks with the same access privileges as human analysts should interact with little beyond those analysts, and have minimal engagement with the public, to reduce their risk of exposure and compromise.
As it is, 75% of organisations in Asia-Pacific believe AI is enabling insider threats to be more effective, according to an Exabeam study released this week. Another 69% anticipate insider threats will climb over the next 12 months, with 53% seeing insiders -- whether compromised or malicious -- as a bigger risk than external threat actors.
The report noted that GenAI (generative AI) is a major driver of insider threats, since the technology allows attackers to be “faster, stealthier” and tougher to detect.

“Insiders aren’t just people anymore,” said Wilson. “They’re AI agents logging in with valid credentials, spoofing trusted voices, and making moves at machine speed. The question isn’t just who has access -- it’s whether you can spot when that access is being abused.”
Three in five Asia-Pacific respondents said they have seen a “measurable increase” in insider threat incidents over the past year, the study further revealed.
In addition, 31% pointed to AI-powered phishing and social engineering as their biggest concern, while 18% cited privilege misuse or unauthorised access and 17% highlighted data exfiltration.
The use of unapproved GenAI tools, in particular, was high, with 64% reporting some level of unauthorised usage among employees. Some 12% regarded this as a top insider concern.
However, it has not deterred businesses from tapping AI to boost their security posture.
Most Asia-Pacific organisations, at 94%, say they have some form of AI as part of their insider threat management, including 37% that use user and entity behaviour analytics.
Keeping AI agents in their safe place
Deployed with the right safeguards, AI agents can bring much-needed relief to security analysts who are inundated with data and alerts, which they have to sift through to identify the real threats.
Behavioural analysis is the most effective way to defend against insider threats and compromised systems, Wilson said, and this has pushed companies to collect more log files and data to sharpen their ability to detect anomalies.
“There’s too much data,” he said.
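As a rough illustration of the baseline-and-deviation logic that behavioural analytics apply to such activity logs, the sketch below flags users whose latest activity departs sharply from their own history. The log format, field names, and threshold are assumptions for illustration, not any vendor's actual schema.

```python
# Minimal sketch of baseline-and-deviation scoring over activity logs.
# The (user, day, count) format and the threshold are illustrative
# assumptions, not a real product's schema.
from collections import defaultdict
from statistics import mean, stdev

def flag_anomalous_users(events, threshold=3.0):
    """Flag users whose latest daily event count deviates sharply from their own baseline.

    `events` is an iterable of (user, day, count) tuples; returns users whose
    most recent day's count sits more than `threshold` standard deviations
    above their historical mean.
    """
    history = defaultdict(list)
    for user, day, count in sorted(events, key=lambda e: e[1]):
        history[user].append(count)

    flagged = []
    for user, counts in history.items():
        if len(counts) < 8:          # need enough history to form a baseline
            continue
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (latest - mu) / sigma > threshold:
            flagged.append(user)
    return flagged
```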
A myriad of tools, including AI-powered ones, has also popped up on the market, and these do not necessarily help security teams manage the noise, he noted.
While 71% of executives in Asia-Pacific believe AI can significantly improve productivity across their cybersecurity teams, just 5% of security analysts agree, revealed another Exabeam study released earlier this year.
The report suggests the gap in perception indicates AI has not eliminated manual work but reshaped it, often without reducing the burden. Business executives primarily view AI for its potential to cut costs and streamline operations, while cybersecurity analysts have to contend with false positives and the ongoing need for human oversight.
“There’s no shortage of AI hype in cybersecurity, but ask the people actually using the tools, and the story falls apart,” said Wilson. “Analysts are stuck managing tools that promise autonomy, but constantly need tuning and supervision.”
“Agentic AI flips that script. It doesn’t wait for instructions; it takes action, cuts through the noise, and moves investigations forward without dragging teams down,” he said.
According to Wilson, Exabeam has built specialised agents that embed directly into SOC (security operations centre) workflows, creating AI-powered assistants that are custom-built to handle a case.
These include AI agents that are trained to analyse behavioural patterns, investigate, and report their findings. They can generate case summaries and threat analyses, and take steps to accelerate triage.
Intelligent as these agents may be, just 23% of Asia-Pacific organisations trust AI to act on its own, the Exabeam study found.
They are not wrong to be doubtful because AI agents, while more advanced than they were a year ago, still have some way to go before they should be given full autonomy, Wilson said.
AI agents today can autonomously run an investigation from start to end, generating a complete report detailing the key findings, including how the security incident started and the recommended actions to resolve the issue, he said.
“The thing we’re not doing yet, because I don’t trust them yet and nobody should, is [to let AI agents] completely take action based on the report, without a human in the loop,” he noted.
“We’re getting feedback on the investigation three to five times faster [because of AI agents], [but] ultimately a human needs to make the decision on the action to be taken,” he said.
Meanwhile, Exabeam continues to push forward in enhancing the technology, letting AI agents take on more actions autonomously, he noted. These actions are reversible and non-destructive, so the impact of a potentially wrong decision can be undone and contained, he said.
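One way to picture that reversible, non-destructive constraint is an action layer in which every step the agent takes must register an undo routine, so a wrong call can be rolled back. The class and method names below are a hypothetical sketch, not Exabeam's implementation.

```python
# Hypothetical sketch of a reversible action layer for an AI agent: every
# action must come with an undo step, so a wrong decision can be rolled
# back and contained. Not any vendor's actual design.

class ReversibleActionLog:
    def __init__(self):
        self._undo_stack = []

    def perform(self, description, do, undo):
        """Run `do()` only when a matching `undo()` is supplied, and remember it."""
        result = do()
        self._undo_stack.append((description, undo))
        return result

    def rollback(self):
        """Undo recorded actions in reverse order, e.g. after a human overrides."""
        while self._undo_stack:
            description, undo = self._undo_stack.pop()
            undo()

# Example: disabling a suspicious account is reversible, so it can be
# exposed to the agent; destructive actions would not be registered here.
actions = ReversibleActionLog()
actions.perform(
    "disable account jdoe",
    do=lambda: print("account jdoe disabled"),
    undo=lambda: print("account jdoe re-enabled"),
)
actions.rollback()
```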
Asked what needs to change for agentic AI to be effective in security, Wilson highlighted the ability to be more proactive.
Noting that this differs from being autonomous, he said AI agents should be able to seek out problems and look for ways to solve them.
They need to be collaborative and augment humans' ability to detect and resolve incidents faster, he added.
More privileges mean more risks
At the same time, there are new risks that must be managed, such as prompt injection and the potential for AI agents to be hijacked to perform unauthorised functions.
The more privileges given to agents, the more risks they can bring, Wilson said.
A GenAI copilot that is given access to your inbox, for instance, can read an email containing a prompt injection attack, carry out unauthorised actions instructed by the malicious message, and introduce vulnerabilities.
Agents may also be given access to sensitive data and, thus, carry the risk of enabling a data breach, Wilson said.
“So we need to be conscious about what decisions we want the agents to make…and [put] guardrails around them to keep them safe,” he said.
These guardrails, for example, can include restricting the agents' interaction with the external environment, which is neither consistent nor controlled.
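A minimal sketch of such a guardrail, assuming a hypothetical tool allow-list: the agent can only invoke pre-approved, low-risk tools, and any instruction that originated in untrusted external content, such as an email body, is blocked from driving higher-risk actions.

```python
# Hypothetical guardrail sketch: the agent may only invoke allow-listed,
# low-risk tools, and anything read from external sources (e.g. email
# bodies) is treated as untrusted data, never as instructions.

ALLOWED_TOOLS = {"summarise_alert", "enrich_ip", "open_ticket"}  # read-only or reversible

def guarded_tool_call(tool_name, requested_by_untrusted_content, **kwargs):
    """Refuse tool calls that are off the allow-list or driven by untrusted input."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not allow-listed for this agent")
    if requested_by_untrusted_content and tool_name != "summarise_alert":
        # An instruction that originated inside an email body, a classic
        # prompt-injection vector, cannot trigger anything beyond summarisation.
        raise PermissionError("untrusted content may not drive this action")
    return {"tool": tool_name, "args": kwargs, "status": "executed"}
```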
Wilson added that Exabeam continues to look at what SOC teams handle on a daily basis and how its tools can help augment that.
He pointed to the potential for its AI agents to be further expanded, so they can support tasks specific to an organisation’s requirements.
For instance, most companies would have a SOAR (security orchestration, automation, and response) platform to streamline and accelerate incident response.
AI agents can be built to carry out SOAR tasks that are typically triggered by humans, he said. These agents can work with existing automated scripts, create playbooks outlining new automation for new actions, and execute them automatically.
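A sketch of how that could look, with hypothetical playbook names and an approval hook standing in for a real SOAR integration: the agent proposes steps from its findings, while execution still passes through a human or policy gate.

```python
# Hypothetical sketch: an agent drafts playbook steps from its investigation
# findings, but execution still passes through an approval gate.

PLAYBOOK_LIBRARY = {
    "phishing": ["quarantine_email", "reset_credentials", "notify_user"],
    "malware": ["isolate_host", "collect_forensics", "open_ticket"],
}

def propose_playbook(finding_type):
    """Map an investigation finding to candidate playbook steps."""
    return PLAYBOOK_LIBRARY.get(finding_type, ["open_ticket"])

def run_with_approval(steps, approve):
    """Execute each proposed step only if the reviewer (human or policy) approves it."""
    executed = []
    for step in steps:
        if approve(step):
            executed.append(step)   # a real system would call the SOAR API here
    return executed

# Usage: auto-approve only reversible steps, escalate the rest to an analyst.
steps = propose_playbook("phishing")
done = run_with_approval(steps, approve=lambda s: s != "reset_credentials")
```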
Continued enhancement of security tools will be needed to defend AI as the technology is increasingly deployed across different use cases, he added.