Touted to have the reasoning capabilities to take over human tasks, AI agents can also end up sharing the title of being the weak link in security, if organisations do not exercise the necessary caution.
Companies have been turning to automation in a bid to stem human error, often perceived to be the weakest link in cybersecurity.
Agentic artificial intelligence (AI) is tipped to be the next wave in such advancements, but it can open up additional security holes as businesses rush to adopt without first studying the potential risks.
Non-human identities, such as AI agents, represent a higher risk -- at 52% -- than human users, at 37%, revealed Tenable’s Cloud and AI Security Report 2026. The study analysed telemetry from public cloud and enterprise environments to identify security gaps. These include measuring risks based on the types of assets that can be compromised and combining that with the criticality of the issue, such as an AI agent or web application having excessive permission.
The report noted that 18% of organisations have granted AI services administrative permissions that are rarely audited, giving a “pre-packaged” basket of privileges for attackers to target.
Another 65% hold “ghost” confidential data, and unused or unrotated cloud credentials, including 17% that are linked to critical administrative privileges.
The findings highlight a “critical tension”, Tenable stated. “While teams are rushing to integrate AI and leverage third-party code, they are inadvertently creating direct, unmonitored paths to sensitive data.”

Access that must be managed
The lack of visibility and governance puts companies at bigger risk of new exposures, including over-privileged identities on the cloud, said Liat Hayun, Tenable’s senior vice president of research and product management, in a video call with FutureCISO.
She pointed to a disconnect between how quickly AI is being adopted in organisations and how quickly these companies are at securing it.
“That is where we see the most exposure,” Liat said, noting that AI agents are adding to the problem.
They further expand the attack surface, creating even more opportunities for misconfigurations, exposure to the internet, and vulnerabilities, she said.
AI agents also have a level of autonomy that enterprises are not yet used to.
Agents can trigger decisions, notifications, and access data, creating new levels of exposures and gaps previously not seen from other products, she noted.
As it is, human users often are given excessive permissions and access to sensitive data that they do not necessarily need for their roles.
This has only increased with non-human identities, resulting in more concerns about privilege access management, Liat said.
Pointing to ghost credentials, she noted that new user accounts or agents are created to facilitate tests or demos. These are often forgotten, but continue to retain their credentials and remain connected to systems to which they have access, including data, she said.
It means there are credentials that are not deleted, even though they are no longer in use, and are not properly managed or monitored, even when they may have access to confidential information within an organisation. Their passwords, for instance, may not be changed.
Clearly, they present significant risks if malicious attackers gain access to these ghost accounts, Liat said.
Manage potential risks to reap agentic benefits
Meanwhile, 86% of cybersecurity heads already are concerned agentic AI will increase the realism and effectiveness of social engineering attacks, according to Splunk’s CISO Report, which polled 650 chief information security officers globally last July and August, including from Singapore, India, Japan, Australia, and the UK.
Another 82% of respondents are worried AI agents will increase deployment speed and complexity of persistence mechanisms, making it more difficult to remove them from compromised systems. In comparison, 19% have similar concerns about non-agentic AI.
With agentic AI the newest and most complex technology, it is deemed to have the highest risk. Just 6% of CISOs are using it, while 39% are starting to explore its potential, the study found.
At the same time, though, CISOs believe AI agents can provide support to their team, with 82% anticipating agentic AI will increase the volume of data reviewed, Splunk’s report found.
Another 82% believe the technology will boost correlation and response speeds, including 39% who agree agentic AI has more than doubled their team’s reporting speed.
Agentic AI offers capabilities and additional tools that employees can rely on, said Robert Pizzari, Splunk’s Asia group vice president.
These benefits need to be balanced with the systemic risks, as the technology continues to evolve, Pizzari told FutureCISO via a video call.
CISOs who have adopted agentic AI already are realising significant gains in speed, he said. It also gives their team the ability to handle more, as it reduces time spent on repetitive tasks.
He added that CISOs believe agentic will continue to transform SecOps (security operations), though, they also recognise there are inherent risks with autonomous systems.
This highlights the need to ensure there is human oversight and governance, he said.
In fact, 96% of CISOs say they are responsible for AI governance and risk management, according to the Splunk study.

“Without a doubt, the CISO’s mandate likely will expand to include responsibilities around AI governance and risk management,” Pizzari said.
This cannot be achieved independently, but with deep integration and accountability across the board level, C-suite, and lines of business, he said.
This is necessary to determine the organisation’s risk profile and what it will -- and will not -- accept, as some tradeoffs will have to be made, he added.
There needs to be deeper collaboration with CIOs and CTOs, and CISOs also will have to work closely alongside chief legal and finance officers, amongst others, he said.
Redesigning controls around non-human identities
And with the number of AI agents expected to climb and outnumber employees, organisations will have to relook how they manage identities and this hybrid workforce, Liat said.
Topmost, they need to understand where all identities come from and adapt to the fact that identities and permissions now can be created in ways that are new to them.
Guardrails and zero-trust strategies were designed around people, such as giving them only permissions they needed in their role, and on containing the blast radius should an identity be compromised. Tools also were put in place to identify any anomalies if applications or systems operated out of scope.
With AI agents added to the mix, these mechanisms now are applied to the agents, which are given access to databases and information from various sources, including external systems.
This creates collusions that organisations now are struggling to manage. Some may not even know where exactly they have AI agents operating, Liat said.
Amidst the rapid adoption of AI, and with copilot tools that allow AI agents to be more easily created, businesses are overlooking the need to create a DevOps workflow for agents, as they typically would for human users.
“It’s accelerated the [exposure] gap we’re seeing now,” Liat said.
She stressed the need for organisations to map out and understand what their agents are doing, including where they have access and what tasks they are responsible for.
This will enable enterprises to understand the blast area and contain the impact, in the event of a compromise, she noted.
Potential to be the weaker link
Like humans, AI agents also are susceptible to social engineering attacks, which can lead them to bypass safety guardrails, she said.
Safety mechanisms such as model refusal, for instance, can be implemented to safeguard LLMs (large language models) against generating responses that are deemed illegal, unethical, or harmful.
However, hackers can circumvent such technical guardrails with prompt injection, exploiting AI agents’ inability to discern legitimate user prompts from malicious attacks.
Just as employees need to be trained to identify social engineering attacks and trusted to do the right thing, similar efforts should be applied to “code” that level of trust in AI agents, Liat urged.
“Have a good understanding of the landscape and what agents you have in your environment,” she advised, adding that this includes shadow AI tools that are in used.
She noted that permissions also can be inherited, where agents may take actions on the user’s behalf, such as code assistance and agenda summarisation.
“So you need to understand the tools and mechanisms that create the identities, and the ways those identities can be leveraged,” she said. “Most organisations would already have the right tools to control and manage those identities, but their mindset will need to [readapt] to apply these tools in an AI agentic environment.”
While companies typically will not need new tools, they will have to apply the right frameworks and processes to properly manage agentic identities, Liat said.
“Start with discovery. Visibility is the foundation of security,” she said. This is particularly critical as more AI services and applications emerge, with advancements in the technology space accelerating rapidly.
Having the tools to keep up and provide visibility into a company’s AI landscape is critical to facilitate governance, she added. This will ensure companies can identify the gaps and where new tools are needed, she said.
CISOs also are prioritising human capital even with the increasing adoption of AI, Pizzari said, pointing to findings from the Splunk study.
Some 60% disagree that agentic AI will replace some level 1 security team functions, he noted.
CISOs expect AI agents to augment their team’s efficiency and accuracy, not replace them, he said.
