Businesses will have to deal with additional risks and face malicious hackers who are increasingly cooperative, as both sides turn to artificial intelligence (AI) for impact.
There is a rich and collaborative underground environment, where cybercriminals alongside freelancers and lone rangers are all collaborating to enhance their attacks or work towards a common goal, said Teo Xiang Zheng, vice president of advisory at Ensign InfoSecurity.
Such activities were observed in 2024 and continued on into 2025. It is expected to accelerate in 2026, Teo said in a video call with FutureCISO.
It forms a dangerous scaffolding that facilitates state-sponsored actors with sophisticated missions and requirements, he said.
These groups can componentise their planned missions, and outsource or conscript smaller tasks to other lower-tier hackers, including brokers and ransomware-as-a-service attackers. Once completed, the different components are reconstituted, enabling state actors to launch sophisticated attacks.

In addition, the scale of reconnaissance has been amplified with AI and greater automation, Teo said. This allows malicious hackers to net not only a larger group of targets, but also deepen their research to uncover higher levels of vulnerabilities that can be exploited.
“All the recon helps them ascertain the highest success rate that they can achieve,” he said, adding that cybercriminals are further leveraging AI and automation to carry out general profiling of victims. For example, how much insurance victims have and how much money can be generated from them.
“Because AI and automation allow them to do so many things with limited resource and time, they’re in a better-informed state before launching attacks,” he said.
Amidst this threat landscape, organisations still are not doing a good enough job defending their infrastructures, he noted.
Digital supply chains also are becoming more complex and influenced by geopolitical and trade tensions, which further exacerbates matters.
As companies tweak their supply chains in response to the volatile geoeconomic market, some things are bound to fall through the cracks and new risks can surface in the transition, Teo said.
Humans carry risks, as do AI agents
Organisations also are rethinking their cybersecurity strategy as they adopt AI and build applications on LLMs (large language models).
There are ongoing conversations around security and whether existing solutions can allow companies to support and adopt AI safely, said Jennifer Cheng, Asia-Pacific Japan director of cybersecurity strategy at Proofpoint.
There also has been general consensus that humans are the primary source of risks, with users usually the primary target in attacks such as phishing.
Some of these risks will be transferred to AI agents, as these systems take on tasks and decisions traditionally done by humans, Cheng said in a video call with FutureCISO.
“We’re moving into an era where AI agents and humans work together, so [organisations] have to understand how to [work] that into their security,” she said.
Helpdesk remote access, for example, is commonly used to allow IT administrators to take control of a user’s screen and help troubleshoot. If such interactions are intercepted or initiated by a malicious hacker, human users may have some sensitivity and suspect something fishy is taking place, and cut the interaction.

However, in future, an AI agent, rather than a human, likely may be the first point of interaction and the former may not be as discerning. The AI agent will simply process the information and execute the task it is programmed to carry out, whether or not this contains a malicious threat.
“So the human and social engineering risks become prompt engineering risks,” Cheng said, adding that the scale of such attacks will increase significantly as agentic AI adoption grows.
Organisations will have to figure out how to deal with this future threat landscape, including determining what onboarding an agentic employee should entail, she said.
There is no industry standard, yet, on what it means to secure an AI agent or the data transacted with an AI agent, she noted.
Securing with training, rules for all
Businesses will need to figure out how to protect their AI agents and workloads from attacks, whilst worrying about how attackers are using AI.
This should start with governance and ensuring there is a governance framework in place, said Steve Ledzian, Asia-Pacific Japan CTO of Google Cloud Security and Mandiant.
They also need the technical controls to make sure their AI workloads are protected against attacks, such as prompt injection, Ledzian told FutureCISO.
AI agents can be social engineered, much like humans, he said, adding that guardrails and security measures must be implemented to mitigate such risks.
These include training AI agents and looking out for things that are manipulative, he said.
Identity management tools also should apply to AI agents, Teo said.
Identity is at the core when humans perform tasks, with the respective controls implemented to provide access according to what the role or task requires -- whether it is session-based or with specific privilege authorisation.
There should be similar controls for non-human identity, specifically AI agents, he said, noting that this often is overlooked by businesses.
“They let these agents have any access and do anything they want,” he said. “They understand the zero-trust concept when it comes to things [that are tangible], but forget all of it when it comes to AI.”
The risks can be especially significant as AI can perform at machine speed, so there is scale as well as speed in an attack, if the necessary security measures are not in place, he cautioned.

Teo further noted that enterprises are so anxious about not missing out on the AI bandwagon, and eager to exploit the technology for functional and productivity gains, that security ends up an afterthought.
In addition, their cybersecurity heads may not be trained to integrate AI ethics and responsibility, which require the expertise of legal or HR specialists, he said.
When companies rush to adopt AI, there will inevitably be slipups, resulting in data leaks and systems getting shut down, he added.
Organisations need to take the traditional lessons they learnt and apply them to AI and agentic AI, Ledzian said.
Identity management is critical, he said, echoing Teo’s comments. Companies have to determine what permissions each agent should have and how to transform their AI governance framework, he added.
“Identity has always been a core issue around security, so the question now is how to map this to an agent or agentic use case,” he said.
He pointed to industry groups such as the Coalition for Secure AI, that offer best practices and resources on securing AI deployments, including agentic AI.
Security controls help and having cyber insurance can reduce risks for companies, but nothing is a silver bullet, Ledzian said.
“So it’s important for organisations to think through how they will respond [in an attack], the decisions that need to be made, and the shared response across different departments, including legal,” he said.
He underscored the importance of carrying out tabletop exercises, which should include non-technical business decision-making in the event of a breach.
