• About
  • Subscribe
  • Contact
Wednesday, February 11, 2026
  • Login
FutureCISO
  • People
  • Process
  • Technology
  • Resources
    • White Papers
    • PodChats
  • Events
No Result
View All Result
  • People
  • Process
  • Technology
  • Resources
    • White Papers
    • PodChats
  • Events
No Result
View All Result
FutureCISO
No Result
View All Result
Home Artificial Intelligence

Organisations must go deeper as AI, cybercriminals increase collaboration

Eileen Yu by Eileen Yu
February 11, 2026
Photo by Pixabay: https://www.pexels.com/photo/deadlock-with-key-on-hole-279810/

Photo by Pixabay: https://www.pexels.com/photo/deadlock-with-key-on-hole-279810/

Share on FacebookShare on Twitter

Businesses will have to deal with additional risks and face malicious hackers who are increasingly cooperative, as both sides turn to artificial intelligence (AI) for impact.

There is a rich and collaborative underground environment, where cybercriminals alongside freelancers and lone rangers are all collaborating to enhance their attacks or work towards a common goal, said Teo Xiang Zheng, vice president of advisory at Ensign InfoSecurity.

Such activities were observed in 2024 and continued on into 2025. It is expected to accelerate in 2026, Teo said in a video call with FutureCISO.

It forms a dangerous scaffolding that facilitates state-sponsored actors with sophisticated missions and requirements, he said.

These groups can componentise their planned missions, and outsource or conscript smaller tasks to other lower-tier hackers, including brokers and ransomware-as-a-service attackers. Once completed, the different components are reconstituted, enabling state actors to launch sophisticated attacks.

Teo Xiang Zheng

In addition, the scale of reconnaissance has been amplified with AI and greater automation, Teo said. This allows malicious hackers to net not only a larger group of targets, but also deepen their research to uncover higher levels of vulnerabilities that can be exploited.

“All the recon helps them ascertain the highest success rate that they can achieve,” he said, adding that cybercriminals are further leveraging AI and automation to carry out general profiling of victims. For example, how much insurance victims have and how much money can be generated from them.

“Because AI and automation allow them to do so many things with limited resource and time, they’re in a better-informed state before launching attacks,” he said.

Amidst this threat landscape, organisations still are not doing a good enough job defending their infrastructures, he noted.

Digital supply chains also are becoming more complex and influenced by geopolitical and trade tensions, which further exacerbates matters.

As companies tweak their supply chains in response to the volatile geoeconomic market, some things are bound to fall through the cracks and new risks can surface in the transition, Teo said.

Humans carry risks, as do AI agents

Organisations also are rethinking their cybersecurity strategy as they adopt AI and build applications on LLMs (large language models).

There are ongoing conversations around security and whether existing solutions can allow companies to support and adopt AI safely, said Jennifer Cheng, Asia-Pacific Japan director of cybersecurity strategy at Proofpoint.

There also has been general consensus that humans are the primary source of risks, with users usually the primary target in attacks such as phishing.

Related:  CrowdStrike extends Falcon to protect AI interactions

Some of these risks will be transferred to AI agents, as these systems take on tasks and decisions traditionally done by humans, Cheng said in a video call with FutureCISO.

“We’re moving into an era where AI agents and humans work together, so [organisations] have to understand how to [work] that into their security,” she said.

Helpdesk remote access, for example, is commonly used to allow IT administrators to take control of a user’s screen and help troubleshoot. If such interactions are intercepted or initiated by a malicious hacker, human users may have some sensitivity and suspect something fishy is taking place, and cut the interaction.

Jennifer Cheng

However, in future, an AI agent, rather than a human, likely may be the first point of interaction and the former may not be as discerning. The AI agent will simply process the information and execute the task it is programmed to carry out, whether or not this contains a malicious threat.

“So the human and social engineering risks become prompt engineering risks,” Cheng said, adding that the scale of such attacks will increase significantly as agentic AI adoption grows.

Organisations will have to figure out how to deal with this future threat landscape, including determining what onboarding an agentic employee should entail, she said.  

There is no industry standard, yet, on what it means to secure an AI agent or the data transacted with an AI agent, she noted.

Securing with training, rules for all

Businesses will need to figure out how to protect their AI agents and workloads from attacks, whilst worrying about how attackers are using AI.

This should start with governance and ensuring there is a governance framework in place, said Steve Ledzian, Asia-Pacific Japan CTO of Google Cloud Security and Mandiant.

They also need the technical controls to make sure their AI workloads are protected against attacks, such as prompt injection, Ledzian told FutureCISO.

AI agents can be social engineered, much like humans, he said, adding that guardrails and security measures must be implemented to mitigate such risks.

These include training AI agents and looking out for things that are manipulative, he said.

Identity management tools also should apply to AI agents, Teo said.

Related:  Ransomware crisis: 59% of Asia/Pacific enterprises hit in 2023

Identity is at the core when humans perform tasks, with the respective controls implemented to provide access according to what the role or task requires -- whether it is session-based or with specific privilege authorisation.

There should be similar controls for non-human identity, specifically AI agents, he said, noting that this often is overlooked by businesses.

“They let these agents have any access and do anything they want,” he said. “They understand the zero-trust concept when it comes to things [that are tangible], but forget all of it when it comes to AI.”

The risks can be especially significant as AI can perform at machine speed, so there is scale as well as speed in an attack, if the necessary security measures are not in place, he cautioned.

Steve Ledzian

Teo further noted that enterprises are so anxious about not missing out on the AI bandwagon, and eager to exploit the technology for functional and productivity gains, that security ends up an afterthought.

In addition, their cybersecurity heads may not be trained to integrate AI ethics and responsibility, which require the expertise of legal or HR specialists, he said.

When companies rush to adopt AI, there will inevitably be slipups, resulting in data leaks and systems getting shut down, he added.

Organisations need to take the traditional lessons they learnt and apply them to AI and agentic AI, Ledzian said.

Identity management is critical, he said, echoing Teo’s comments. Companies have to determine what permissions each agent should have and how to transform their AI governance framework, he added.

“Identity has always been a core issue around security, so the question now is how to map this to an agent or agentic use case,” he said.

He pointed to industry groups such as the Coalition for Secure AI, that offer best practices and resources on securing AI deployments, including agentic AI.

Security controls help and having cyber insurance can reduce risks for companies, but nothing is a silver bullet, Ledzian said.

“So it’s important for organisations to think through how they will respond [in an attack], the decisions that need to be made, and the shared response across different departments, including legal,” he said.

He underscored the importance of carrying out tabletop exercises, which should include non-technical business decision-making in the event of a breach.

Tags: Artificial Intelligencecybersecuritygenerative AI
Eileen Yu

Eileen Yu

Eileen is currently an independent tech journalist and content specialist, providing analysis of key market developments across the Asian region and helping enterprises craft their communications plan. She also moderates panel discussions and roundtables, as well as provides media training to help senior executives better manage press interviews. Eileen has worked with corporate clients in markets, such as cybersecurity and enterprise software, and non-tech including financial services and logistics. She also has planned high-level panel and roundtable discussions and has been an invited speaker on online media. On CXOCIETY, she contributes articles across the four CXOCIETY brands -- FutureCIO, FutureCISO, FutureIoT, and FutureCFO -- covering key industry developments impacting the Asia-Pacific region, including cybersecurity, AI, data management, governance, workforce modernisation, and supply chain. Eileen has more than 25 years of industry experience at established media platforms, including ZDNET in Singapore, where she led the tech site's Asian editorial team and blogger network. Before her stint at ZDNET, she was assistant editor at Computer Times for Singapore Press Holdings and deputy editor of Computerworld Singapore. With her extensive industry experience, Eileen has navigated discussions on key trending topics including cybersecurity, artificial intelligence, quantum computing, edge/cloud computing, and regulatory policies. Eileen trained under the Journalism department at The University of Queensland, Australia. There, she earned a Bachelor of Arts (Honours) degree in Journalism, with a thesis titled, To Censor or Not: The Great Singapore Dilemma.

No Result
View All Result

Recent Posts

  • Organisations must go deeper as AI, cybercriminals increase collaboration
  • PodChats for FutureCISO: From Bias to Boardroom
  • Commvault Geo Shield empowers confident cloud adoption
  • Singapore sees 17% increase in cyber threats as global attacks soar
  • Cohesity unveils advanced identity threat detection capabilities

Categories

  • Artificial Intelligence
  • Blogs
  • CISO
  • CISO strategies
  • Cloud, Platforms and Ecosystems
  • Cloud, Virtualization, Operating Environments and Middleware
  • Compliance and Governance
  • Compliance and Governance
  • Compliance and Governance|People
  • Compliance and Governance|Technology
  • Computer, Storage, Networks, Connectivity
  • Culture and Behaviour
  • Culture and Behaviour|People
  • Cyber risk management
  • Cyber risk management
  • Cyberattacks and data breaches
  • Cybersecurity careers
  • Cybersecurity careers
  • Cybersecurity operations
  • Cybersecurity operations
  • Data Protection
  • Data Protection
  • Endpoint Security
  • FutureCISO
  • Governance, Risk and Compliance
  • Governance, Standards and Regulations
  • Incident Response
  • Network Security
  • People
  • Process
  • Remote work
  • Resources
  • Risk Management
  • Risk Management
  • Security
  • Technology
  • Training and awarenes
  • Videos
  • Vulnerabilities and threats
  • Vulnerabilities and threats
  • Webcasts/Podcasts
  • Webinars and PodChats
  • White Papers

Strategic Insights for Chief Information Officers

FutureCISO serves the interests of the Chief Information Security Officer (CISO) and the information security profession. Its purpose is to provide relevant and timely industry insights around all things important to security professionals and organisations that recognize and value the importance of protecting the organisation’s data and its customers’ privacy.

Cxociety Media Brands

  • FutureIoT
  • FutureCFO
  • FutureCIO

Categories

  • Privacy Policy
  • Terms of Use
  • Cookie Policy

Copyright © 2024 Cxociety Pte Ltd | Designed by Pixl

Login to your account below

or

[wpli_login_link]

Not a member yet? Register here

Forgotten Password?

Fill the forms bellow to register

All fields are required. Log In

Retrieve your password

Please enter your username or email address to reset your password.

Log In
No Result
View All Result
  • People
  • Process
  • Technology
  • Resources
    • White Papers
    • PodChats
  • Events
  • Login

Copyright © 2024 Cxociety Pte Ltd | Designed by Pixl