• About
  • Subscribe
  • Contact
Thursday, December 18, 2025
    Login
FutureCISO
  • People
  • Process
  • Technology
  • Resources
    • White Papers
    • PodChats
  • Events
No Result
View All Result
  • People
  • Process
  • Technology
  • Resources
    • White Papers
    • PodChats
  • Events
No Result
View All Result
FutureCISO
No Result
View All Result
Home Process Compliance and Governance

PodChats for FutureCISO: What needs to happen for AI to deliver on its promises in 2026

Allan Tan by Allan Tan
December 18, 2025
PodChats for FutureCISO: What needs to happen for AI to deliver on its promises in 2026

Photo by Kampus Production: https://www.pexels.com/photo/person-holding-a-papers-and-box-7843991/

Share on FacebookShare on Twitter

As we approach 2026, the promise of artificial intelligence across Southeast Asia and Hong Kong is palpable, driven in part by aspirations for unparalleled efficiency and innovation. Yet, for AI to truly deliver on this promise for business leaders, a critical threshold of trust and security must be crossed.

The emergence of agentic AI—autonomous systems that can act independently, access data, and execute tasks with minimal human intervention—represents both the pinnacle of this potential and its greatest peril.

Against this backdrop of digital acceleration and regulatory complexity, securing agentic systems from data breaches and operational disruption has become urgent—no longer a speculative concern, but the definitive security mandate for 2026.

The journey from hype to secured value depends on the governance, design, and vigilance we enact today.

Ray Canzanese, senior director of Netskope Threat Labs, puts it starkly: “You’re not just generating content, you’re executing things, you’re taking actions… you’re worried that [the AI] is going to do the wrong thing.”

This shift from generative to agentic AI fundamentally alters organisations' cyber risk profile, introducing autonomous actors that operate without a moral compass or contextual awareness—simply pursuing their programmed objective with relentless focus.

Containment, not just compliance

In response to evolving regional guidance—such as Singapore’s Model AI Governance Framework (updated 2024), Malaysia’s National AI Roadmap, and the Philippines’ recently issued AI policy principles—many CISOs have prioritised bias, disinformation, and ethical alignment.

Ray Canzanese

Canzanese emphasises that for agentic AI, the main concern is containment: "The goal… must be containment because you don’t want your agents doing the wrong thing."

This is not merely a technical tweak but a strategic reorientation. Traditional automation follows a “recipe”; agentic AI follows a goal.

As Canzanese illustrates the idea with the children’s book Henry’s Awful Mistake, where a boy destroys his entire house to eliminate a single ant, agentic systems may achieve their objective through destructive or unintended means. For security leaders, this means guardrails must be embedded at the infrastructure layer—not just in policy documents.

The dual-front threat landscape

Asia’s linguistic and cultural diversity, once a barrier to cross-border cyber campaigns, is now being neutralised by AI. Canzanese notes: “You’ve got a region where there’s lots of different languages being spoken… and now you have tools that can help you overcome that.”

Adversaries can generate locally resonant phishing lures, forge voice or video impersonations, and orchestrate persistent, tailored attacks with minimal effort.

Simultaneously, internally deployed agents become new attack surfaces. Connected to sensitive systems and data, they introduce risks such as prompt injection, misconfiguration, and unmonitored data flows.

According to Canzanese, “You’ve got all these tools that you’re interconnecting so that your agent can do the job. Who’s it talking to? Where does it get instructions from? Is it vulnerable to prompt injection?”

Related:  PodChat for FutureCISO: Architecting security for an unknown future

This dual-front reality—external adversaries wielding agentic tools, and internal agents acting unpredictably—demands a rethinking of traditional SecOps.

The looming 2026 breach: Accidental and adversarial

Experts, including Gartner, warn that 2026 may mark the first major data breach caused by agentic AI (autonomous systems capable of making decisions and acting without direct human involvement). Canzanese identifies two likely pathways:

Accidental exposure: Mirroring early cloud misconfigurations, organisations will unintentionally expose agents or their underlying Model Context Protocol (MCP) servers to the public internet. “It’s going to be a sophisticated defender… just somebody scanning the internet looking for things that are left wide open,” he explains. MCP—a protocol enabling agents to connect to data sources—is rapidly gaining adoption but remains in its infancy, with weak default security postures.

Indirect prompt injection: Attackers will embed malicious instructions in seemingly benign inputs—support tickets, invoices, or document metadata—tricking agents into leaking data or executing harmful actions. Critically, “the AI is going to feel good about itself… it just did what it was supposed to do,” he explains.

Both scenarios show that breaches may result from agents dutifully following poisoned or ambiguous instructions, rather than from direct code exploits.

Reimagining least privilege for AI agents

Zero Trust principles remain essential, but Canzanese urges a more stringent approach: “Treat the agents… almost like they’re adversarial outsiders. They have such a stronger propensity to do harm. They don’t have a moral compass.”

The practical solution lies in abstraction. Rather than configuring least privilege—restricting system and data access to the minimum required per agent, which is a futile task at scale —CISOs should secure the data access layer itself.

Canzanese advises, “Configure your MCP servers with tight controls: only appropriate columns, rows, and queries are accessible.” This centralises oversight and supports secure innovation.

Supply chain risks multiply

As enterprises across Asia use a mix of global cloud AI platforms (e.g., AWS Bedrock, Azure OpenAI, Google Vertex AI) and local providers, third-party risk intensifies.

Canzanese recommends treating AI vendors like SaaS providers: apply rigorous due diligence on data residency, compliance, and security accreditations. Yet, he cautions that frameworks alone are insufficient: “You must monitor… what data is flowing in and out of those apps.”

Anomalous API usage—such as vendor keys suddenly accessed from an unexpected country—must trigger alerts. Continuous monitoring, not just pre-contract questionnaires, is non-negotiable.

SecOps must catch up to shadow AI

According to Netskope’s internal findings cited by Canzanese, “15% of organisations were already building AI agents in frameworks like Bedrock… That means a lot of this stuff is already happening.”

This “shadow AI” operates outside traditional visibility, with logs often not fed into SIEM or behavioural analytics platforms.

CISOs must urgently inventory existing agent deployments and ensure telemetry integration. “It’s going to be very similar to an insider threat detection scenario,” says Canzanese. Behavioural baselines for agents—just like for human users—must be established to detect deviations in real time.

Related:  How organisations should transform their cybersecurity strategy for agentic AI

Speaking the language of business risk

To gain board-level buy-in, CISOs should frame agentic AI risk through a familiar lens. “Talk about it like an insider threat risk… but these employees are crazy,” Canzanese suggests. The message isn’t “no”—it’s “yes, safely.” As he puts it: “We could just unplug the computer now… but that’s not enabling the business.”

The goal is controlled automation: “If the AI agent automates 90% of something and we must deal with the 10% for now, that’s still a huge win.”

The 2026 readiness metric: Guardrails + Testing

Success in 2026 won’t be measured solely by deployment speed. Canzanese identifies a clear benchmark: “It’s going to be about the guardrails… and testing.”

Organisations ready to scale will have implemented comprehensive controls across permissions, data access, prompt security (measures that prevent agents from being manipulated by instructions), and cost boundaries—and will have subjected agents to rigorous, human-supervised testing before granting full autonomy.

Listen to the PodChats player now to hear Canzanese’s essential steps for securing AI and unlocking its full potential in 2026.

  1. What is the most interesting observation you’ve seen in 2025?
  2. As ASEAN releases its AI Guide and regional regulations evolve, what should be the priority for a CISO building a governance framework for agentic AI in 2026?
  3. Why does agentic AI fundamentally change the cyber risk profile for an organisation, and how does this exacerbate threats in our interconnected Southeast Asian business landscape?
  4. You’ve suggested the first major agentic AI-driven data breach could occur in 2026. What might a typical attack chain look like, targeting a poorly secured agent in a multinational based in Singapore or Hong Kong?
  5. The principle of least privilege is challenging with dynamic AI agents. What are the practical steps for security leaders to implement effective permission models without stifling innovation?
  6. How can frameworks like the Model Context Protocol (MCP) be leveraged to enforce a 'security-by-design' approach for AI agents, and is the industry in our region adopting them quickly enough?
  7. With organisations here often using a mix of global and local AI providers, how should we approach the unique third-party and supply chain risks introduced by agentic AI ecosystems?
  8. Beyond technical controls, what changes in day-to-day security operations (SecOps) are needed to monitor and respond to anomalous agent behaviour in real-time?
  9. How can CISOs effectively communicate the tangible business risks—and the secured value—of agentic AI to boards, CFOs, and COOs eager for competitive advantage?
  10. Looking ahead to 2026, what one metric will indicate that an organisation in our region has successfully secured its agentic AI initiatives and is ready to scale?
Tags: agentic AIAI governanceNetskope Threat LabsPodChat
Allan Tan

Allan Tan

Allan is Group Editor-in-Chief for CXOCIETY writing for FutureIoT, FutureCIO and FutureCFO. He supports content marketing engagements for CXOCIETY clients, as well as moderates senior-level discussions and speaks at events. Previous Roles He served as Group Editor-in-Chief for Questex Asia concurrent to the Regional Content and Strategy Director role. He was the Director of Technology Practice at Hill+Knowlton in Hong Kong and Director of Client Services at EBA Communications. He also served as Marketing Director for Asia at Hitachi Data Systems and served as Country Sales Manager for HDS’ Philippines. Other sales roles include Encore Computer and First International Computer. He was a Senior Industry Analyst at Dataquest (Gartner Group) covering IT Professional Services for Asia-Pacific. He moved to Hong Kong as a Network Specialist and later MIS Manager at Imagineering/Tech Pacific. He holds a Bachelor of Science in Electronics and Communications Engineering degree and is a certified PICK programmer.

No Result
View All Result

Recent Posts

  • PodChats for FutureCISO: What needs to happen for AI to deliver on its promises in 2026
  • AI security fabric is a step towards safe AI implementation
  • Over 90% of CISOs emphasise importance of OT/IT security convergence
  • From data loss to data security: Why traditional DLP Is no longer enough
  • Navigating modern development with Advanced AI security tools

Categories

  • Blogs
  • Compliance and Governance
  • Culture and Behaviour
  • Cybersecurity careers
  • Data Protection
  • Endpoint Security
  • Incident Response
  • Network Security
  • People
  • Process
  • Resources
  • Risk Management
  • Technology
  • Training and awarenes
  • Videos
  • Webinars and PodChats
  • White Papers

Strategic Insights for Chief Information Officers

FutureCISO serves the interests of the Chief Information Security Officer (CISO) and the information security profession. Its purpose is to provide relevant and timely industry insights around all things important to security professionals and organisations that recognize and value the importance of protecting the organisation’s data and its customers’ privacy.

Cxociety Media Brands

  • FutureIoT
  • FutureCFO
  • FutureCIO

Categories

  • Privacy Policy
  • Terms of Use
  • Cookie Policy

Copyright © 2024 Cxociety Pte Ltd | Designed by Pixl

Login to your account below

or

Not a member yet? Register here

Forgotten Password?

Fill the forms bellow to register

All fields are required. Log In

Retrieve your password

Please enter your username or email address to reset your password.

Log In
No Result
View All Result
  • People
  • Process
  • Technology
  • Resources
    • White Papers
    • PodChats
  • Events
Login

Copyright © 2024 Cxociety Pte Ltd | Designed by Pixl