At the recently concluded 6th Annual FutureCFO Conference in the Philippines, CFOs raised concerns about the cybersecurity exposure their organisations face as they embed artificial intelligence (AI) into workflows – not just across the finance function but throughout the enterprise.
As enterprises across Asia accelerate the integration of AI into operational workflows, a critical tension has emerged. The same technology that promises unprecedented productivity is also introducing a layer of security complexity that traditional governance models were never designed to handle.
For CISOs in fast-moving economies like Singapore, India, and Indonesia, the challenge is no longer whether to adopt AI, but how to secure it without stifling innovation. In 2026, this requires a fundamental shift in strategy—from reactive risk management to proactive, human-centric governance.
Innovation outpacing oversight
The rapid integration of AI into enterprise workflows is creating a double-edged security landscape. According to Harman Kaur, senior vice president of technology strategy and AI at Tanium, AI is now simultaneously weaponised to execute sophisticated cyberattacks and deployed as a defensive tool to prevent them. However, the pace of innovation is outstripping governance.

"Currently, the pace of AI innovation is fast outpacing security governance. Governance is being treated as an afterthought rather than a foundational requirement." Harman Kaur
This trend is amplified by boardroom pressure. Kaur notes that corporate boards are increasingly eager to showcase AI initiatives, often prioritising rapid adoption over pausing to address potential data leakage or security vulnerabilities. The result is an environment where organisations are "prioritising innovation and productivity first and only planning to catch up with applying the necessary security controls and governance frameworks as an afterthought."
This sentiment echoes findings from Splunk's From Risk to Resilience in the AI Era report, which revealed that 78% of global CISOs rank data leaks as their top AI concern, while shadow AI—unsanctioned tools deployed without governance—is a top-three worry for 90% of generative AI users.
For CISOs in Asia, where high-mobility workforces and hybrid cloud environments are the norm, these risks are magnified.
Detecting the invisible
In Southeast Asia's high-mobility environments, where employees frequently switch between mobile devices, laptops, and remote workstations, detecting unsanctioned AI agents requires a new approach to endpoint visibility.
Traditional endpoint monitoring platforms were not inherently designed to detect AI usage. However, Kaur suggests that real-time endpoint management tools can be effectively configured to fill this gap. These tools operate by continuously gathering and analysing specific signals directly from devices.
"Organisations can write custom indicators to flag the presence of certain files or running programs, or persistent memory activity that strongly indicate AI usage," says Kaur. "For instance, these signals can alert security teams when users install and run local models on corporate hardware."
By processing these signals in real time, enterprises gain immediate, actionable visibility into unauthorised AI activities. This allows security teams to monitor, detect, and potentially block unsanctioned AI access in line with internal policies—before corporate data is compromised.
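As an illustration of what such a custom indicator might look like, the Python sketch below flags running processes and on-disk files associated with local model runtimes. The process names and file extensions are assumptions chosen for demonstration; a production deployment would express this logic in the endpoint platform's own sensor framework rather than a standalone script.

```python
# Minimal sketch of a custom endpoint indicator for local AI model usage.
# Illustrative only: the runtime names and file extensions are assumptions.
import psutil
from pathlib import Path

# Signals that commonly indicate a local model runtime (assumed list)
SUSPECT_PROCESSES = {"ollama", "llama-server", "lmstudio", "text-generation"}
MODEL_EXTENSIONS = {".gguf", ".safetensors", ".ggml"}

def flag_ai_processes() -> list[dict]:
    """Return running processes whose names match known local-model runtimes."""
    hits = []
    for proc in psutil.process_iter(["pid", "name"]):
        name = (proc.info["name"] or "").lower()
        if any(marker in name for marker in SUSPECT_PROCESSES):
            hits.append(proc.info)
    return hits

def flag_model_files(root: Path) -> list[Path]:
    """Return files under `root` with extensions typical of local model weights."""
    return [p for p in root.rglob("*") if p.suffix in MODEL_EXTENSIONS]

if __name__ == "__main__":
    for hit in flag_ai_processes():
        print(f"ALERT: possible local AI runtime: pid={hit['pid']} name={hit['name']}")
    for path in flag_model_files(Path.home()):
        print(f"ALERT: possible model weights on disk: {path}")
```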
Borrowing human permissions
One of the most significant risks posed by agentic AI—autonomous agents capable of executing tasks without continuous human input—is the danger of over-permissioned identities. Granting an AI its own standalone identity could lead to catastrophic consequences if that identity is compromised or behaves unpredictably.
Kaur is unequivocal that the industry is not yet ready for this scenario. "Currently, the cybersecurity industry is not ready to support fully independent AI agents operating with their own distinct identities across enterprise environments," she states.
Instead, she advocates a pragmatic IAM strategy: tie the AI's permissions directly to the human user who configures the workflow. Rather than provisioning a standalone identity, the agent inherits the exact access privileges of its human operator.
"This approach inherently enforces least-privilege access, ensuring the AI can only execute tasks and access data that the human is already authorised to handle," Kaur explains. "Furthermore, this relies heavily on human-in-the-loop oversight, keeping the human strictly accountable for the agent's decisions."
This perspective aligns with broader industry trends. The Splunk report notes that 60% of CISOs disagree with the notion that agentic AI will replace entry-level security functions, viewing AI instead as a tool for augmentation and collaboration.
Baking transparency into the AI Bill of Materials
Uncovering hidden AI deployments across hybrid clouds requires a structured approach to transparency. Developing a robust AI Bill of Materials (AI BOM) begins with foundational discovery.
"Organisations must continuously scan their environments to identify AI configurations, unauthorised AI servers, and local models running on corporate devices," says Kaur.
She highlights a common blind spot: employees frequently use third-party applications without realising that the underlying AI models powering them are from untrusted sources. This inadvertently introduces hidden risks into the enterprise.
To address this, Kaur urges AI vendors to facilitate transparency by publishing dedicated resource pages that explicitly detail the tools, models, and data-handling practices used to build their offerings. "This gives enterprises the vital visibility required to map dependencies and trust their technology stack," she adds.
For CISOs managing complex hybrid cloud environments across Asia, this level of transparency is essential to maintaining visibility and control.
Research from MIT Technology Review Insights reinforces this imperative. In their report Bridging the Operational AI Gap, they found that 90% of surveyed executives say successful AI implementations use two or more data sources, while organisations with enterprise-wide integration platforms are five times more likely to employ five or more data sources in AI workflows.
Without a clear inventory of where AI models reside and what data they access, CISOs cannot hope to govern them effectively.
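What an AI BOM record might capture can be sketched in a few lines of Python. The field names below are illustrative assumptions; standardised formats such as CycloneDX's ML-BOM aim to formalise this kind of inventory.

```python
# Minimal sketch of an AI Bill of Materials record, assuming a flat JSON
# inventory. Field names are illustrative, not a formal BOM schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class AIBOMEntry:
    model_name: str         # e.g. "llama-3-8b-instruct"
    provider: str           # vendor name, or "local" for on-device models
    host: str               # device or cloud environment where it runs
    sanctioned: bool        # approved through governance review?
    data_scopes: list[str]  # data the model can touch, e.g. ["finance:ledger"]

inventory = [
    AIBOMEntry("llama-3-8b-instruct", "local", "laptop-0042", False, ["unknown"]),
    AIBOMEntry("gpt-4o", "OpenAI", "saas:expense-tool", True, ["finance:receipts"]),
]

# Unsanctioned entries become the CISO's shadow-AI worklist.
shadow_ai = [entry for entry in inventory if not entry.sanctioned]
print(json.dumps([asdict(entry) for entry in shadow_ai], indent=2))
```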
Kill switches and critical decision-making
Even with robust governance, the autonomous nature of agentic AI demands fail-safe mechanisms. The most effective human-in-the-loop controls are those integrated directly into the AI platform's operational design.
Kaur emphasises two critical components. First, AI systems must be configured to explicitly ask for human approval before executing critical changes or accessing data outside their approved scope. However, because human operators frequently suffer from alert fatigue, the interface must be designed to highlight critical decisions visually.
"This forces the operator to pause and acknowledge that they are about to impact live production environments," she explains.
Second, organisations must implement real-time kill switches. As an AI-initiated change executes, security operators require continuous visibility and the immediate ability to stop or reverse the action if the AI behaves incorrectly.
The Splunk report reinforces this need for oversight, noting that 83% of CISOs rank hallucination impacts—such as missed alerts or false positives—as their top concern for agentic AI, alongside the ethical and legal ambiguity of autonomous decision-making.
Paula Melo, vice president of operational excellence at fraud prevention platform Feedzai, echoes this sentiment in the MIT Technology Review Insights report: "We know the models are probabilistic, but we want our outcomes to be deterministic and reliable."
"We need to build trust that the models are ready to deliver on what is expected. So, we always add the human in the loop, not only as a temporary training wheel, but as a permanent architect of that trust." Paula Melo
Proactive, AI-driven incident response
Traditional incident response playbooks are ill-suited to the speed and complexity of agentic AI compromises. Kaur argues for a fundamental shift in approach.
"Instead of relying on traditional, reactive playbooks where security teams act as a 'last line of defence' meticulously conducting post-breach forensic analysis, organisations must pivot toward proactive, real-time remediation," she says.
Future incident response strategies, she envisions, will leverage AI to continuously process telemetry and behavioural signals directly from endpoints. The goal is to empower AI to automatically identify misconfigurations, anomalies, and active threats, enabling the system to fix security gaps instantly.
"Therefore, rather than waiting to perform manual session reconstruction after a compromise occurs, modern security playbooks should prioritise building and authorising AI-driven autonomous triage capabilities that neutralise threats before an attacker can successfully exploit them," Kaur concludes.
This forward-looking approach reflects the broader theme of Splunk's report: that resilience in the AI era is not about enduring, but about advancing through data-driven strategies and human-centric leadership.
The MIT Technology Review Insights research supports this pivot, noting that 95% of surveyed executives say their company's workflows already have some level of autonomy, and 92% expect it to increase in the next 12 to 18 months.
As autonomy grows, so too must the mechanisms for real-time oversight and intervention.
