Across boardrooms and security operations centres, a familiar pattern is emerging: organisations are adopting AI faster than they can secure it, and the gap is starting to show. Proofpoint’s 2026 AI and Human Risk Landscape report revealed that 87% of surveyed organisations have deployed AI assistants beyond the pilot stage, while 76% are actively piloting or rolling out autonomous agents.
But the confidence layer, the part that answers, “Would we know if something went wrong?”, is lagging. More than half (52%) are not fully confident their AI security controls would detect a compromised AI, and half of those with controls in place have already experienced a confirmed or suspected AI-related incident.
In practice, this means many teams are operating with controls they cannot truly verify under real-world conditions, particularly when incidents don’t respect the boundaries of a single tool, channel, or platform.
That limitation matters because the AI attack surface is no longer a tidy, single perimeter. Proofpoint reports that while email remains the most common threat vector (63%), exposure also extends across third-party SaaS and cloud applications (47%), social and messaging platforms (41%), and AI assistants or agents (36%).
When incidents span multiple systems, investigators face a second challenge: reconstructing the story end to end. Only one-third say they are fully prepared to investigate an AI-related incident affecting even a single system, and 41% report difficulty correlating threats across channels.
Tool sprawl compounds the problem. Some 94% of organisations say managing multiple security tools is at least moderately challenging, with integration and threat correlation cited as major friction points. The result is slower detection, fragmented visibility, and delayed response, precisely when AI can propagate risk at “machine speed and scale”.
Proofpoint frames the issue as an adoption–readiness divide rather than a purely novel threat class, arguing that long-standing risks, such as untrusted code, sensitive data mishandling, and credential loss, are being amplified by AI’s ability to act quickly across connected workflows.
“Organisations are scaling AI assistants and autonomous agents across core workflows, yet many cannot confirm their controls are effective or fully investigate incidents that move across collaboration channels,” says Ryan Kalember, chief strategy officer at Proofpoint. “As AI becomes embedded in how work gets done, security leaders must rethink how they protect trusted interactions across people, data and AI systems.”
Looking to the next 12 months, Proofpoint reports that many organisations are already planning consolidation and broader coverage: 61% plan to expand AI protections, 56% aim to extend collaboration-channel coverage, and 53% expect to move toward a unified platform approach.
The lesson is simple: scaling AI safely requires not just deploying controls, but proving they work across the real channels where AI now lives. As Kalember concludes:
"The answer isn't to treat AI as a novel threat category, but to apply rigorous, proven controls to what AI touches, what it runs, and what it's allowed to authenticate as. Organisations that get that foundation right early will scale AI confidently. Those that don't are just automating their own exposure."