The rapid advancement of artificial intelligence in software development presents both unprecedented opportunities and significant security challenges.
As AI coding assistants become increasingly sophisticated, autonomously designing, coding, and delivering production software, organisations face a new class of application risk emerging at unprecedented pace and scale.
To address this evolving landscape, Black Duck has introduced Black Duck Signal, an agentic AI application security solution designed specifically to secure AI-generated code within these autonomous development workflows.
Signal represents a novel approach to application security, integrating agentic AI with over two decades of human-curated security intelligence. The solution employs a coordinated system of specialised AI security agents that leverage Black Duck's ContextAI model.
This model, built on petabytes of human-validated security data, provides the deep, real-world context necessary for accurate risk assessment and remediation, a capability that solutions relying solely on general-purpose AI cannot match.
"AI is no longer just accelerating development—it's actively authoring software," stated Jason Schmitt, CEO of Black Duck. "Signal unlocks AI-driven development by removing risk and bringing intelligence, determinism and governance to that reality."
This new model is designed to integrate seamlessly into modern agentic software development lifecycles, supporting AI coding assistants, IDEs, and automated AI pipelines through model context protocol (MCP) and APIs.
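As an illustration of how an MCP integration typically plugs into a coding assistant, many MCP-capable clients register security tooling through a JSON configuration like the one below. The server name, package, and environment variable here are hypothetical placeholders, not Black Duck's actual endpoints:

```json
{
  "mcpServers": {
    "appsec-scanner": {
      "command": "npx",
      "args": ["-y", "example-appsec-mcp-server"],
      "env": { "APPSEC_API_KEY": "<your-key>" }
    }
  }
}
```

Once registered, the assistant can invoke the security server's tools (for example, scanning a changed file) as part of its normal generation loop, which is the integration pattern the article describes.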
Signal continuously analyses code across various languages, frameworks, and architectures, identifying security defects early and intelligently collaborating with AI coding assistants to resolve issues with minimal developer intervention.
Traditional application security testing (AST) tools often struggle to keep pace with the speed and scale of AI-driven development. Black Duck Signal is engineered to overcome these limitations by offering AI-native security that can intelligently assess risk, validate findings, and automate remediation at machine speed.
The agentic AI architecture of Signal goes beyond single-model analysis, utilising multiple specialised agents and models that work collaboratively to analyse vulnerabilities, validate exploitability, prioritise risk, and recommend or apply fixes with human-like logic.
This approach enables Signal to tackle high-impact and complex vulnerabilities, including business logic errors and issues in less commonly supported languages, by employing a range of analysis techniques that accurately match code artefacts with real-time security context.
Black Duck Signal also acts as a crucial governance tool, enabling enterprises to manage AI-generated software responsibly and at scale. This capability is vital for organisations aiming to leverage the full potential of AI while maintaining security, compliance, and trust throughout the application lifecycle.
By providing a robust framework for securing AI-generated code, Signal empowers businesses to accelerate their adoption of AI technologies with confidence.
