Veracode's 2025 GenAI Code Security Report reveals that 45% of development tasks analysed introduced critical flaws. This comprehensive study evaluated over 100 large language models (LLMs) across 80 curated coding tasks, underscoring a troubling trend: AI, while proficient in generating functional code, frequently opts for insecure methods.
Veracode's chief technology officer, Jens Wessling, noted that the rise of “vibe coding”, where developers rely on AI to produce code without explicitly defining security requirements, poses a fundamental challenge.
“This shift leaves secure coding decisions to LLMs, which are making the wrong choices nearly half the time, and it’s not improving,” he said.
Rising threat landscape
As AI tools become more capable, they also make exploitation easier for cyber attackers.
AI-powered tools can quickly scan systems for vulnerabilities and generate exploit code with minimal human involvement, lowering the barrier for less-skilled attackers and increasing both the speed and sophistication of cyber threats.
Veracode’s research indicates that Java is the highest-risk programming language for AI code generation, with a staggering security failure rate exceeding 70%. Other languages, including Python, C#, and JavaScript, also presented significant risks, with failure rates ranging from 38% to 45%.
LLMs also failed to secure code against prevalent threats such as cross-site scripting (CWE-80) and log injection (CWE-117) in 86% and 88% of cases, respectively.
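To make those two flaw classes concrete, here is a minimal Java sketch contrasting an insecure pattern with a hardened one for each. The class and method names are illustrative assumptions, not code from the report:

```java
// Minimal sketch (not from the report): the two flaw classes cited above,
// shown as insecure vs. hardened Java. All names here are illustrative.
public class UnsafePatterns {

    // CWE-117 (log injection): writing raw user input lets an attacker
    // forge log entries by embedding carriage-return/line-feed characters.
    static void logInsecure(String userInput) {
        System.out.println("login attempt: " + userInput);
    }

    // Hardened variant: strip line breaks before logging.
    static void logSafer(String userInput) {
        String sanitized = userInput.replaceAll("[\\r\\n]", "_");
        System.out.println("login attempt: " + sanitized);
    }

    // CWE-80 (basic XSS): echoing input into HTML without escaping.
    static String renderInsecure(String name) {
        return "<p>Hello, " + name + "</p>";
    }

    // Hardened variant: escape HTML-significant characters (ampersand first).
    static String renderSafer(String name) {
        String escaped = name.replace("&", "&amp;")
                             .replace("<", "&lt;")
                             .replace(">", "&gt;")
                             .replace("\"", "&quot;");
        return "<p>Hello, " + escaped + "</p>";
    }

    public static void main(String[] args) {
        logSafer("alice\nFAKE LOG LINE");                       // logged as one line
        System.out.println(renderSafer("<script>alert(1)</script>")); // script tag neutralised
    }
}
```

The failure mode Veracode measured is precisely the gap between these pairs: an LLM asked for "working" code tends to produce the first form of each unless security is stated as a requirement.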
Managing application risks
For Chief Information Security Officers (CISOs), the findings signal an urgent need for comprehensive application risk management.
As AI development practices like vibe coding enhance productivity, they simultaneously amplify security risks. Veracode advocates integrating robust security measures into development workflows to stop vulnerabilities from reaching production, recommending that organisations:
Integrate AI-powered tools: Employ tools like Veracode Fix to remediate security risks in real time.
Leverage static analysis: Use automated detection to identify flaws early, preventing vulnerable code from progressing through development pipelines (a toy illustration follows this list).
Embed security in workflows: Automate policy compliance and ensure secure coding standards are enforced.
Conduct software composition analysis (SCA): Ensure AI-generated code does not introduce vulnerabilities from third-party dependencies.
Adopt bespoke AI-driven guidance: Provide developers with precise remediation instructions to enhance their coding practices.
Deploy a package firewall: Automatically detect and block malicious packages, vulnerabilities, and policy violations.
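To make the static-analysis item above concrete, the sketch below shows the kind of single pattern rule such tools apply at scale. It is a toy, not Veracode's engine: the class name, the lone regex rule, and the command-line usage are all illustrative assumptions.

```java
// Toy sketch only: one hard-coded rule illustrating the pattern matching
// static analysis performs. Real analysers use far richer semantic models;
// nothing here reflects any vendor's internals.
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Pattern;

public class TinyLintDemo {
    // Flags string concatenation inside common logging calls, a frequent
    // precursor to CWE-117 when the concatenated value is user-controlled.
    private static final Pattern RISKY_LOG =
            Pattern.compile("(log(ger)?\\.(info|warn|error)|System\\.out\\.println)\\(.*\\+");

    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        for (int i = 0; i < lines.size(); i++) {
            if (RISKY_LOG.matcher(lines.get(i)).find()) {
                System.out.printf("%s:%d: possible log injection sink: %s%n",
                        args[0], i + 1, lines.get(i).trim());
            }
        }
    }
}
```

Run against a source file (for example, `java TinyLintDemo MyService.java`), it prints each flagged line with its location, which is the same report-and-gate pattern pipeline-integrated scanners use to stop vulnerable code before merge.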
Wessling concluded, “The challenge for organisations is ensuring that security evolves alongside AI capabilities. Security cannot be an afterthought if we want to prevent the accumulation of massive security debt.”
As AI continues to transform software development, the role of the CISO in safeguarding against these emerging risks is more critical than ever.