The rise of AI-assisted development echoes the rapid acceptance of open source software, according to the latest "Global State of DevSecOps" report from Black Duck. Fred Bals, a senior security researcher at the firm, comments: “Just as open source transformed software development, AI is reshaping how we write and use code.” The report highlights how both trends have revolutionised the industry, while also bringing distinct security challenges that need addressing.
Based on a survey of more than 1,000 software security stakeholders, the report reveals that while AI adoption among development teams is near-universal, securing AI-generated code remains a critical concern. This mirrors the early days of open source, when many organisations were unaware of the security risks lurking in unmanaged code.
AI coding adoption and security concerns
AI-assisted coding is changing the game, much as open source did before it. Bals emphasises that AI is democratising coding knowledge, making it accessible to developers of all skill levels. However, he warns that adopting AI tools is not without risk.
Just like unmanaged open source code, AI-generated code can create ambiguity around intellectual property and licensing. “If an AI tool provides a code snippet without clear licensing, users could face legal repercussions,” he cautions.
Moreover, the potential for security vulnerabilities is significant. A Stanford study found that developers using AI coding assistants were more likely to introduce security flaws. “Autogenerated code cannot be blindly trusted,” Bals states, underscoring the need for thorough security review of anything an assistant produces.
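To make that risk concrete, here is a minimal, hypothetical sketch (not taken from the report or the Stanford study) of the kind of flaw an assistant can plausibly suggest: SQL built by string interpolation, which invites injection, alongside the parameterised alternative a security review should insist on.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Assistant-style suggestion: the query is built by string
    # interpolation, so input like "x' OR '1'='1" rewrites the query logic.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The reviewed fix: a parameterised query keeps user input as data,
    # never as SQL, closing the injection path.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

The two functions return identical results for honest input; only a review (human or automated) catches that the first one misbehaves for hostile input.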
Despite over 90% of organisations utilising AI tools, the report reveals that 21% bypass corporate policies to use unsanctioned AI tools, complicating oversight. This trend is reminiscent of early open source adoption, where executives were often unaware of their teams using open source components.
Tool proliferation: amplifying the noise
The report also highlights another challenge in application security: tool proliferation. Bals explains that with 82% of organisations using multiple security testing tools, the result can be overwhelming noise. This complexity breeds inefficiency: 60% of respondents consider a significant portion of their test results irrelevant, a steady drain on resources.
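One common way to cut that noise is to normalise and deduplicate findings before they reach developers. The sketch below is purely illustrative (the report does not prescribe an implementation, and the flat schema is a hypothetical one; real tools emit differing formats such as SARIF that would need converting first): overlapping results from several scanners are collapsed by a (file, rule, line) fingerprint.

```python
def dedupe_findings(findings):
    """Collapse overlapping results from multiple scanners.

    Each finding is a dict with 'tool', 'file', 'rule' and 'line' keys,
    a hypothetical normalised schema chosen for this sketch.
    """
    merged = {}
    for f in findings:
        # Findings that name the same rule on the same line of the same
        # file are treated as one issue, whichever tool reported them.
        key = (f["file"], f["rule"], f["line"])
        merged.setdefault(key, {**f, "tools": set()})["tools"].add(f["tool"])
    return list(merged.values())

# Three raw results, two of which describe the same underlying issue.
raw = [
    {"tool": "scanner_a", "file": "app.py", "rule": "sql-injection", "line": 42},
    {"tool": "scanner_b", "file": "app.py", "rule": "sql-injection", "line": 42},
    {"tool": "scanner_a", "file": "auth.py", "rule": "weak-hash", "line": 7},
]
print(len(dedupe_findings(raw)))  # 2 deduplicated issues from 3 raw findings
```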
Security testing and development speed: a balancing act
The tension between security testing and development speed remains a critical issue. Bals points out that 86% of respondents say security testing slows down development. He suggests this points to a need for better integration of security practices into fast-paced development cycles: organisations relying solely on manual processes report greater delays than those embracing automation.
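As an illustration of what that automation can look like (my own sketch, not a workflow from the report), the script below wires Bandit, an open source Python static analyser, into a build step and fails the build on high-severity findings. It assumes Bandit is installed (`pip install bandit`); the source directory and severity threshold are illustrative choices.

```python
import json
import subprocess
import sys

def run_security_gate(src_dir: str = "src") -> int:
    """Run Bandit over a source tree and fail on high-severity findings."""
    # -r: recurse into the tree; -f json: machine-readable output.
    proc = subprocess.run(
        ["bandit", "-r", src_dir, "-f", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(proc.stdout)
    high = [r for r in report["results"] if r["issue_severity"] == "HIGH"]
    for issue in high:
        print(f'{issue["filename"]}:{issue["line_number"]} {issue["issue_text"]}')
    # A non-zero exit code makes the CI step, and therefore the build, fail.
    return 1 if high else 0

if __name__ == "__main__":
    sys.exit(run_security_gate())
```

Because the gate runs on every commit rather than waiting for a manual review, it shifts the delay from the end of the cycle to a few seconds in the pipeline.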
Navigating the future of DevSecOps in the age of AI
The report calls on organisations to view these challenges as opportunities for improvement. Key recommendations include:
- Tool Consolidation: Streamlining security tools to reduce noise and improve efficiency.
- Embracing Automation: Automating security testing to alleviate the burden on teams.
- Establishing AI Governance: Creating clear policies for AI tool usage to address vulnerabilities and licensing issues.
As AI becomes integral to software development, establishing robust security practices is crucial. “Innovation through AI brings promise, but we must navigate its challenges wisely,” Bals concludes, emphasising the importance of proactive strategies for a secure future.