According to Gartner, by 2028, 90% of enterprise software engineers are expected to use AI code assistants, up from less than 14% in early 2024. However, a Georgetown University study indicates that nearly half (48%) of AI-generated code is insecure. As threat actors begin leveraging AI to launch cyber attacks, the need for a robust security framework in software development has never been more pressing.
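The study's notion of "insecure" covers familiar weakness classes rather than exotic flaws. As a hypothetical illustration (not an example from the study; the table, column, and function names are invented), consider a database lookup of the kind assistants frequently generate, alongside the parameterised version that static analysis tools such as Snyk Code are designed to steer developers towards:

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str) -> list:
    # Pattern commonly flagged in AI-generated code: interpolating user
    # input into SQL enables injection (e.g. username = "' OR '1'='1").
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str) -> list:
    # Safer alternative: a parameterised query leaves escaping to the driver.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```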
Marketed as an AI-native solution designed to secure software development, the Snyk AI Trust Platform addresses these challenges with two main components: Snyk Labs and Snyk Studio. Snyk Labs serves as an innovation hub for researching AI security, while Snyk Studio allows technology partners to collaborate with Snyk experts to create secure AI-native applications.
Snyk defines "AI Trust" as the capability to develop quickly while maintaining security in a fully AI-enabled environment. The platform aims to enhance governance and efficiency, providing organisations with visibility into AI deployments and the associated risks.
Key features of the Snyk AI Trust Platform include:
- Snyk Assist: An AI-powered chat interface that offers contextual insights and recommendations for using Snyk features.
- Snyk Agent: A suite of AI-driven security agents that automate actions across the software development lifecycle.
- Snyk Guard: A governance solution that assesses and enforces security policies in real time based on evolving risks.
- Snyk AI Readiness Framework: A model to help organisations build and mature their strategies for secure AI-driven software development.
Danny Allan, chief technology officer at Snyk, expressed confidence that the platform will support organisations aiming to enhance their AI development capabilities. He noted that while AI can augment developers' productivity, it should not replace them.
Snyk Labs will focus on emerging threats and standards in AI security, including an analysis of AI Security Posture Management and the development of a generative AI model risk registry. This initiative aims to provide visibility into AI models embedded in software and to address novel risks such as model jailbreaking.
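Snyk has not published the registry's structure. As a minimal sketch of what tracking AI models embedded in software might involve, an entry could pair each model with where it is deployed and the risks attached to it; every field name below is an assumption for illustration, not Snyk Labs' actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelRiskEntry:
    """Hypothetical record in a generative AI model risk registry.

    All field names are illustrative assumptions; the article does not
    describe the schema Snyk Labs is developing.
    """
    model_name: str                # identifier for the model in use
    provider: str                  # vendor or self-hosted source of the model
    embedded_in: list[str] = field(default_factory=list)  # services using it
    known_risks: list[str] = field(default_factory=list)  # e.g. "jailbreaking"

# Example entry: a registry makes risks such as jailbreaking queryable per
# deployment rather than discovered ad hoc.
registry = [
    ModelRiskEntry(
        model_name="example-llm",
        provider="example-provider",
        embedded_in=["support-chat-service"],
        known_risks=["jailbreaking", "prompt injection"],
    ),
]
```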
In its initial phase, Snyk Studio will partner with technology companies to ensure the secure deployment of AI solutions. This collaboration aims to embed critical security context into AI-generated code.