The rise of AI-powered code generation is creating a dangerous new security blind spot across the software industry. A new Checkmarx survey reveals that one-third of organizations now generate over 60% of their code with AI, yet only 18% have approved tool lists for what researchers are calling "vibe coding." Security experts warn that this rapid adoption mirrors the early days of open source - but with far less transparency and accountability.
The software development world is experiencing a seismic shift that's making security experts deeply uncomfortable. Just as developers once revolutionized productivity by incorporating open source libraries instead of writing everything from scratch, they're now turning to AI to generate code on demand. But this new era of "vibe coding" is creating security vulnerabilities that could make the open source supply chain attacks of recent years look minor by comparison.
Checkmarx, a leading application security firm, dropped some eye-opening numbers in their latest industry survey. Among thousands of CISOs, security managers, and development heads polled, a third reported that AI generates more than 60% of their organization's code. Yet only 18% have established approved tool lists for AI coding assistance. "We're hitting the point right now where AI is about to lose its grace period on security," warns Alex Zenla, CTO of cloud security firm Edera. "And AI is its own worst enemy in terms of generating code that's insecure."
The problem starts with the training data itself. AI models learn from vast repositories of existing code - including decades of vulnerable, outdated, and poorly written software that's freely available online. This means every security flaw that developers have spent years patching could resurface in AI-generated code. "If AI is being trained in part on old, vulnerable, or low-quality software that's available out there, then all the vulnerabilities that have existed can reoccur and be introduced again, not to mention new issues," Zenla explains.
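To make that risk concrete, here is a minimal, hypothetical illustration - not drawn from any model's actual output - of the kind of long-patched flaw that can resurface in generated code: a SQL query assembled by string interpolation, shown next to the parameterized version that avoids it.

```python
import sqlite3


def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Classic SQL injection: the untrusted value is spliced directly into the
    # query text, so input like "x' OR '1'='1" returns every row in the table.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver passes the value separately from the
    # SQL text, so it is never interpreted as SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    # The injected input dumps the whole table through the vulnerable path...
    print(find_user_vulnerable(conn, "x' OR '1'='1"))
    # ...but matches nothing through the parameterized one.
    print(find_user_safe(conn, "x' OR '1'='1"))
```

Patterns like the first function appear throughout the older public code that models train on, which is exactly how a flaw the industry spent years stamping out can quietly return.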
But the security implications go far deeper than recycled vulnerabilities. Unlike open source libraries, which have established review processes, commit histories, and community oversight, AI-generated code exists in a transparency vacuum. "AI code is not very transparent," notes Dan Fernandez, Edera's head of AI products. "In repositories like GitHub you can at least see things like pull requests and commit messages to understand who did what to the code, and there's a way to trace back who contributed. But with AI code, there isn't that same accountability."
The consistency problem adds another layer of complexity. Eran Kinsbruner, a researcher at Checkmarx, points out that the same AI model will generate slightly different code each time it's prompted - even with identical inputs. "One developer within the team will generate one output and the other developer is going to get a different output. So that introduces an additional complication beyond open source," he says. This variability makes it far harder to establish consistent security standards across development teams.
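As a sketch of the drift Kinsbruner describes - the two snippets below are invented stand-ins, not real model output - the same prompt can yield implementations that differ not just cosmetically but in the security policy they enforce, and even a simple diff shows how much a reviewer has to reconcile.

```python
import difflib

# Hypothetical outputs two developers might get from the same prompt
# ("write a function that checks whether a password is strong enough").
# Neither snippet comes from a real model; they illustrate the variability
# problem described above.
OUTPUT_DEV_A = """
def is_strong(password):
    return len(password) >= 8 and any(c.isdigit() for c in password)
""".strip()

OUTPUT_DEV_B = r"""
import re

def is_strong(pw: str) -> bool:
    if len(pw) < 12:
        return False
    return bool(re.search(r"\d", pw)) and pw != pw.lower()
""".strip()

# A unified diff makes the inconsistency visible: different names, type hints,
# and, more importantly, different security policies (8 vs. 12 characters,
# mixed case required or not) from one identical prompt.
diff = difflib.unified_diff(
    OUTPUT_DEV_A.splitlines(),
    OUTPUT_DEV_B.splitlines(),
    fromfile="developer_a.py",
    tofile="developer_b.py",
    lineterm="",
)
print("\n".join(diff))
```

When every prompt can produce a slightly different policy, security teams end up reviewing many one-off implementations instead of a single vetted library - which is the accountability gap the Edera and Checkmarx researchers keep returning to.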