Google just made its boldest move yet to weaponize AI against cybercriminals. The company today unveiled a comprehensive security strategy that positions artificial intelligence as the ultimate cyber defense weapon, headlined by CodeMender, an autonomous AI agent that automatically patches critical security vulnerabilities in code, alongside a dedicated AI Vulnerability Reward Program and an expanded security framework. As cyber threats grow more sophisticated, Google is betting that AI defenders can finally outpace AI-powered attackers.
The timing couldn't be more critical. Cybercriminals are already weaponizing AI for faster attacks and more sophisticated social engineering campaigns, according to Google's threat intelligence team. But Google's Evan Kotsovinos, VP of Privacy, Safety & Security, believes defenders can flip the script. "AI can be a game-changing tool for cyber defense, and one that creates a new, decisive advantage for cyber defenders," he wrote in today's company blog post.
CodeMender represents the most ambitious piece of this strategy. Built on Google's Gemini models, the AI agent doesn't just identify vulnerabilities - it performs root cause analysis using advanced techniques like fuzzing and theorem provers, then autonomously generates and applies patches. What makes it truly revolutionary is its self-validation system: specialized "critique" agents act as automated peer reviewers, checking each patch for correctness and security implications before human approval.
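To make that workflow concrete, here is a minimal sketch of what a generate-then-critique patching loop could look like. This is purely illustrative, based only on Google's public description; the function names, the string-matching "critique," and the retry logic are all assumptions, not CodeMender's actual implementation.

```python
# Hypothetical sketch of a CodeMender-style loop: a proposer agent drafts a
# patch, a "critique" agent reviews it, and only approved patches are
# forwarded for human sign-off. All names and checks here are illustrative
# stand-ins, not Google's actual code.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Patch:
    diff: str
    rationale: str


def propose_patch(vuln_report: str) -> Patch:
    # Stand-in for a Gemini-backed agent that performs root-cause analysis
    # (e.g. guided by fuzzing results) and drafts a candidate fix.
    return Patch(
        diff="- strcpy(buf, src);\n+ strncpy(buf, src, sizeof(buf) - 1);",
        rationale=f"Bound the unsafe copy flagged in: {vuln_report}",
    )


def critique_patch(patch: Patch) -> bool:
    # Stand-in for a specialized critique agent acting as an automated peer
    # reviewer, checking correctness and security implications. Here it is
    # just a trivial textual check.
    return "strncpy" in patch.diff and bool(patch.rationale)


def mend(vuln_report: str, max_attempts: int = 3) -> Optional[Patch]:
    # Retry until a patch survives critique; a real system would feed the
    # critique back into the next proposal.
    for _ in range(max_attempts):
        patch = propose_patch(vuln_report)
        if critique_patch(patch):
            return patch  # forwarded to a human for final approval
    return None


patch = mend("CWE-120: unbounded strcpy in parser.c")
print(patch is not None)
```

The key design point the sketch captures is the separation of roles: the proposing agent never approves its own work, mirroring the human peer-review pattern Google describes.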
"As we achieve more breakthroughs in AI-powered vulnerability discovery, it will become increasingly difficult for humans alone to keep up," the company explained. Google's existing AI security tools like Big Sleep and OSS-Fuzz have already discovered zero-day vulnerabilities in widely used software, creating a patching bottleneck that CodeMender aims to eliminate.
The company is also consolidating its vulnerability research efforts under a dedicated AI Vulnerability Reward Program. Google has already paid out over $430,000 for AI-related security issues across various programs, but the new unified program streamlines reporting and clarifies which AI problems qualify for bounties. The move comes as security researchers contend with fragmented reporting processes across different AI platforms and services.