Google's AI Overviews feature is being weaponized by scammers who've figured out how to inject deliberately harmful information into the AI-generated summaries that now appear at the top of results for billions of queries. According to a Wired investigation, the problem goes beyond the feature's well-documented tendency to hallucinate or generate nonsense - bad actors are actively gaming the system to push users toward scam sites, phishing schemes, and fraudulent products. The vulnerability reveals a critical weakness in how Google validates sources for its AI-powered search experience.
Google rolled out AI Overviews to millions of users last year, promising a faster, more intuitive search experience. But the feature that was supposed to make finding information easier is now leading people straight into traps set by scammers.
The problem isn't just that AI Overviews occasionally gets things wrong - that's been happening since launch, when the feature infamously told users to put glue on pizza and eat rocks. What's happening now is different and more dangerous. Bad actors have reverse-engineered how Google's AI sources information, and they're exploiting that knowledge to plant malicious content directly into the summaries that appear above traditional search results.
According to the Wired report, these manipulated AI Overviews are directing users to phishing sites disguised as customer service portals, promoting counterfeit products as legitimate recommendations, and spreading misinformation designed to build trust before hitting victims with financial scams. The AI doesn't distinguish between authoritative sources and content farms that have been optimized specifically to trick machine learning systems.
The vulnerability stems from how large language models prioritize and synthesize information. Unlike traditional search rankings that rely heavily on established authority signals like backlinks and domain age, AI Overviews can be influenced by newer content that appears frequently across multiple low-quality sites. Scammers have figured out they can create networks of sites that parrot the same false information, essentially voting for their own lies until the AI accepts them as consensus.
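To see why this kind of manipulation works, it helps to look at a toy model of consensus-based answer selection. The Python sketch below is purely illustrative and assumes nothing about Google's actual retrieval or ranking pipeline; the `Source` class, the two scoring functions, and the domains and authority values are all invented for the example. It simply shows how a system that weights a claim by how many pages repeat it, rather than by the authority of those pages, can be outvoted by a cheap network of cloned sites.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Source:
    """A hypothetical retrieved page (all fields invented for illustration)."""
    domain: str
    claim: str          # the answer this page asserts
    authority: float    # a PageRank-style trust score in [0, 1]


def naive_consensus(sources: list[Source]) -> str:
    """Pick the claim repeated by the most sources (frequency only).

    This is the gameable strategy: ten clones of one lie
    outvote a single authoritative page telling the truth.
    """
    counts = Counter(s.claim for s in sources)
    return counts.most_common(1)[0][0]


def authority_weighted(sources: list[Source]) -> str:
    """Weight each claim by the total authority of the sites asserting it."""
    scores: dict[str, float] = {}
    for s in sources:
        scores[s.claim] = scores.get(s.claim, 0.0) + s.authority
    return max(scores, key=scores.get)


# One legitimate page vs. a cheap network of ten scam clones
# repeating the same fake customer-service number.
retrieved = [Source("example-bank.com", "call 1-800-REAL-BANK", authority=0.95)]
retrieved += [
    Source(f"spam-clone-{i}.net", "call 1-800-SCAM-LINE", authority=0.01)
    for i in range(10)
]

print(naive_consensus(retrieved))      # -> call 1-800-SCAM-LINE
print(authority_weighted(retrieved))   # -> call 1-800-REAL-BANK
```

Real systems blend far more signals than this, but the underlying tension is the same: repetition across the open web is cheap for a scammer to manufacture, while authority signals like backlinks and domain age are not.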