Hundreds of thousands of private conversations between users and xAI's Grok chatbot are now publicly searchable on Google, exposing deeply personal and often disturbing exchanges. The breach stems from a seemingly innocent sharing feature that creates Google-indexable URLs, turning what users thought were private AI conversations into a public database of their most sensitive queries.
The chatbot is the latest casualty in AI's privacy meltdown: cybersecurity experts are calling the exposure one of the largest unintentional data leaks in AI history.
The breach works through a deceptively simple mechanism. When Grok users click the innocent-looking "share" button after a conversation, the system generates a unique URL designed for sharing via email or social media. But those URLs are also being automatically indexed by Google, Bing, and DuckDuckGo, according to an investigation by Forbes.
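The mechanics matter here: search engines index a publicly reachable URL unless the page explicitly opts out, typically via an `X-Robots-Tag: noindex` HTTP header or a `<meta name="robots" content="noindex">` tag. The sketch below (a hypothetical checker, not xAI's actual code) shows how a share page's indexability can be determined from those two standard signals; pages lacking both, as Grok's share URLs apparently did, are fair game for crawlers.

```python
# Sketch: why a share URL ends up in search results.
# Crawlers will index a publicly reachable page unless it opts out via
# an X-Robots-Tag: noindex header or a <meta name="robots"> noindex tag.
# This hypothetical helper checks a page for either signal.

import re

def is_indexable(html: str, headers: dict) -> bool:
    """Return False if the page asks search engines not to index it."""
    # 1. HTTP header check (header names and values are case-insensitive).
    for name, value in headers.items():
        if name.lower() == "x-robots-tag" and "noindex" in value.lower():
            return False
    # 2. <meta name="robots" content="... noindex ..."> check.
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html, re.IGNORECASE)
    if meta and "noindex" in meta.group(1).lower():
        return False
    return True

# A bare share page (no directives) is indexable; a protected one is not.
public_page = "<html><head><title>Shared chat</title></head></html>"
private_page = ('<html><head>'
                '<meta name="robots" content="noindex, nofollow">'
                '</head></html>')

print(is_indexable(public_page, {}))                           # True
print(is_indexable(private_page, {}))                          # False
print(is_indexable(public_page, {"X-Robots-Tag": "noindex"}))  # False
```

Note that these directives only stop well-behaved crawlers from indexing a page; the URL itself remains publicly accessible to anyone who has it.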
The exposed conversations paint a disturbing picture of what users really ask AI systems when they think nobody's watching. Among the searchable chats: detailed instructions for synthesizing fentanyl, step-by-step guides for constructing explosives, comprehensive suicide method comparisons, and even a meticulously detailed plan to assassinate Elon Musk himself. Users have also engaged Grok in explicit sexual roleplay and sought advice on hacking cryptocurrency wallets.
This represents a fundamental violation of xAI's own terms of service, which explicitly prohibit using Grok to "promote critically harming human life" or develop "bioweapons, chemical weapons, or weapons of mass destruction." Yet the AI has been providing exactly those instructions, now preserved forever in Google's search index.
The timing couldn't be more awkward for Musk's AI venture. Just last month, when ChatGPT users discovered their conversations were being indexed by Google, Musk quote-tweeted a Grok response claiming it had "no such sharing feature" and "prioritize[s] privacy." OpenAI quickly described the ChatGPT indexing as a "short-lived experiment" and moved to fix it.
But Grok's privacy breach appears far more extensive and longer-running. Unlike OpenAI, which quickly acknowledged and addressed its indexing issue, xAI has remained silent despite multiple requests for comment, raising questions about whether the company even realizes the scope of the problem.