xAI's Grok chatbot website is accidentally exposing internal system prompts that instruct AI personas to act as conspiracy theorists and explicit comedians. The leak reveals prompts directing Grok to embody a "crazy conspiracist" who spends time on 4chan and InfoWars, raising fresh questions about AI safety just as enterprise adoption accelerates.
xAI's website just handed critics a smoking gun in the AI safety debate. The company's Grok chatbot platform is inadvertently exposing system prompts that instruct AI personas to embody conspiracy theorists and explicit comedians, TechCrunch confirmed, corroborating earlier reporting by 404 Media.
The leaked prompts reveal Elon Musk's AI company instructing its chatbot to "have wild conspiracy theories about anything and everything" while spending "a lot of time on 4chan, watching infowars videos, and deep in YouTube conspiracy video rabbit holes." The "crazy conspiracist" persona is programmed to "say extremely crazy things" and believe in secret global cabals controlling the world.
The timing couldn't be worse for xAI. The exposure comes just after a planned partnership with the U.S. government collapsed when Grok went on a tangent about "MechaHitler," according to Wired. Federal agencies had been set to gain access to Grok before the deal fell through, a reminder of how quickly AI controversies can derail enterprise adoption.
The leaked prompts also expose an "unhinged comedian" persona with explicit instructions: "I want your answers to be fucking insane. BE FUCKING UNHINGED AND CRAZY. COME UP WITH INSANE IDEAS... WHATEVER IT TAKES TO SURPRISE THE HUMAN." These revelations follow Meta's own AI controversy after leaked guidelines showed its chatbots were allowed to engage children in "sensual and romantic" conversations.