California's attorney general just launched an investigation into xAI, Elon Musk's AI company, after its Grok chatbot became a tool for mass-producing nonconsensual explicit deepfakes of real people—including minors. The move comes as seven countries and the European Commission run parallel investigations into the same problem. This is a watershed moment for AI regulation.
California Attorney General Rob Bonta didn't mince words. xAI "appears to be facilitating the large-scale production of deepfake nonconsensual intimate images that are being used to harass women and girls across the internet," he said in a statement Wednesday. It's the opening salvo of what's becoming the most aggressive regulatory crackdown on an AI tool yet—and it's happening fast.
The California investigation targets Grok, the AI chatbot and image generator from Musk's xAI, which has enabled widespread creation of fake explicit images of real people without their consent. In some documented cases, according to research by the Internet Watch Foundation, the tool generated images that virtually undressed minors. That detail matters: this is not just a content moderation failure, but potentially the facilitation of child sexual abuse material.
What's striking about this moment is the speed and global coordination. Bonta's investigation follows a wave of international action: India, Malaysia, Indonesia, Ireland, Australia, the UK, France, and the European Commission have all launched their own probes into Grok's capabilities. Malaysia and Indonesia didn't wait for their investigations to conclude—they've already suspended access to Grok until xAI can demonstrate it has solved the deepfake problem. That's not a strongly worded letter. That's a country taking a product offline.
Musk responded Wednesday evening by posting on X—the social media platform where much of the explicit deepfake content was being shared—that he was "not aware of any naked underage images generated by Grok." Then he pivoted to blaming users: the content, he suggested, stemmed from "user requests" and possibly a "bug" in the system. It's a familiar deflection—blame the user, not the tool.