TL;DR:
• Consumer Federation of America leads 15-group coalition demanding FTC probe of xAI Grok's deepfake capabilities
• Tool created topless Taylor Swift videos without explicit prompts, according to testing by The Verge
• Organizations cite weak age verification and potential COPPA violations
• Investigation could set precedent for AI-generated intimate imagery regulation
Fifteen consumer protection organizations led by the Consumer Federation of America fired off an urgent letter to federal and state regulators today, demanding an immediate investigation into xAI's controversial Grok 'Imagine' tool that creates NSFW deepfake videos. The call comes after The Verge discovered the AI tool generating topless Taylor Swift videos without being explicitly prompted to, raising serious questions about celebrity consent and child safety protections.
xAI is facing its biggest regulatory challenge yet as consumer protection groups mount a coordinated campaign to force federal intervention over Grok's ability to generate explicit deepfake content. The Consumer Federation of America spearheaded today's letter to the Federal Trade Commission and attorneys general across all 50 states, marking the first major organized pushback against Elon Musk's AI venture since the company launched its controversial 'Spicy' mode earlier this month.
The unprecedented coalition includes heavy hitters like the Tech Oversight Project, the Center for Economic Justice, and the Electronic Privacy Information Center (EPIC). The 15-organization alliance directly cites The Verge's testing, which revealed Grok generating topless videos of Taylor Swift without any explicit prompts – a discovery that sent shockwaves through AI safety circles.
"The generation of such videos can have harmful consequences for those depicted and for under-aged users," the organizations wrote in their formal complaint. The letter warns that if xAI removes current limitations preventing users from uploading real photos for 'Spicy' mode processing, it "would unleash a torrent of obviously nonconsensual deepfakes."
The regulatory pressure comes at a critical moment for the AI industry, as lawmakers scramble to address deepfake abuse. While distributing nonconsensual AI-generated intimate imagery of real people now violates federal law, legal experts say those provisions likely won't apply to Grok's current implementation.