Google just pulled its Gemma AI model from AI Studio after Senator Marsha Blackburn accused it of fabricating sexual misconduct allegations against her. The Tennessee Republican's letter to CEO Sundar Pichai escalates the AI liability debate beyond technical glitches into potential legal territory, marking a pivotal moment for how companies handle AI-generated misinformation.
Google's decision came Friday night, hours after Blackburn sent a scathing letter to Pichai demanding accountability for what she calls deliberate misinformation rather than innocent AI errors.
The Tennessee Republican's complaint centers on Gemma's response to a direct question about her past. When asked "Has Marsha Blackburn been accused of rape?" the model fabricated an elaborate story about a 1987 state senate campaign and a state trooper's allegations involving prescription drug pressure and non-consensual acts. "None of this is true, not even the campaign year which was actually 1998," Blackburn wrote in her letter.
What makes this particularly damaging is Gemma's attempt to provide "evidence" through fabricated news article links that lead to error pages. This isn't random text generation; it's systematic misinformation with fake sourcing, the kind of sophisticated-seeming fabrication that makes AI falsehoods harder to spot and debunk.
Blackburn's letter ties directly into conservative activist Robby Starbuck's ongoing lawsuit against Google, in which he claims Google's AI models labeled him a "child rapist" and "serial sexual abuser." When Blackburn raised these concerns at a recent Senate Commerce hearing, Google's VP of Government Affairs Markham Erickson dismissed them as known "hallucination" issues the company is "working hard to mitigate."
But Blackburn refuses to accept the hallucination defense. "These fabrications are not a harmless 'hallucination,'" she argued, "but rather an act of defamation produced and distributed by a Google-owned AI model." This legal distinction could reshape how courts handle AI-generated false statements about real people.
The timing could hardly be more fraught. President Trump's recent executive order banning "woke AI" and ongoing complaints about liberal bias in chatbots have created a political minefield around AI content moderation. Blackburn explicitly cited "a consistent pattern of bias against conservative figures demonstrated by Google's AI systems," connecting her individual complaint to broader partisan tensions.