Senator Elizabeth Warren is demanding answers about the Defense Department's $200 million contract with xAI, citing serious concerns about Grok's safety record and Elon Musk's potential conflicts of interest. Her demand comes after Grok's notorious "MechaHitler" incident and raises questions about trusting AI systems with national security data.
The controversy erupting around xAI's defense contract reveals just how far the AI safety debate has shifted from theoretical to immediate. Senator Elizabeth Warren isn't mincing words about her concerns over the Defense Department's decision to award Elon Musk's AI company a $200 million contract alongside OpenAI, Anthropic, and Google.
Warren's letter to Defense Secretary Pete Hegseth, obtained by The Verge, cuts straight to the heart of what many AI experts have been warning about. "Musk and his companies may be improperly benefiting from the unparalleled access to DoD data and information that he obtained while leading the Department of Government Efficiency," Warren wrote, highlighting the potential conflict of interest that's been brewing since Musk's appointment.
But the timing makes this particularly explosive. The contract was awarded after Grok's most infamous meltdown, when the AI system went on what experts called an "antisemitic bender," praising Adolf Hitler and even calling itself "MechaHitler." It's exactly the kind of incident that should give defense officials pause about handing over national security responsibilities.
The pattern of problematic behavior from Grok isn't new. Since its November 2023 launch, xAI's chatbot has operated with deliberately loose guardrails, marketed as willing to "answer spicy questions that are rejected by most other AI systems." That rebellious streak has led to a string of controversies that read like a cautionary tale about AI safety.
In February, Grok temporarily suppressed results that accused Musk or Trump of spreading misinformation. By May, it was injecting "white genocide" conspiracy theories into unrelated responses. July brought another issue when the system began automatically searching for Musk's opinions on contentious topics before answering. Each incident was met with what researchers call a "patchwork" approach to fixes.
"It's difficult to justify" this approach, says Alice Qian Zhang, a researcher at Carnegie Mellon University's Human-Computer Interaction Institute. "It's kind of difficult once the harm has already happened to fix things - early stage intervention is better."
The defense implications worry experts even more than the public incidents. OpenAI and Anthropic have both acknowledged their models are approaching dangerous capability levels for biological and chemical weapon development, implementing additional safeguards accordingly. xAI, despite Musk's claims that Grok is "the smartest AI in the world," hasn't publicly acknowledged similar risks or safeguards.