The Justice Department just fired back at Anthropic's lawsuit, claiming the AI safety company can't be trusted with defense contracts after trying to limit how the military uses its Claude models. In a legal filing that escalates one of the tech industry's most contentious battles over AI ethics versus national security, the government argues it lawfully penalized Anthropic for attempting to restrict warfighting applications. The response sets a critical precedent that could reshape how AI companies negotiate with federal agencies.
Anthropic thought it could have it both ways: selling AI to the government while keeping control over how it's used in combat. The Justice Department just told a federal court that's not how defense contracts work.
In a newly filed legal response, DOJ attorneys argue the government acted lawfully when it penalized Anthropic for attempting to impose usage restrictions on its Claude AI models. The filing, first reported by WIRED, represents the government's first detailed defense since Anthropic sued over contract penalties earlier this year.
The core issue? Anthropic wanted to prevent its Claude models from being used in lethal autonomous weapons systems and certain offensive military operations. The Department of Defense apparently had different plans. According to the government's filing, when Anthropic tried to enforce these limitations mid-contract, officials determined the company couldn't be trusted with sensitive warfighting systems that require unrestricted AI capabilities.
"Vendors seeking to provide AI capabilities to national security agencies must accept that mission requirements, not corporate ethics statements, dictate how those tools are deployed," the filing states. It's a blunt assertion that cuts to the heart of the tech industry's ongoing reckoning with military partnerships.
Anthropic built its reputation on AI safety, positioning itself as the responsible alternative to rivals like OpenAI. The company's "Constitutional AI" approach and published usage policies explicitly prohibit weapons development and military harm. But those principles collided with Pentagon expectations when Anthropic began pursuing lucrative government contracts last year.