A federal judge isn't buying the Pentagon's case against Anthropic. During a heated Tuesday hearing, the district court questioned why the Department of Defense labeled the Claude AI developer a supply-chain risk, calling the government's move an apparent "attempt to cripple" the company. The rare judicial pushback could reshape how Washington regulates AI companies and signals growing skepticism about national security claims used to restrict American tech firms.
The courtroom drama unfolding in federal district court this week puts Anthropic at the center of a high-stakes battle over AI regulation and government power. The judge's pointed questioning during Tuesday's hearing suggests the Department of Defense may struggle to justify why it designated the maker of Claude - one of the most widely used AI assistants - as a threat to national security.
The "troublesome" characterization from the bench represents a significant setback for the Pentagon's case. Federal judges rarely use such critical language during preliminary hearings, especially in matters involving defense and national security. The comment signals the court sees potential overreach in the government's attempt to restrict Anthropic from federal procurement and potentially broader commercial activities.
Anthropic has positioned itself as one of the industry's leading safety-focused AI companies since its 2021 founding by former OpenAI executives. The company's Claude models compete directly with ChatGPT and have been adopted by major enterprises including Amazon, which invested $4 billion in the startup last year. That investment could now be in jeopardy if the supply-chain designation stands.
The Pentagon's motivation for targeting Anthropic remains murky. Unlike cases involving Chinese tech firms like Huawei, where foreign government ties drove security concerns, Anthropic is a Delaware-incorporated company founded by American AI researchers. The supply-chain risk framework typically applies to companies with adversarial foreign ownership or operations in restricted jurisdictions - categories that don't obviously fit Anthropic's profile.