A federal judge is pressing the Defense Department to justify its unprecedented decision to blacklist Anthropic as a national security risk - the first time an American company has faced this designation. The courtroom showdown, reported by CNBC, centers on whether the Pentagon's criteria for labeling the AI startup a supply chain threat meet even a basic legal standard. The judge's skepticism - captured in the pointed remark "That seems a pretty low bar" - signals potential trouble for DOD's case and could reshape how the government regulates domestic AI companies.
The Defense Department is facing tough questions from the bench over its decision to brand Anthropic - maker of the Claude AI assistant - as a threat to U.S. national security. In what marks the first time an American company has received this designation, DOD added the San Francisco-based AI startup to its supply chain risk list, effectively blocking it from federal contracts and setting off alarm bells across the tech industry.
During courtroom proceedings reported by CNBC, the presiding judge expressed visible skepticism about the Pentagon's justification. "That seems a pretty low bar," the judge remarked as DOD attorneys outlined their criteria for the designation. The pointed remark cuts to the heart of Anthropic's legal challenge - whether the government applied rigorous standards or rushed to judgment against a domestic AI player.
The stakes couldn't be higher for Anthropic, which has positioned itself as a safety-focused alternative to OpenAI and has raised billions in funding from investors including Google and Spark Capital. A national security designation doesn't just lock the company out of lucrative defense contracts - it also sends a chilling signal to enterprise customers, who may view association with a blacklisted firm as a risk in itself. For a company built on trust and safety credentials, the label threatens its core business relationships.