A group of former defense and intelligence officials is calling on Congress to investigate the Pentagon's controversial decision to label Anthropic a supply chain risk, an unprecedented public challenge to a Department of Defense move that has already forced government contractors to drop the AI company's Claude chatbot. The former officials warn that the designation sets a dangerous precedent, one that could undermine America's AI competitiveness at a critical moment in the global tech race.
The letter, sent to Congressional oversight committees this week, represents a rare public split between former national security insiders and current Pentagon leadership. According to sources familiar with the matter, the signatories include veterans of multiple administrations who worry the DoD's decision could backfire spectacularly, pushing cutting-edge AI capabilities away from government use just as China accelerates its own military AI programs.
The controversy erupted after the Pentagon quietly added Anthropic to its supply chain risk designation list, effectively barring government contractors from using the company's Claude AI assistant. The move sent shockwaves through the defense tech ecosystem, with contractors scrambling to rip out Claude integrations and switch to alternatives. Some companies told CNBC they received less than 72 hours' notice to comply or risk losing federal contracts worth millions.
What makes this case unusual is Anthropic's profile. Unlike the Chinese-owned companies that have faced similar restrictions, Anthropic is a San Francisco-based firm backed by Google, Amazon, and prominent Silicon Valley investors. The company has positioned itself as a leader in AI safety, often pointing to its Constitutional AI approach, which is designed to make models more controllable and aligned with human values.