The Pentagon just dropped the hammer on Anthropic, formally designating the AI startup as a supply chain risk and barring defense contractors from using its Claude models. The unprecedented move comes as evidence emerges that Claude is being used inside Iran, marking a dramatic escalation in Washington's scrutiny of AI companies' global reach. Defense vendors will now need to certify they're not using Anthropic's technology, effectively locking the company out of billions in government contracts while rivals like OpenAI gain ground.
The Department of Defense just made it official: Anthropic is now considered a supply chain risk, a designation that will ripple through the entire defense-industrial complex and reshape how AI gets deployed in national security contexts.
According to the formal declaration reported by CNBC, defense vendors and contractors working with the Pentagon will be required to certify that they're not using Anthropic's models in any capacity. It's a stunning rebuke for a company that's raised over $7 billion from investors including Google and Salesforce Ventures, and it effectively slams the door on what could have been a massive revenue stream.
The timing tells the story. The designation comes as intelligence has surfaced showing Claude, Anthropic's flagship AI assistant, is being actively used inside Iran. While the exact nature and scale of that usage remain unclear, the mere presence of advanced AI capabilities in a sanctioned adversarial nation was apparently enough to trigger the Pentagon's risk calculus. It's the kind of scenario that keeps national security officials up at night: cutting-edge AI technology, developed with American capital and expertise, potentially supporting activities contrary to U.S. interests.
This isn't just bureaucratic box-checking. The supply chain risk designation carries real teeth. Defense contractors, from prime integrators like Lockheed Martin and Northrop Grumman down to smaller specialized vendors, will need to audit their AI toolchains and verify they're Anthropic-free. That's no small task in an era where AI models get embedded in everything from logistics software to intelligence analysis tools. The certification requirement essentially creates a compliance minefield that most contractors will navigate by simply avoiding Anthropic entirely.