The Pentagon has a contradiction on its hands. Just days after officially designating Anthropic a supply-chain risk, a move that should have frozen the AI startup out of defense contracts, the Department of Defense is still running the company's Claude AI models in active military operations in Iran, according to Palantir CEO Alex Karp. The disconnect between policy and practice reveals how dependent the U.S. military has become on cutting-edge commercial AI, even when that technology comes with national security red flags.
Karp's comments, reported by CNBC, underscore the messy reality of defense procurement in the AI era. Pentagon leadership can issue blacklist designations, but when commanders in the field are relying on Claude's language processing capabilities for intelligence analysis and operational planning, flipping the off switch isn't simple. The Iran operations represent one of the most sensitive and high-stakes military engagements currently underway, making the continued use of flagged technology all the more striking.
The supply-chain designation stems from Anthropic's funding structure. The company has taken significant investment from Chinese entities, raising concerns within the Pentagon about potential data exposure or influence operations. Defense officials have been increasingly vocal about the risks of foreign capital flowing into foundational AI companies, especially those with access to classified or sensitive military applications. But those policy concerns are now colliding head-on with operational necessity.
Palantir itself sits at the center of this tension. The defense software giant integrates multiple AI models into its platforms, offering military and intelligence customers access to various large language models depending on the task. Claude has emerged as a preferred option for certain natural language processing workflows due to its performance and longer context windows compared to competitors. Switching to alternative models mid-operation could mean degraded capability or costly retraining of military personnel already deployed.