The AI industry's ethics era is over. OpenAI and Anthropic - companies that built their reputations on safety-first approaches - have each quietly signed $200 million Pentagon contracts, marking a dramatic pivot from their founding principles. Former OpenAI safety engineer Heidy Khlaaf warns that this rush to militarize AI threatens global security.
The transformation happened quietly, but its implications are staggering. In early 2024, OpenAI scrubbed the words "military and warfare" from its usage policy's list of prohibited applications - a single-line deletion that cleared the way for a $200 million Department of Defense contract the following year. The company that once positioned itself as AI's ethical guardian now partners with Anduril, a defense contractor specializing in autonomous weapons systems.
Anthropic followed suit with its own $200 million Pentagon deal, despite building its entire brand around Constitutional AI and safety research. The Claude maker's partnership with Palantir allows its models to power US defense and intelligence operations - a stark departure from the company's cautious public messaging about AI risks.
Heidy Khlaaf saw this coming. As a senior systems safety engineer at OpenAI from late 2020 to mid-2021, she helped develop safety frameworks for the company's Codex coding tool during what she calls a "critical time." Now chief AI scientist at the AI Now Institute, Khlaaf warns that leading AI companies are being "far too cavalier about deploying generative AI in high-risk scenarios."
The contracts tell the story of an industry-wide pivot. Beyond the headline-grabbing OpenAI and Anthropic deals, Amazon, Google, and Microsoft are all pushing AI products for defense and intelligence use. Google recently dropped its pledge not to use AI for weapons, while Microsoft faces growing employee protests over its contracts with the Israeli military.
What's driving this sudden embrace of military money? Industry insiders point to the enormous cost of training frontier AI models and the relatively small pool of customers willing to pay premium prices. Pentagon contracts offer guaranteed revenue that consumer markets can't match - a welcome cushion as AI companies pour billions into the race toward artificial general intelligence.
But Khlaaf's concerns go deeper than corporate strategy. She points to a fundamental risk: these same AI systems could enable bad actors to develop chemical, biological, radiological, and nuclear (CBRN) weapons - threats the AI companies themselves acknowledge in their own safety research. The military applications being developed today create blueprints that hostile nations or terrorist organizations could replicate.