The Pentagon found a workaround. Sources tell Wired that the Defense Department tested OpenAI's technology through Microsoft's Azure cloud platform while the AI company still banned military applications, raising serious questions about policy enforcement and corporate oversight in the rush to deploy AI. The revelation comes as OpenAI now openly courts defense contracts, but the timing suggests the military was experimenting with ChatGPT's underlying models well before the company's January 2024 policy reversal.
OpenAI built its brand on responsible AI development. The company's use policy explicitly banned military applications for years. But according to sources speaking to Wired, the Pentagon was already testing the company's models through a convenient loophole: Microsoft.
The Defense Department allegedly experimented with Microsoft's Azure-hosted version of OpenAI technology while the ChatGPT maker's military prohibition remained in place. It's a classic case of policy arbitrage: what OpenAI banned directly, the military apparently accessed indirectly through the company's biggest investor and deployment partner.
Microsoft has poured $13 billion into OpenAI since 2019, securing exclusive cloud hosting rights and enterprise distribution through Azure. That arrangement gave the tech giant its own licensing terms for OpenAI models, terms that didn't necessarily mirror OpenAI's consumer-facing restrictions. For the Pentagon, that gap proved useful.
The allegations paint a messy picture of AI governance in practice. OpenAI publicly maintained its military use ban until January 2024, when the company quietly updated its usage policies to allow defense applications. CEO Sam Altman later confirmed the shift, stating that OpenAI would work with the U.S. government on cybersecurity and other national security projects.