OpenAI just made official what Wall Street has suspected for months - a massive partnership with Broadcom to build 10 gigawatts of custom AI accelerators starting in late 2026. The announcement sent Broadcom shares soaring more than 10% in premarket trading, marking another seismic shift in the AI infrastructure wars as OpenAI diversifies beyond Nvidia's grip.
The semiconductor industry just got another jolt. OpenAI and Broadcom made official Monday what analysts have been whispering about since September - a partnership to jointly develop and deploy 10 gigawatts of custom AI accelerators that could reshape how the world's most valuable AI company scales its operations.
Broadcom shares exploded over 10% in premarket trading on the news, adding to what's already been a monster year for the chip giant. The stock has climbed 40% in 2025 after more than doubling in 2024, pushing the company's market cap past $1.5 trillion.
But this isn't just another tech partnership. It's OpenAI betting its future on custom silicon designed specifically for its workloads, potentially breaking free from the Nvidia chokehold that's defined the AI boom. "These things have gotten so complex you need the whole thing," OpenAI CEO Sam Altman explained in a joint podcast the companies released alongside the announcement.
The scale is staggering. OpenAI currently operates on just over 2 gigawatts of compute capacity - enough to power ChatGPT's global user base and develop breakthrough models like Sora. The Broadcom deal alone covers five times that capacity. But according to Altman's comments to CNBC, even 10 gigawatts is just the beginning.
"Even though it's vastly more than the world has today, we expect that very high-quality intelligence delivered very fast and at a very low price - the world will absorb it super fast," Altman said. The math is telling: OpenAI has announced roughly 33 gigawatts of compute commitments across partnerships with Nvidia, Oracle, AMD, and now Broadcom over the past three weeks alone.
The financial implications are massive. Industry estimates peg a 1-gigawatt data center at roughly $50 billion, with about $35 billion of that typically going to chips at Nvidia's current pricing - implying a price tag on the order of $500 billion for a 10-gigawatt buildout. By designing its own silicon with Broadcom, OpenAI could cut those costs and stretch its infrastructure dollars significantly further.
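For readers who want to check the arithmetic, the figures above reduce to a simple back-of-envelope model. The per-gigawatt numbers are the industry estimates quoted in this article; the projected totals are straightforward multiplication, not reported figures:

```python
# Back-of-envelope cost model from the industry estimates cited above:
# roughly $50B per gigawatt of data center, of which about $35B goes to chips.
# The per-GW inputs come from the article; the totals are simple extrapolation.

COST_PER_GW_B = 50       # total data-center cost, in $ billions per gigawatt
CHIP_COST_PER_GW_B = 35  # chip share of that cost, in $ billions per gigawatt
DEAL_GW = 10             # size of the OpenAI-Broadcom deployment, in gigawatts

total_cost_b = DEAL_GW * COST_PER_GW_B            # implied total buildout cost
chip_cost_b = DEAL_GW * CHIP_COST_PER_GW_B        # implied chip spend
chip_share = CHIP_COST_PER_GW_B / COST_PER_GW_B   # fraction of cost going to chips

print(f"Implied 10 GW buildout: ${total_cost_b}B total, "
      f"${chip_cost_b}B in chips ({chip_share:.0%} of cost)")
```

At these estimates, chips account for 70% of the cost of a buildout - which is exactly the line item custom silicon is meant to attack.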