Lambda just dropped news that's sending ripples through the AI infrastructure world. The cloud computing startup announced a multibillion-dollar deal with Microsoft on Monday, securing access to tens of thousands of Nvidia chips for what CEO Stephen Balaban calls "probably the largest technology buildout we've ever seen."
The timing tells the whole story. Consumer appetite for AI-powered services has exploded, with millions now using ChatGPT, Claude, and other AI assistants daily. "The industry is going really well right now, and there's just a lot of people who are using these AI services," Balaban told CNBC's Money Movers on Monday. That surge in demand is forcing companies to scramble for computing power.
The partnership isn't exactly new territory for these companies - Lambda and Microsoft have been working together since 2018. But this deal represents a massive escalation, featuring Nvidia's cutting-edge GB300 NVL72 systems. These are the same high-performance rack-scale systems that CoreWeave, the hyperscaler that's been making waves in AI infrastructure, has deployed across its network.
"We love Nvidia's product," Balaban said, cutting straight to the point. "They have the best accelerator product on the market." It's a sentiment shared across the industry: Nvidia's GPUs have become the gold standard for AI training and inference, driving the company's stock to record highs and making it one of the most valuable companies in the world.
Founded in 2012, Lambda has positioned itself well for this AI boom. The company provides cloud services and software for training and deploying AI models, serving more than 200,000 developers who need serious computing power. It's not just renting out servers - it's building the infrastructure that powers the AI revolution.
The scale of this buildout is staggering. Lambda operates dozens of data centers and isn't slowing down. Balaban said the company plans to continue both leasing existing facilities and constructing its own infrastructure - a strategy that reflects the reality that traditional cloud providers can't keep up with AI's voracious appetite for computing resources.