CoreWeave's post-IPO honeymoon just hit a wall. The AI infrastructure darling saw shares tumble 8% in after-hours trading after delivering a first-quarter revenue forecast that fell short of Wall Street's expectations, marking one of the stock's steepest drops since its public debut. The miss signals potential headwinds in the red-hot GPU cloud market despite the company's strong Q4 backlog performance.
CoreWeave just gave investors a reality check on AI infrastructure growth. The specialized cloud provider's shares slid 8% in extended trading after the company delivered first-quarter revenue guidance that missed Wall Street's mark, according to CNBC.
The timing couldn't be more awkward. CoreWeave has positioned itself as the go-to infrastructure provider for AI companies that need massive GPU compute power but don't want to wait in Nvidia's endless queue. The company's rapid ascent culminated in a blockbuster IPO that valued the business at billions, riding the wave of AI infrastructure mania that's gripped markets since ChatGPT's breakout.
But the disappointing forward guidance suggests that even in the AI gold rush, not everyone's striking it rich on schedule. The gap between analyst expectations and CoreWeave's actual forecast points to potential softness in near-term demand, or perhaps more conservative deal timing as enterprise customers become pickier about their cloud spending.
What makes this miss particularly notable is the contrast with CoreWeave's Q4 performance. The company reportedly showed strong backlog numbers for the fourth quarter of 2025, indicating healthy future demand. That disconnect between solid backlog and weak Q1 guidance hints at timing issues rather than fundamental demand problems, but Wall Street isn't taking chances.
The broader AI infrastructure sector has been on a tear, with companies like CoreWeave benefiting from the insatiable appetite for GPU compute among AI developers. But cracks are starting to show. Major AI labs are increasingly focused on efficiency and cost optimization rather than pure scale. Leading AI companies have talked publicly about making their models more efficient, which could translate to slower growth in raw compute demand.