Qualcomm just fired a shot across Nvidia's bow. The mobile processor giant announced Monday that it's launching two AI inference chips for data centers: the AI200 next year and the AI250 in 2027, both built on the company's Hexagon neural processing units that already power AI features in smartphones and laptops. It's Qualcomm's boldest move yet into the lucrative AI chip market that Nvidia currently dominates.
The move represents a fascinating role reversal in the semiconductor world. While chipmakers have long adapted GPU technology for mobile devices, Qualcomm is doing the opposite: scaling up mobile-first AI processing for rack-scale data centers. According to CNBC's reporting, the processors can work in configurations of up to 72 chips functioning as a single computer, similar to how Nvidia and AMD deploy their GPUs.
The timing couldn't be more strategic. As AI inference costs become a major concern for enterprises deploying large language models, Qualcomm is positioning itself as the efficiency-focused alternative to Nvidia's training-oriented chips. The AI200 packs 768GB of RAM optimized specifically for AI inference workloads, while the AI250 promises what Qualcomm calls "a generational leap in efficiency" with much lower power consumption.
This isn't just theoretical competition. Saudi Arabia's Humain, backed by the kingdom's Public Investment Fund, has already committed to using both chips in computing systems across the region. The partnership builds on an existing agreement to develop AI data centers throughout Saudi Arabia, giving Qualcomm a guaranteed customer for its inaugural data center chips.
What makes this launch particularly intriguing is Qualcomm's mobile heritage. The company's Hexagon neural processing units have been quietly powering AI features in Snapdragon mobile chips and laptop processors for years. Now they're scaling that same architecture for enterprise workloads, potentially offering a more power-efficient approach than traditional GPU-based solutions.