UC San Diego's Hao AI Lab just got its hands on one of NVIDIA's most powerful systems, and its research is already reshaping how large language models are served in production. The lab, which has been quietly influencing how companies like NVIDIA architect their AI infrastructure, is now using the DGX B200 to push the boundaries of low-latency AI responses. And the research coming out of UC San Diego doesn't stay in the lab: it's already powering real-world systems.
The Hao AI Lab at UC San Diego just leveled up its research capabilities. The team received an NVIDIA DGX B200, now housed at the university's San Diego Supercomputer Center, giving researchers immediate access to enterprise-grade computing power most academic labs can only dream of.
Here's what makes this significant: the Hao AI Lab isn't just consuming AI infrastructure, it's designing how AI infrastructure should work. The lab's research on DistServe, an approach to serving large language models that separates the prefill and decode phases of inference onto different GPUs, directly influenced the architecture of NVIDIA Dynamo, an open-source framework now deployed in production systems worldwide. The new DGX B200 gives the team the hardware to push that research even further.
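To make the disaggregation idea concrete, here is a minimal toy sketch of prefill/decode separation: one worker pool processes the full prompt and builds the attention cache, a second pool generates tokens from that cache, and the two communicate through a queue. This is an illustration of the general pattern only, not DistServe or Dynamo code; every class, function, and field name below is invented for the example.

```python
# Toy sketch of disaggregated LLM serving: prefill and decode run in separate
# worker pools so each phase can be scheduled and scaled independently.
# Illustrative only; not DistServe's or Dynamo's actual implementation.

from dataclasses import dataclass, field
from queue import Queue
from threading import Thread


@dataclass
class Request:
    prompt: str
    max_new_tokens: int = 8
    kv_cache: list = field(default_factory=list)  # stand-in for attention KV state
    output: list = field(default_factory=list)


def prefill_worker(prefill_q: Queue, decode_q: Queue) -> None:
    """Process the whole prompt once, build the KV cache, then hand off."""
    while True:
        req = prefill_q.get()
        if req is None:                  # shutdown signal
            decode_q.put(None)
            return
        req.kv_cache = req.prompt.split()  # toy "model": cache is just the tokens
        decode_q.put(req)                # transfer cache + request to the decode pool


def decode_worker(decode_q: Queue, done_q: Queue) -> None:
    """Generate tokens one at a time using the cache built during prefill."""
    while True:
        req = decode_q.get()
        if req is None:
            done_q.put(None)
            return
        for i in range(req.max_new_tokens):
            req.output.append(f"tok{i}")  # toy autoregressive step
        done_q.put(req)


if __name__ == "__main__":
    prefill_q, decode_q, done_q = Queue(), Queue(), Queue()
    Thread(target=prefill_worker, args=(prefill_q, decode_q), daemon=True).start()
    Thread(target=decode_worker, args=(decode_q, done_q), daemon=True).start()

    prefill_q.put(Request(prompt="Explain disaggregated serving"))
    prefill_q.put(None)                  # shut down after one request

    while (result := done_q.get()) is not None:
        print(result.prompt, "->", " ".join(result.output))
```

Because the two pools are decoupled, an operator can add prefill workers for prompt-heavy traffic or decode workers for long generations, which is the kind of latency and throughput trade-off disaggregated serving is designed to expose.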
"DGX B200 is one of the most powerful AI systems from NVIDIA to date, which means that its performance is among the best in the world," said Hao Zhang, an assistant professor at UC San Diego's Halıcıoğlu Data Science Institute. "It enables us to prototype and experiment much faster than using previous-generation hardware." Translation: the team can now test more ideas, faster, with more compute—which is exactly what drives breakthrough research.