Nvidia is making a decisive push into open source AI, acquiring SchedMD, the company behind the ubiquitous Slurm workload management system, while simultaneously launching the Nemotron 3 family of open models. The twin moves signal Nvidia's strategy to control not just the chips powering AI, but also the critical software infrastructure developers depend on for training and deploying AI systems at scale.
Nvidia is making its move on two fronts. The semiconductor giant announced Monday that it has acquired SchedMD, the company that stewards Slurm, the open source workload management system that has been quietly running behind the scenes at nearly every AI data center, university lab, and research facility since 2002.
Slurm is one of those unglamorous but absolutely critical pieces of infrastructure that makes modern computing possible. It schedules and manages computational resources across clusters of machines, deciding which jobs run where and when. In the AI era, it's become essential for orchestrating massive training runs and inference workloads. SchedMD was founded in 2010 by Slurm's original creators Morris Jette and Danny Auble, with Auble currently serving as CEO.
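For readers who have never touched it directly, Slurm's everyday interface is a batch script: a list of resource requests, followed by the command to run. A minimal sketch of what submitting a GPU training job looks like (the node count, GPU count, and script name here are illustrative placeholders, not taken from any real cluster):

```shell
#!/bin/bash
#SBATCH --job-name=train-demo      # label shown in the queue
#SBATCH --nodes=2                  # request two machines from the cluster
#SBATCH --gpus-per-node=8          # eight GPUs on each (hypothetical setup)
#SBATCH --time=04:00:00            # wall-clock limit; Slurm ends the job after this
#SBATCH --output=train-%j.log      # %j expands to the job ID Slurm assigns

# srun launches the command across the allocated nodes
srun python train.py
```

Submitted with `sbatch`, the script goes into a queue, and Slurm decides which nodes it runs on and when, which is exactly the "which jobs run where and when" role that makes it so central to large training runs.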
Nvidia declined to disclose deal terms but signaled serious commitment. In the blog post announcing the acquisition, the company said it has been partnering with SchedMD for more than a decade and views Slurm as critical infrastructure for generative AI. Nvidia promised to keep the software open source and vendor-neutral while accelerating development and expanding its compatibility across different systems. That commitment matters: locking Slurm into Nvidia's ecosystem, even indirectly, would have triggered serious backlash from the academic and research computing communities that depend on it.
But the SchedMD acquisition is only half the story. On the same day, Nvidia launched Nemotron 3, a new family of open source AI models that the company claims is the most efficient suite for building AI agents. The launch signals where Nvidia thinks AI is heading and what capabilities it believes developers actually need.
The Nemotron 3 lineup comes in three flavors designed for different use cases. Nemotron 3 Nano targets focused tasks where smaller models make sense for inference efficiency. Nemotron 3 Super is built specifically for multi-agent systems where different AI models need to work together and coordinate. Nemotron 3 Ultra handles the heavier lifting for complex applications requiring more sophisticated reasoning.
"Open innovation is the foundation of AI progress," Nvidia CEO Jensen Huang said in the announcement. "With Nemotron, we're transforming advanced AI into an open platform that gives developers the transparency and efficiency they need to build agentic systems at scale." That last part matters. Huang is specifically calling out agentic systems, the autonomous AI agents that can plan, execute tasks, and iterate without constant human supervision. That's where Nvidia sees the next wave of valuable AI applications heading.