OpenAI just locked down the memory supply chain for its ambitious Stargate project, striking deals with Samsung and SK Hynix to produce up to 900,000 high-bandwidth DRAM wafers monthly. The agreements, signed during a high-level Seoul summit with South Korea's president, would more than double current industry capacity and cement OpenAI's hardware supply for its $500 billion AI infrastructure push.
OpenAI isn't just building the future of AI; it's securing the hardware supply chain to power it. The ChatGPT maker announced Wednesday it has struck agreements with memory giants Samsung Electronics and SK Hynix to manufacture DRAM wafers for the massive Stargate AI infrastructure project.
The deals came together during a Seoul summit between OpenAI CEO Sam Altman, South Korean President Lee Jae-myung, Samsung's executive chairman Jay Y. Lee, and SK chairman Chey Tae-won. It's the kind of government-level meeting that signals just how crucial memory chips have become in the AI arms race.
Under the agreements, Samsung and SK Hynix will scale production to churn out up to 900,000 high-bandwidth DRAM wafers monthly for Stargate and AI data centers. According to SK Group's statement, that figure is more than double the industry's current global capacity for high-bandwidth memory.
The timing couldn't be more critical. Stargate is OpenAI's $500 billion joint venture with Oracle and SoftBank to build AI-dedicated data centers across the United States. But even the most powerful GPUs are useless without the high-speed memory to feed them data.
"OpenAI is leaving few stones unturned in the race to build compute capacity for its AI efforts," the company noted in announcing the Korean partnerships. The statement underscores just how aggressively OpenAI is moving to lock up its hardware supply chain ahead of competitors.
Wednesday's memory chip deals cap a frenetic month of infrastructure investment. Just two weeks ago, Nvidia announced plans to invest up to $100 billion in OpenAI, giving the AI company access to over 10 gigawatts of compute capacity through Nvidia's training systems. The following day, OpenAI revealed it would build five new Stargate data centers with SoftBank and Oracle, targeting 7 gigawatts of total compute capacity.