MatX has raised $500 million in Series B funding, led by Jane Street and Situational Awareness, a fund created by former OpenAI researcher Leopold Aschenbrenner.
The startup is building custom AI processors designed specifically to train large language models faster and more efficiently than general-purpose GPUs. MatX’s core ambition is aggressive: make its chips roughly 10× better than current leading hardware at LLM training performance and inference efficiency.
The company is part of a broader shift in AI infrastructure, where specialized chips are becoming as important as models themselves. Instead of relying on general-purpose computing hardware, MatX is designing silicon optimized for matrix math, parallel AI workloads, and long-context model training. In practice, that means tailoring chip architecture, memory access patterns, and data movement efficiency specifically for AI workloads rather than traditional computing tasks.
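Why specialize silicon for matrix math and data movement? A back-of-envelope sketch helps: dense matrix multiplies, which dominate LLM training, do far more arithmetic per byte moved as matrices grow, so an accelerator that keeps its matrix units fed pays off. The numbers below are illustrative assumptions, not MatX design data:

```python
# Illustrative sketch (hypothetical sizes, not MatX specifics): for a
# dense matmul of an (m, k) activation with a (k, n) weight, compute
# grows as m*k*n while minimum memory traffic grows as m*k + k*n + m*n,
# so arithmetic intensity (FLOPs per byte) rises with matrix size --
# the regime AI-specialized accelerators target.

def matmul_flops(m: int, k: int, n: int) -> int:
    """Multiply-accumulate count for an (m,k) x (k,n) matmul, as FLOPs."""
    return 2 * m * k * n

def matmul_bytes(m: int, k: int, n: int, bytes_per_elem: int = 2) -> int:
    """Minimum memory traffic: read both operands, write the result (fp16)."""
    return bytes_per_elem * (m * k + k * n + m * n)

def arithmetic_intensity(m: int, k: int, n: int) -> float:
    """FLOPs per byte moved; higher means more compute-bound."""
    return matmul_flops(m, k, n) / matmul_bytes(m, k, n)

# Hypothetical LLM-scale layer: 4096 tokens through an 8192 x 8192 weight.
print(f"{arithmetic_intensity(4096, 8192, 8192):.0f} FLOPs/byte")  # → 2048
```

At 2048 FLOPs per byte, the workload is heavily compute-bound, which is why purpose-built matrix hardware and efficient on-chip data movement, rather than general-purpose cores, are the levers startups like MatX pull.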
MatX was founded in 2023 by former Google TPU engineers. CEO Reiner Pope previously led AI software development for Google’s Tensor Processing Units, while co-founder Mike Gunter helped design TPU hardware. Their experience shows in the company’s approach: vertically integrating hardware and AI workload optimization rather than treating chips as generic computing components.
The new capital will be used to manufacture chips through TSMC, with production and commercial shipping targeted for 2027. MatX is entering a highly competitive AI chip race against other startups and incumbents building alternatives to GPUs, as demand for AI training compute continues to outpace supply.