
MatX Raises $500M to Build LLM Chip Challenger to Nvidia

Published by Vivek Gupta
Updated Feb 25, 2026
MatX, an AI chip startup founded by former Google TPU engineers, has raised more than $500 million in a Series B round to develop a dedicated large language model accelerator it claims could significantly outperform today’s GPU-based systems. The funding, announced Tuesday, highlights growing investor interest in alternatives to Nvidia’s dominance in AI compute.

High-stakes funding round

The Series B was led by Jane Street and Situational Awareness, the investment fund launched by former OpenAI researcher Leopold Aschenbrenner. Additional backers include Marvell Technology, NFDG, Spark Capital, and Stripe co-founders Patrick and John Collison. Angel investors such as Nat Friedman and Daniel Gross were also reported to be involved.

MatX did not disclose a precise valuation but said the round places it in the multi-billion-dollar range. The raise follows the company’s roughly $100 million Series A in 2024, led by Spark Capital, which valued the startup above $300 million at the time.

The round arrives amid a surge of venture funding into AI chip startups seeking to reduce reliance on Nvidia hardware, which remains supply-constrained and expensive.

Built by former Google TPU engineers

MatX was founded in 2023 by Reiner Pope and Mike Gunter, both veterans of Google’s TPU program. Pope previously led AI software efforts for Google’s in-house accelerators, while Gunter worked as a lead hardware designer on TPU systems.

The pair are now building what they describe as an LLM-optimized processor called MatX One. Unlike inference-only chips from some competitors, the company says its architecture is designed to handle both training and inference workloads, including pre-training, reinforcement learning, prefill, and decode stages.

Pope has described the design philosophy as combining a low-latency, SRAM-first architecture with high-bandwidth memory support for long context windows, along with custom numerical formats tuned specifically for LLM workloads.
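MatX has not published implementation details, but the prefill/decode distinction the company emphasizes can be illustrated with a back-of-the-envelope arithmetic-intensity calculation. All figures below are hypothetical examples chosen for illustration, not MatX One specifications:

```python
# Illustrative model of why decode is memory-bandwidth-bound while
# prefill is compute-bound. Hypothetical numbers, not MatX One specs.

def arithmetic_intensity(batch_tokens: int, params: float) -> float:
    """FLOPs per byte of weight traffic for one forward pass.

    A dense model does roughly 2 * params FLOPs per token, while the
    weights (~1 byte/param at FP8) are streamed from memory once per
    pass, so intensity scales with tokens processed together.
    """
    flops = 2 * params * batch_tokens
    bytes_moved = params  # weights read once per pass
    return flops / bytes_moved

PARAMS = 70e9  # hypothetical 70B-parameter model

prefill = arithmetic_intensity(batch_tokens=2048, params=PARAMS)  # long prompt
decode = arithmetic_intensity(batch_tokens=1, params=PARAMS)      # one token/step

print(f"prefill intensity: {prefill:.0f} FLOPs/byte")  # 4096
print(f"decode intensity:  {decode:.0f} FLOPs/byte")   # 2
```

At roughly 2 FLOPs per byte moved, decode throughput is limited by memory bandwidth rather than raw compute, which is why designs that keep hot state in fast SRAM or lean on high-bandwidth memory target that stage in particular.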

Ambitious performance targets

MatX is positioning its chip directly against Nvidia GPUs, claiming its system-level performance could be up to ten times faster for large language model training and inference. The company says the MatX One aims to deliver higher throughput on LLM workloads than any currently announced system while maintaining low latency.

If those claims hold in production environments, the chip could address a major bottleneck facing AI labs and cloud providers. Many organizations continue to struggle with limited access to high-end GPUs as demand for large-scale model training accelerates.

Analysts note that several startups, including Etched, Groq, d-Matrix, and SambaNova, are pursuing specialized AI silicon. MatX’s approach differs in targeting a more general-purpose LLM accelerator rather than focusing solely on inference or narrow operations.

Manufacturing and timeline plans

The new capital will primarily fund the finalization and tape-out of the MatX One design, which the company aims to complete within about a year. MatX is also working to secure manufacturing capacity with TSMC, a critical step given ongoing pressure on advanced semiconductor supply chains.

Beyond chip production, the company plans to build large-scale clusters composed of hundreds of thousands of accelerators to serve frontier AI customers. The target market includes major AI labs and cloud providers that are currently constrained by GPU availability.

MatX expects to begin volume production and initial shipments around 2027, placing it in the next wave of purpose-built AI hardware entering the market.

A growing challenge to Nvidia’s grip

The funding round underscores how aggressively investors are backing potential Nvidia challengers. One report noted that MatX captured the largest share of roughly $1.1 billion in AI chip venture funding announced in the same week.

Whether MatX can translate architectural promises into real-world performance remains an open question. The AI chip market has seen ambitious claims before, and large-scale deployment success often depends on software ecosystem maturity as much as raw silicon capability.

Still, with experienced TPU veterans at the helm and significant capital now secured, MatX has positioned itself as one of the more closely watched entrants in the race to build next-generation infrastructure for large language models.