AMD and Meta have entered a multi-year, multi-generation agreement that could see Meta deploy up to 6 gigawatts of AMD AI compute across its next wave of data centers. The deal, one of the largest disclosed AI chip partnerships to date, underscores the hyperscaler push to diversify beyond Nvidia as demand for AI infrastructure accelerates.
Under the agreement, Meta will deploy AMD Instinct GPUs alongside 6th-generation EPYC “Venice” CPUs using AMD’s Helios rack-scale architecture. The rollout will begin with a custom Instinct part derived from the MI450 platform and expand over multiple product generations.
The first 1 gigawatt of systems is expected to begin shipping in the second half of 2026, with additional phases rolling out over several years. Analysts estimate that the full 6 GW commitment could translate to roughly 2.4 million to 3 million GPUs deployed by around 2030, depending on final configurations.
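The analyst GPU range can be sanity-checked with a quick back-of-envelope calculation: dividing the 6 GW commitment by the estimated GPU counts gives the implied facility power budget per accelerator. The figures below are illustrative assumptions drawn from the estimates above, not disclosed deal terms, and the per-GPU number includes all overhead (CPUs, networking, cooling), not just the accelerator itself.

```python
TOTAL_POWER_W = 6e9  # the 6 GW deployment commitment, in watts

def implied_power_per_gpu(gpu_count: int, total_power_w: float = TOTAL_POWER_W) -> float:
    """Average facility power available per GPU, in watts.

    Note: this is facility-level power spread across GPUs, so it
    covers CPUs, networking, and cooling overhead as well.
    """
    return total_power_w / gpu_count

# Analyst-estimated range of roughly 2.4M to 3M GPUs by ~2030
for gpus in (2_400_000, 3_000_000):
    print(f"{gpus:,} GPUs -> ~{implied_power_per_gpu(gpus):,.0f} W per GPU")
```

At 2.4 million GPUs the implied budget is about 2,500 W per accelerator at the facility level; at 3 million it is about 2,000 W, which is broadly consistent with rack-scale systems built around high-power AI accelerators once cooling and networking overhead are included.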
The scale places the deal among the most significant long-term infrastructure bets in the current AI build-out cycle.
Meta will serve as a lead customer for a customized Instinct GPU tuned specifically for its AI training and inference requirements. The agreement also includes future EPYC server processors, including Venice and a follow-on chip codenamed Verano.
Meta already uses EPYC CPUs and MI300-series accelerators in parts of its infrastructure. The new deal deepens that relationship by aligning roadmaps across multiple hardware generations.
AMD says the Helios rack-scale platform is designed to support large-scale AI clusters with high bandwidth and efficient interconnects, positioning the stack for frontier-model workloads.
The partnership comes shortly after Meta reiterated plans to deploy millions of Nvidia GPUs. By expanding its AMD footprint, the company is working to reduce its reliance on a single supplier as AI demand continues to surge.
Meta CEO Mark Zuckerberg said the collaboration will support the company’s long-term push toward what he describes as personal superintelligence, while also building a more resilient and flexible compute foundation.
Industry analysts cited by CNBC estimate the agreement could be worth tens of billions of dollars over at least four years, given the scale implied by a 6 GW deployment.

As part of the deal, AMD has issued Meta a performance-based warrant covering up to 160 million AMD shares. The equity vests in tranches tied to deployment milestones rather than upfront commitments.
The first tranche will vest once 1 GW of systems has shipped. Additional tranches are linked to Meta scaling purchases toward the full 6 GW target, along with share price and commercial performance conditions.
The structure is designed to align incentives over the multi-year build-out rather than treating the agreement as a one-time hardware sale.
AMD CFO Jean Hu said the partnership is expected to drive substantial multi-year revenue growth and be accretive to non-GAAP earnings per share. The announcement was viewed by market observers as a significant validation of AMD’s AI accelerator roadmap.
AMD shares rose following the news, with analysts describing the deal as evidence that hyperscalers are increasingly willing to treat AMD as a serious second source for large-scale AI infrastructure.
The agreement highlights a broader shift underway in the AI hardware market. While Nvidia remains the dominant supplier, hyperscale customers are actively seeking alternative vendors to reduce supply risk and pricing pressure.
Whether AMD can close the performance and ecosystem gap at scale remains a key open question. Software maturity, networking efficiency, and developer adoption will play major roles alongside raw silicon capability.
Still, the Meta partnership signals that large cloud players are willing to commit meaningful capacity to diversify their AI stacks. If execution stays on track, the deal could mark one of the most consequential competitive moves in the current AI infrastructure cycle.