
AMD and Meta ink 6 GW GPU deal with 160M-share warrant

AMD will supply Meta with up to 6 gigawatts of Instinct GPUs and has issued a performance-based warrant for up to 160 million AMD shares, with first shipments tied to an initial 1 GW in the second half of 2026.

Lisa Park · 3 min read

AMD and Meta announced a definitive multi-year, multi-generation infrastructure agreement under which Meta may deploy up to 6 gigawatts of AMD Instinct GPUs across its next-generation AI data centers, and AMD issued Meta a performance-based warrant for up to 160 million shares of AMD common stock. The press release was datelined SANTA CLARA, Calif., and MENLO PARK, Calif., and distributed Feb. 24, 2026.

Under the deal, the first tranche of the warrant vests when AMD ships the initial 1 gigawatt of Instinct GPU capacity, with additional tranches tied to purchases scaling to the full 6 GW. Vesting is also conditioned on AMD meeting certain stock price thresholds, and exercise of the warrant is tied to Meta satisfying technical and commercial milestones. The companies did not disclose exercise prices, tranche dollar values, specific tranche sizes beyond the 1 GW trigger, or a firm timeline for reaching the full 6 GW.

AMD said the first-wave deployment will use a custom Instinct GPU based on the MI450 architecture, and shipments supporting the first 1 GW are expected to begin in the second half of 2026. The systems are described as built on AMD’s Helios rack-scale architecture, developed jointly with Meta through the Open Compute Project, and will use AMD’s ROCm software stack in conjunction with 6th Gen EPYC CPUs codenamed Venice. Meta will be a lead customer for Venice and for a next-generation EPYC processor called Verano, which AMD describes as offering workload-specific optimizations to deliver leadership "performance-per-dollar-per-watt."

AMD highlighted the companies’ roadmap alignment across silicon, systems and software as central to the partnership. In a statement included in the materials, Dr. Lisa Su, AMD chair and chief executive, said, "We are proud to expand our strategic partnership with Meta as they push the boundaries of AI at unprecedented scale. This multi-year, multi-generation collaboration across Instinct GPUs, EPYC CPUs and rack-scale AI systems aligns our roadmaps to deliver high-performance, energy-efficient infrastructure optimized for Meta’s workloads, accelerating one of the industry’s largest AI deployments and placing AMD at the center of the global AI buildout."


The announcement follows other public commitments in the hyperscale market: Forbes noted that the Meta agreement, together with an earlier OpenAI commitment of up to 6 GW beginning in 2026, ranks among the largest AI infrastructure commitments AMD has received and elevates the company’s role as hyperscalers diversify suppliers beyond Nvidia. Industry outlets such as InsideHPC framed the partnership as an important step for AMD in competing for large-scale AI deployments.

Observers say the deal underscores the growing concentration of advanced AI infrastructure among a handful of technology platforms, raising questions about supply-chain execution, software maturity, and transparency around how the massive compute capacity will be applied. The press materials note that Meta has already deployed "millions" of AMD EPYC CPUs and runs significant Instinct MI300 and MI350 GPU fleets globally, but they do not include pricing, exclusivity terms for the custom MI450 design, or granular warrant mechanics.

Contacts listed in AMD’s distribution include Phil Hughes, AMD communications, phil.hughes@amd.com; Ashley Settle, Meta communications, ashleysettle@meta.com; and Liz Stine, AMD investor relations, liz.stine@amd.com.
