
Meta taps Amazon Graviton cores for AI agent workloads

Meta is moving tens of millions of AWS Graviton CPU cores into its AI stack, betting that agentic workloads need more than GPUs. The deal strengthens Amazon’s chip push and widens the race beyond Nvidia.

Marcus Williams

Meta has signed up for tens of millions of Amazon Web Services Graviton cores, a move that puts CPU demand at the center of the next phase of the AI infrastructure race. Meta said on April 24, 2026, that it is partnering with AWS to bring the Arm-based chips into its compute portfolio, making the company one of the largest Graviton customers in the world.

The signal matters because Graviton is not a GPU story. Meta said the new capacity is aimed at agentic AI workloads, the kind of systems that need real-time reasoning, coding, search and multi-step task coordination. In Meta’s view, no single chip architecture can efficiently handle every workload. AWS said Graviton5 cores are built for those demands, and Meta said the first deployment would start with tens of millions of cores, with room to expand as its AI capabilities grow.

The deal also shows how cloud economics are changing. For years, the market has treated Nvidia GPUs as the default engine of AI. Meta’s decision suggests that large-scale AI systems will increasingly rely on a mix of custom CPUs, accelerators and GPUs, depending on whether the job is training, inference or orchestration. That shift could push more spending toward specialized cloud hardware and make custom silicon a bigger battleground for margins and customer lock-in.

For Amazon, the agreement is a marquee win. TechCrunch reported that Meta had previously signed a six-year, $10 billion cloud deal with Google Cloud last August, but the new AWS arrangement brings a larger share of Meta’s AI spend back to Amazon’s infrastructure. It also bolsters Amazon’s case that its homegrown chips can compete for the workloads that matter most to AI builders, not just the ones that dominate headlines.


The timing sharpened that message. On April 20, 2026, Amazon and Anthropic expanded their partnership, with Anthropic committing more than $100 billion over 10 years to AWS technologies and securing up to 5 gigawatts of capacity across current and future generations of Amazon custom silicon, including Trainium and tens of millions of Graviton cores. Anthropic said it currently uses more than one million Trainium2 chips and expects nearly 1 gigawatt of Trainium2 and Trainium3 capacity to come online by the end of 2026, with expansion in Asia and Europe.

Amazon has said demand for its custom silicon is running hot. Amazon CEO Andy Jassy recently said two large AWS customers asked to buy all Graviton instance capacity available in 2026, and Amazon turned them down because broader demand was stronger. Taken together, the Meta and Anthropic deals show a market that is no longer organized around GPUs alone. The new contest is over who supplies the full stack of AI compute, and Amazon is pressing hard to prove that its own chips belong at the center of it.
