Nvidia invests $2 billion to accelerate CoreWeave's AI data centers
Nvidia's $2 billion stock purchase backs CoreWeave's plan to add more than 5 gigawatts of AI capacity by 2030 and secures early deployment of upcoming products.

Nvidia paid $2 billion to acquire Class A common stock in CoreWeave at $87.20 per share, deepening a strategic partnership intended to speed the buildout of more than 5 gigawatts of AI-optimized data-center capacity by 2030. The deal, announced Jan. 26, 2026, gives Nvidia a substantial equity position in a specialist cloud provider while securing commitments for early deployment of upcoming Nvidia products at CoreWeave facilities.
The purchase equates to roughly 22.94 million shares and represents a rare direct capital commitment from a leading chipmaker to a cloud operator. For CoreWeave, the injection of cash and formalized ties to Nvidia are intended to underwrite an aggressive expansion of GPU-heavy capacity tailored for large language models, generative AI services, training farms and other compute-intensive workloads. For Nvidia, the transaction helps lock in demand and channels installations of future hardware to a ready customer with nationwide capacity plans.
The scale of the planned expansion is notable. More than 5 gigawatts of data-center capacity implies a major increase in power draw and cooling infrastructure at hyperscale levels. Building that capacity by 2030 requires not only servers and accelerators but also substantial electrical hookups, substation work and long-term power contracts. CoreWeave will need to secure land, permits and grid agreements as it sites new campuses, an undertaking that will attract scrutiny from utility regulators and communities concerned about local impacts and emissions.
Industry analysts said the transaction illustrates how chip suppliers and specialized cloud providers are forging tighter commercial links to meet escalating demand for accelerated compute. By ensuring early deployment of upcoming products, Nvidia gains a channel to get its newest platforms into production environments quickly, which can shape customer preference and set performance benchmarks. CoreWeave gains privileged access to inventory and technical integration support at a time when supply constraints and high demand have favored customers with close vendor relationships.

The deal also raises questions about competition and market concentration. As cloud incumbents and dedicated GPU clouds race to offer the lowest-latency, highest-throughput environments for AI workloads, exclusive or prioritized supply arrangements could influence pricing, availability and the pace of innovation across the ecosystem. Regulators will likely watch whether such partnerships limit options for enterprises seeking alternative hardware providers or independent cloud suppliers.
Sustainability and grid resilience will be central issues for policymakers and the broader public. A multigigawatt expansion dedicated to AI workloads will likely accelerate conversations about renewable energy procurement, demand response and on-site efficiency measures. How CoreWeave balances performance with carbon intensity will affect both its operating costs and public acceptance.
The transaction signals a maturing market for specialized AI infrastructure in which silicon vendors not only sell chips but also take financial stakes in the systems that consume them. The outcome will matter for research labs, startups and enterprises that depend on timely access to top-tier accelerators, and for communities and regulators confronting the infrastructure demands of a rapidly expanding AI economy.

