Goldman Sachs says AI infrastructure faces grid, optics and cooling bottlenecks
Goldman’s latest AI thesis is less about demand and more about limits. Grid capacity, optics, copper and cooling could decide who captures the upside and who gets stranded.
The constraint story behind AI infrastructure
Goldman Sachs is treating the AI buildout as a physical bottleneck story, not a simple scale story. The bank says 2027 AI server rack designs will require 50 times the power of the racks running today's internet, which is why data centers are starting to look less like IT facilities and more like industrial plants. That shift changes the economics for everyone touching the stack, from semis and networking to power equipment and project finance.
Why the grid is the first gatekeeper
The first hard limit is electricity. Goldman says the current grid was not designed for AI-era load growth after a decade of flat demand, and it warns that new natural-gas plants can take 5 to 7 years to come online. That timing mismatch is why hyperscalers and data-center operators are pursuing a mix of natural gas, renewables, behind-the-meter arrangements and long-term nuclear investments to lock in power before the next round of capacity gets trapped by interconnection queues and utility constraints.
For Goldman readers, this is not just a macro theme. It is the kind of infrastructure problem that can shape coverage in utilities, industrials, semiconductors and financing, and it can decide which clients get buildout capacity first. In a market where capacity itself is scarce, access to power becomes a competitive advantage.
The copper wall, optics shortages and cooling bottlenecks
Goldman’s “copper wall” thesis is where the story gets more granular. As AI factories become denser, traditional copper interconnects run into physical limits, pushing systems toward optics and other higher-performance networking gear. That transition matters because the firms that can supply high-speed optics, advanced cooling systems and transformers are not just selling parts; they are selling the ability to keep a rack alive at higher power density.
Cooling and electrical gear sit in the same pressure point. As each facility pushes power density higher, every additional watt demands more specialized equipment, tighter engineering and longer lead times. That is where scarcity can turn into pricing power, and why Goldman sees durable economics for suppliers in optics, cooling, transformers and related electrical infrastructure.
Depreciation is the hidden cost inside the boom
The AI capex race is not a free lunch for buyers. Goldman has warned that organizations are making their largest-ever AI infrastructure commitments while balancing accelerated obsolescence risk against potentially market-defining competitive advantages. That tension matters because hardware can age fast even when demand keeps climbing, and depreciation can hit before the revenue case fully matures.
Goldman also says compute equipment can cost 3 to 4 times more than the physical data centers themselves. That means the most expensive part of the buildout is often not the building but the machines inside it, which makes replacement cycles and write-down risk central to the investment case. For bankers and investors, the question is no longer only how much capacity gets built, but how quickly that capital becomes outdated.
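To see why the cost split matters for write-down risk, a rough depreciation sketch helps. The dollar figures and useful lives below are illustrative assumptions, not Goldman estimates; only the 3-to-4x compute-to-building cost ratio comes from the report.

```python
# Hypothetical illustration of why compute dominates the depreciation bill.
# Dollar amounts and useful lives are assumptions for illustration only.

building_cost = 1_000_000_000        # assumed $1B data-center shell
compute_cost = 3.5 * building_cost   # mid-point of the 3-4x ratio cited

building_life_years = 25             # assumed useful life for the facility
compute_life_years = 5               # assumed useful life for AI hardware

annual_building_dep = building_cost / building_life_years
annual_compute_dep = compute_cost / compute_life_years

print(f"Building depreciation: ${annual_building_dep / 1e6:.0f}M/yr")
print(f"Compute depreciation:  ${annual_compute_dep / 1e6:.0f}M/yr")
share = annual_compute_dep / (annual_building_dep + annual_compute_dep)
print(f"Compute share of total annual depreciation: {share:.0%}")
```

Under these assumptions, the machines generate roughly 95% of the annual depreciation charge, which is why replacement cycles, not construction budgets, drive the investment case.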
The scale of the shift is easy to miss until you compare the machines
Goldman’s historical benchmark makes the transformation concrete. The 2018 Summit supercomputer occupied 314 racks across 5,600 square feet and drew 13 megawatts, roughly the power use of 13,000 American homes. NVIDIA’s newer NVL72 rack delivers 5 times the compute of Summit in about 1/300th of the physical size, while modern systems can deliver 500 times more compute per watt than Summit.
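The density claim can be sanity-checked with simple arithmetic. The 5x compute and 1/300th size figures are from the article; the compute-per-square-foot multiple is just their product, and the per-home power figure is implied by the 13 MW / 13,000 homes comparison.

```python
# Back-of-the-envelope check on the Summit vs NVL72 comparison.

summit_mw = 13
homes_equivalent = 13_000

# Implied average draw per home behind the "13,000 homes" comparison.
kw_per_home = summit_mw * 1000 / homes_equivalent  # 1.0 kW per home

# One NVL72 rack: 5x Summit's compute in ~1/300th the floor space,
# so compute per square foot rises by roughly 5 * 300 = 1500x.
compute_multiple = 5
size_fraction = 1 / 300
density_multiple = compute_multiple / size_fraction

print(f"Implied power per home: {kw_per_home:.1f} kW")
print(f"Compute-density multiple vs Summit: ~{density_multiple:.0f}x")
```

That roughly 1,500x jump in compute per square foot is the arithmetic behind the shift from server-room logic to industrial-scale power and cooling.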
Those numbers show why the infrastructure debate is so different from earlier cloud cycles. Yes, the hardware is getting more efficient. But the total buildout is also becoming far more power-hungry at the rack level, and that is pulling data centers into a different operating model. The old server-room logic does not hold when a single rack can demand industrial-scale power and cooling.
Demand is still outpacing the market’s ability to build
Goldman has steadily raised its view on data-center power demand as hyperscaler spending has accelerated. It first forecast 165% growth in global data-center power demand by 2030 versus 2023 levels, then revised the estimate to 175%, and later to 220% in a March 4, 2026 report. The revisions say as much about investment behavior as they do about consumption: as long as the largest buyers keep reinvesting, the power curve steepens quickly.
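Those cumulative figures translate into steep annual growth rates. The 165%/175%/220% numbers are from the article; the conversion below is just the standard compound-growth formula over the 2023-to-2030 window.

```python
# Convert Goldman's 2030-vs-2023 total-growth forecasts into implied
# compound annual growth rates: CAGR = (1 + g)**(1/n) - 1.

years = 2030 - 2023  # 7-year window

for growth_pct in (165, 175, 220):
    multiple = 1 + growth_pct / 100  # e.g. 165% growth -> 2.65x demand
    cagr = multiple ** (1 / years) - 1
    print(f"{growth_pct}% total growth -> {cagr:.1%} per year")
```

Even the original 165% forecast implies roughly 15% annual growth in power demand, and the latest revision pushes that toward 18% a year, sustained for the rest of the decade.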
The physical market is already tight. Goldman says construction spending on data centers in the United States has tripled over the last three years, occupancy at third-party leased facilities remains near record highs across most US markets, and vacancy rates are at record lows. Planned developments now total more than 50 million square feet, about double the volume from five years ago. In other words, the buildout is large, fast and still constrained.
Where the pricing power sits
Those conditions help explain why Goldman sees a “copper wall” and other bottlenecks supporting pricing power for specific suppliers. When vacancy is low, the pipeline is full and power hookups are scarce, vendors with constrained capacity can defend margins. Optics, cooling and transformers are not generic inputs in this environment; they are chokepoints.
That is also why this is a supply-chain story as much as a demand story. The bottleneck is not just how much AI gets consumed, but whether the supporting infrastructure can be delivered quickly enough, in enough volume, and at acceptable cost. If it cannot, the winners will be the firms that own the scarce pieces of the chain rather than the ones assuming infinite scale.
The six variables that will decide the next phase
Goldman says six forces will shape the outcome: AI pervasiveness, server and compute productivity, electricity prices, policy initiatives, parts availability and people availability. That framework is useful because it keeps the debate grounded in execution, not hype. The outcome will depend on whether enough engineers, electricians, utility planners and manufacturing capacity exist to keep the buildout moving at the pace hyperscalers want.
For Goldman employees, that makes the theme especially relevant in live coverage, client conversations and career positioning. The people who can translate grid limits, supply shortages and hardware depreciation into financing structures or investment theses will be closest to the next wave of fees and opportunities. In a business where bonus season often rewards those who see around the corner, the AI infrastructure trade now depends on who understands the bottlenecks first.
What to watch next
Goldman’s broader message is that AI leadership for the rest of the decade may be decided by infrastructure choices made now. The firms that secure power, optics, cooling and electrical gear early will have a better shot at building durable capacity, while the firms that assume the market can scale endlessly may run into physics before they run out of demand.
That is the real lesson in Goldman’s latest work: AI is still a growth story, but the more important question is who gets through the grid, the copper wall and the cooling queue before the next cycle of depreciation starts.