The planet can't keep up
The AI bottleneck everyone predicted was compute: better chips, bigger clusters, smarter models. But the constraint that is turning out to be far more stubborn is physical: labor, copper, water, electricity, and the sheer amount of money needed to reshape the planet's infrastructure at this speed. McKinsey estimates that global spending on data centers could reach $7 trillion by 2030. That is not a projection of what the industry hopes to spend. It is the estimated cost of the infrastructure already being planned, with 100 to 110 gigawatts of new data center capacity in various stages of development worldwide. We solved the intelligence problem faster than anyone expected. Now we are discovering that intelligence needs a body, and that body is made of atoms.
The numbers are staggering
The scale of what is being built defies easy comparison. xAI committed over $20 billion for a single data center complex in Southaven, Mississippi, which, combined with nearby facilities, will give the company access to 2 gigawatts of computing power. Amazon raised a record $37 billion in U.S. bond markets in a single month and followed it with a $17 billion euro-denominated deal. Alphabet sold a rare 100-year bond tranche as part of a $32 billion debt package. Bank of America analysts project $1 trillion in hyperscaler-related investment-grade bond issuance through 2030. Big Tech's combined AI spending in 2026 alone is projected to exceed $635 billion, according to S&P Global. Other estimates put the figure closer to $650 billion or even $725 billion. Morgan Stanley Research forecasts that U.S. data center demand could reach 74 gigawatts by 2028, with a projected shortfall of roughly 49 gigawatts in available power access. These are not abstract figures. Each gigawatt represents real land, real power plants, real transmission lines, and real construction crews.
The resource gap
The first constraint is energy. Data centers already account for about 1.5% of global electricity demand, a figure the International Energy Agency expects to rise to roughly 3% by 2030, equivalent to the total electricity consumption of Japan. AI data centers are particularly power-hungry, with a single hyperscale AI facility consuming as much electricity as 100,000 households.

But electricity is only the beginning. Copper is emerging as a critical chokepoint. The International Copper Study Group projects a 150,000 metric-ton deficit in the global refined copper market in 2026, and analysts warn that supply may meet only 70% of global copper demand by 2035. Data centers are massive consumers of the metal, from power distribution systems to networking cables. A strained copper supply chain could slow data center buildouts regardless of how much capital is available.

Water is another bottleneck. Traditional data center cooling requires enormous volumes of it. In regions already facing water stress, new facilities trigger fierce competition with agriculture and residential use.

Then there is labor. Building a gigawatt-scale data center campus is a civil engineering project on the scale of a small city. It requires electricians, pipefitters, heavy equipment operators, and specialized technicians. The talent pool is not growing anywhere near fast enough. Lead times for critical equipment already exceed 50 weeks in some categories, according to McKinsey, and that is before accounting for the workforce needed to install it.
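The "100,000 households" comparison is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming a typical US household uses roughly 10,700 kWh per year (a continuous draw of about 1.2 kW) and an illustrative facility draw of 150 MW; neither figure comes from the sources above:

```python
# Back-of-envelope check on the "hyperscale facility = 100,000 households" claim.
# Both inputs are assumptions for illustration, not figures from the article.

HOURS_PER_YEAR = 8760
household_kwh_per_year = 10_700                              # assumed US average
household_avg_kw = household_kwh_per_year / HOURS_PER_YEAR   # ~1.22 kW continuous

facility_mw = 150                                            # assumed facility draw
households_equivalent = facility_mw * 1_000 / household_avg_kw

print(f"Average household draw: {household_avg_kw:.2f} kW")
print(f"A {facility_mw} MW facility ≈ {households_equivalent:,.0f} households")
```

Under these assumptions a single facility in the low hundreds of megawatts already lands in the 100,000-household range, which is why gigawatt-scale campuses register as city-sized loads on a grid.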
The money problem
Even if every physical resource were available tomorrow, the financing challenge would remain formidable. The $7 trillion figure assumes capital markets stay enthusiastic about AI returns that have not yet materialized at the scale investors are betting on. The hyperscalers can tap their own cash flows to finance about half their spending, according to Morgan Stanley. The rest must come from debt markets, infrastructure funds, and institutional capital. BlackRock, Microsoft, and Global Infrastructure Partners have formed an alliance targeting $100 billion. KKR and Energy Capital Partners have a $50 billion fund aimed at building data centers and power generation side by side. This is a bet that AI will generate enough economic value to justify the largest private infrastructure buildout in human history. If it does, the returns will be extraordinary. If demand plateaus or shifts in unexpected ways, the world will be left with a lot of very expensive real estate.
Jevons paradox strikes again
There is a cruel irony at work. As AI models become more efficient, making each unit of intelligence cheaper to produce, demand does not fall. It explodes. This is Jevons paradox, the observation that when the cost of using a resource drops, total consumption tends to increase rather than decrease. William Stanley Jevons described this phenomenon in the 1860s when more efficient steam engines led not to less coal consumption but to vastly more of it. Microsoft CEO Satya Nadella invoked the concept explicitly after DeepSeek demonstrated cheaper AI training methods: "As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of." The implication is sobering. Every breakthrough in model efficiency, every reduction in inference cost, does not relieve pressure on infrastructure. It increases it. New applications become viable. Industries that could not justify the cost suddenly adopt AI. The total compute demand ratchets up, and the physical infrastructure must follow. Intelligence got cheap. Everything around it did not.
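The mechanism behind Jevons paradox can be made concrete with a toy demand model. A minimal sketch, assuming a constant-elasticity demand curve and an illustrative price elasticity of -1.5 for compute (the elasticity value is an assumption, not an empirical estimate): when demand is elastic, halving the cost per unit of inference more than doubles consumption, so total spend rises too.

```python
# Toy illustration of Jevons paradox: with price-elastic demand
# (|elasticity| > 1), a fall in unit cost raises both total consumption
# and total spend. All numbers are illustrative assumptions.

def demand(price, elasticity, base_demand=1.0, base_price=1.0):
    """Constant-elasticity demand curve: Q = Q0 * (P / P0) ** elasticity."""
    return base_demand * (price / base_price) ** elasticity

elasticity = -1.5                   # assumed: demand for compute is elastic
old_price, new_price = 1.0, 0.5     # cost per unit of inference halves

old_q, new_q = demand(old_price, elasticity), demand(new_price, elasticity)
old_spend, new_spend = old_price * old_q, new_price * new_q

print(f"Consumption rises {new_q / old_q:.2f}x")          # 0.5**-1.5 ≈ 2.83x
print(f"Total spend rises {new_spend / old_spend:.2f}x")  # ≈ 1.41x
```

With an inelastic curve (say elasticity -0.5) the same price cut would shrink total spend; the infrastructure bet described above is, in effect, a bet that demand for intelligence stays firmly on the elastic side.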
The view from small nations
For countries like Singapore, these dynamics create an acute strategic tension. Singapore has zero natural resources, limited land (just 728 square kilometers), and strict power consumption limits. Its National AI Strategy 2.0 and $27 billion in planned AI infrastructure investment signal serious ambition, but the country's ability to execute depends almost entirely on other nations building the physical layer. Singapore can invest in talent, governance frameworks, and AI applications. It can host data centers, though it recently lifted a construction moratorium and imposed strict efficiency requirements, including mandatory liquid cooling and power usage effectiveness (PUE) ratios of 1.3 or lower. But it cannot mine its own copper, generate its own baseload power at scale, or train electricians fast enough. This is not unique to Singapore. Most nations are in some version of the same position, dependent on a global supply chain that a handful of hyperscalers and resource-rich countries control. AI sovereignty, for the vast majority of the world, is a policy aspiration bumping up against physical reality.
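PUE is a simple ratio: total facility power divided by the power reaching the IT equipment. A cap of 1.3 means cooling and other overhead may add at most 30% on top of the IT load. A minimal sketch with illustrative load figures (the kilowatt numbers are assumptions, not from any real facility):

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# A PUE cap of 1.3 limits cooling-and-overhead power to 30% of the IT load.
# The load figures below are illustrative assumptions.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

it_load_kw = 10_000        # assumed IT load (servers, storage, networking)
overhead_kw = 2_800        # assumed cooling, power distribution, lighting

ratio = pue(it_load_kw + overhead_kw, it_load_kw)
print(f"PUE = {ratio:.2f}")                             # 1.28, within the cap
print("compliant" if ratio <= 1.3 else "non-compliant")
```

Air-cooled facilities in tropical climates historically ran well above 1.5, which is why Singapore's 1.3 requirement effectively mandates liquid cooling.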
Construction timelines are the real constraint
The common narrative about AI timelines focuses on model capabilities. When will we get AGI? When will reasoning models surpass human experts? These are interesting questions, but they may be the wrong ones. The more binding constraint is construction timelines. A gigawatt-scale data center takes years to plan, permit, and build. Transmission line upgrades can take a decade. New power plants, whether gas, nuclear, or renewable, face their own permitting and construction bottlenecks. The infrastructure needed to support the next generation of AI systems is not a software problem with a software timeline. It is a concrete-and-steel problem with a concrete-and-steel timeline. AI infrastructure is increasingly being treated as critical infrastructure, on par with power grids and telecommunications networks. The World Economic Forum has called for it to be governed as such. But critical infrastructure is, by definition, slow to build and difficult to scale quickly. The real reason AI timelines might slip is not that models are not smart enough. It is that we cannot pour concrete fast enough.
What this means
None of this is a reason for pessimism about AI itself. The technology works. The demand is real. The economic incentive to build is overwhelming. But it is a reason to take the physical world seriously. The AI revolution is not purely digital. It is an industrial transformation on a scale we have not seen since the postwar infrastructure boom, and possibly larger. It requires the same things every industrial transformation has required: raw materials, energy, skilled labor, patient capital, and time. The planet is not running out of intelligence. It is running out of the capacity to keep up with what intelligence demands.
References
- McKinsey & Company, "The $7 trillion data center build-out: How industrials can capture their share" (link)
- Reuters Breakingviews, "AI dreams crash into stark $7 trln reality," April 7, 2026 (link)
- Reuters, "Musk's xAI to invest over $20 billion in Mississippi data center," January 9, 2026 (link)
- Reuters, "Big Tech's $635 billion AI spending faces energy shock test, S&P Global says," March 31, 2026 (link)
- Morgan Stanley, "Powering AI: Energy Markets Race to Invest in AI Energy Solutions" (link)
- International Energy Agency, "AI and Energy" (link)
- International Copper Study Group, global refined copper market deficit forecast, 2026
- HPCwire, "AI Is Running Into a $7 Trillion Wall," April 8, 2026 (link)
- Data Center Dynamics, "AI Infrastructure is hitting physical limits, and efficiency is becoming critical" (link)
- NPR Planet Money, "Why the AI world is suddenly obsessed with Jevons paradox," February 4, 2025 (link)
- World Economic Forum, "It's time to start treating AI infrastructure as critical infrastructure," April 2026 (link)
- Smart Nation Singapore, "National AI Strategy" (link)
- Introl, "Singapore's $27B AI Infrastructure Boom" (link)