The $2.5 trillion AI spending question
Gartner forecasts $2.52 trillion in worldwide AI spending for 2026, a 44% jump from the year before. That is not a typo. In a single year, the world will pour more money into artificial intelligence than the GDP of most nations. The number demands a closer look, not at whether AI spending is big, but at where the money actually flows, who captures the value, and whether the logic holding it all together is as sound as it appears.
Where the money goes
Gartner's breakdown reveals a spending profile that is heavily tilted toward infrastructure. Of the $2.52 trillion total, $1.37 trillion (roughly 54%) lands in AI infrastructure: servers, networking, storage, and the data centers that house them. AI services (consulting, integration, managed services) account for another $589 billion. AI software takes $452 billion. Everything else (models, platforms, cybersecurity, data tooling) fills in the remaining sliver.

The infrastructure dominance is striking. Spending on AI-optimized servers alone is forecast to grow 49% in 2026, driven by the buildout of GPU clusters and AI-ready data centers. This is not companies buying AI to use it. This is companies building the substrate so that AI can be used, at scale, later.

Within infrastructure, chips are the single largest cost center. McKinsey estimates that roughly 60% of data center spending goes to chips and computing hardware. That means TSMC, the foundry manufacturing GPUs for Nvidia, Broadcom, AMD, and others, sits at the very heart of the spending wave. Nvidia's position is even more concentrated: Jensen Huang has estimated $3 to $4 trillion in total AI infrastructure spending by the end of the decade, and Nvidia's GPUs remain the default choice for training and inference workloads.
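The arithmetic is easy to sanity-check. Here is a minimal sketch in Python that recomputes the category shares from the figures quoted above; note that the "everything else" remainder is derived here by subtraction, not quoted by Gartner directly.

```python
# Gartner's 2026 AI spending forecast, figures in billions of dollars.
TOTAL = 2520
categories = {
    "infrastructure": 1370,  # servers, networking, storage, data centers
    "services": 589,         # consulting, integration, managed services
    "software": 452,
}
# The remaining sliver (models, platforms, cybersecurity, data tooling)
# is not broken out above, so we derive it by subtraction.
categories["everything else"] = TOTAL - sum(categories.values())

for name, spend in categories.items():
    print(f"{name}: ${spend}B ({spend / TOTAL:.0%})")
```

Running this reproduces the roughly 54% infrastructure share, with services near 23%, software near 18%, and about $109 billion left over for everything else.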
The hyperscaler bet
The four hyperscalers (Microsoft, Alphabet, Amazon, and Meta) are on track to spend upward of $650 billion on AI investments in 2026, according to Bridgewater Associates. That is up from roughly $410 billion in 2025 and $230 billion in 2024. Wall Street has consistently underestimated these numbers: Goldman Sachs noted that consensus forecasts predicted 19% growth in hyperscaler capex for both 2024 and 2025, while actual growth came in at 54% and 64% respectively.

These companies are not spending out of charity. They are racing to lock in customers on their cloud platforms, where AI workloads create long-term switching costs. Microsoft bundles Copilot into its Office and Azure ecosystem. Google embeds Gemini across Workspace and Cloud. Amazon integrates Bedrock into AWS. The playbook is familiar from the cloud wars of the 2010s: subsidize infrastructure now, harvest margins later.

But Bridgewater's Greg Jensen has cautioned that the AI boom has entered a "more dangerous phase," marked by exponentially rising investments in physical infrastructure and growing reliance on outside capital. The gap between investment and revenue is widening, not narrowing.
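Bridgewater's rounded figures imply growth rates far above the roughly 19% consensus Goldman describes. A short sketch, using only the numbers quoted above (the implied rates are approximate, since the inputs are rounded to the nearest $10 billion):

```python
# Hyperscaler AI capex per Bridgewater, in billions of dollars per year.
capex = {2024: 230, 2025: 410, 2026: 650}

def yoy_growth(prev: float, curr: float) -> float:
    """Year-over-year growth as a fraction (0.5 means +50%)."""
    return curr / prev - 1

# Implied growth: roughly +78% into 2025 and +59% into 2026, both far
# above the ~19% per year that Wall Street consensus had penciled in.
for year in (2025, 2026):
    print(year, f"{yoy_growth(capex[year - 1], capex[year]):+.0%}")
```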
Jevons paradox, or why cheaper AI might mean more spending
In 1865, economist William Stanley Jevons observed something counterintuitive about coal. As steam engines became more efficient, consuming less coal per unit of work, total coal consumption increased. Cheaper energy per task meant more tasks. Efficiency did not reduce demand. It unleashed it.

The AI industry is leaning hard on this idea. When DeepSeek demonstrated that capable models could be trained at a fraction of the cost of frontier labs, the stock market panicked. But Microsoft CEO Satya Nadella responded by invoking Jevons: "As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of."

The logic is seductive. If inference costs drop 75% in a year (which they roughly have for many model providers), then companies can afford to embed AI into workflows they never would have considered before. Customer support, code review, document summarization, real-time translation: all become economically viable at scale. One prompt leads to another. Token costs crater, but total token consumption explodes.

There is real evidence for this pattern. Ramp's analysis of customer spending found that AI infrastructure spending increases proportionally with spending on closed-source models, suggesting the two are complements, not substitutes. Companies are not choosing between building their own and buying from OpenAI. They are doing both.

But the Jevons framing has limits. The original paradox applied to a resource (coal) powering an economy in the early stages of industrialization, where latent demand was enormous and new applications were being invented constantly. Modern economists generally find that rebound effects in mature markets tend to be modest, not paradoxical. The question for AI is whether we are closer to 1865 England or to a mature energy market. The honest answer is that nobody knows yet.
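The rebound arithmetic behind the Jevons argument is simple enough to state directly. Total spend is unit cost times usage volume, so a 75% drop in unit cost only grows total spending if usage rises more than 4x. The figures below are illustrative, not drawn from Gartner or Ramp:

```python
# Minimal sketch of the rebound effect: total spend = unit cost x volume.
def total_spend_ratio(cost_ratio: float, usage_multiplier: float) -> float:
    """New total spend relative to old, given the change in both factors."""
    return cost_ratio * usage_multiplier

# A 75% cost drop means the new unit cost is 0.25x the old one.
print(total_spend_ratio(0.25, 3.0))  # usage up 3x: total spend shrinks to 0.75x
print(total_spend_ratio(0.25, 4.0))  # usage up 4x: total spend exactly flat
print(total_spend_ratio(0.25, 6.0))  # usage up 6x: total spend grows to 1.5x
```

In rebound-effect terms, the Jevons case for AI is the claim that the usage multiplier consistently exceeds the reciprocal of the cost drop. The modest rebounds economists measure in mature markets correspond to multipliers well below that threshold.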
Who actually captures the value
Follow the money upstream and you find a surprisingly narrow set of winners.

Chip designers and foundries. Nvidia, Broadcom, and AMD design the GPUs and accelerators. TSMC manufactures them. ASML builds the lithography machines that make TSMC's fabrication possible. This supply chain is concentrated, high-margin, and capacity-constrained. Every dollar of hyperscaler capex flows through it.

Hyperscalers themselves. Microsoft, Google, Amazon, and Meta are both the biggest spenders and among the biggest beneficiaries. They are building infrastructure that they rent back to enterprises at a markup. The cloud margin structure means that even if individual AI workloads are low-margin today, the platform lock-in creates durable revenue streams.

Services and consulting firms. The $589 billion AI services category reflects the reality that most enterprises cannot deploy AI on their own. They need help with integration, change management, data pipelines, and custom model development. This is where firms like Accenture, Deloitte, and a growing ecosystem of AI-native consultancies are positioning themselves.

Notably absent from the winner's circle, at least so far, are the enterprises doing the spending. Gartner itself places AI in the "Trough of Disillusionment" for 2026, noting that "the improved predictability of ROI must occur before AI can truly be scaled up by the enterprise." Most enterprise AI deployments are still in pilot or early production stages. The spending is real. The returns are largely still theoretical.

The Atlantic reported that Nvidia has struck more than 50 deals in which AI companies effectively pay for chips by handing over equity or a share of future profits. This circular financing pattern, where the chip supplier funds its own customers, echoes dynamics from previous technology bubbles. It does not necessarily mean the bubble will burst, but it does mean the current spending is partially self-referential.
The infrastructure boom pattern
This is not the first time the technology industry has made a massive infrastructure bet ahead of proven demand.

In the late 1990s, telecom companies laid enormous amounts of fiber-optic cable during the dot-com boom. When the bubble burst, many of those companies went bankrupt. But the fiber remained in the ground, and it eventually powered the broadband internet that enabled the next generation of technology companies.

In the early 2010s, cloud computing followed a similar arc. Amazon, Microsoft, and Google invested billions in data centers before most enterprises were ready to migrate. It took years for cloud revenues to justify the capital outlay. But the companies that built the infrastructure captured the platform layer, and they have dominated enterprise technology ever since.

The AI infrastructure buildout of 2025 and 2026 rhymes with both of these episodes. The capital is flowing before the demand is fully proven. The companies building the infrastructure are, by and large, the same ones that won the cloud wars. And the bet is the same: whoever controls the compute layer will control the value chain. The key question is whether the demand curve catches up before patience (and capital) runs out. Bridgewater's warning about a "more dangerous phase" reflects the concern that exponentially rising fixed costs and growing reliance on outside financing create fragility, even if the long-term thesis is correct.
The FOMO factor
Not all of the $2.5 trillion represents rational investment. Gartner's own analysts acknowledge this tension. John-David Lovelock noted that "organizations with greater experiential maturity and self-awareness are increasingly prioritizing proven outcomes over speculative potential," a diplomatic way of saying that many organizations are spending on AI because they feel they have to, not because they have a clear plan for ROI.

This is the FOMO-driven budget problem. Board-level pressure to "have an AI strategy" is real. The fear of being left behind is acute. And technology vendors are more than happy to sell into that fear. When AI is in the Trough of Disillusionment, Gartner notes, it will most often be sold to enterprises by incumbent software providers rather than bought as part of a new project with clear objectives. In other words, a significant portion of enterprise AI spending is being bundled into existing contracts, not pulled by demand.

The historical pattern suggests that some of this spending will be clawed back. After the initial enthusiasm fades and pilot projects fail to deliver measurable returns, CFOs will scrutinize AI line items more carefully. The 18-month question, whether today's AI budgets survive their first renewal cycle, is a real one.
What to watch
Three signals will determine whether the $2.5 trillion thesis holds up.

Enterprise ROI evidence. The spending can only sustain itself if companies start showing concrete returns. Look for case studies that move beyond "we saved X hours" and into "we grew revenue by Y%" or "we reduced costs by Z% at scale." Until those stories become common, the spending rests on faith.

Inference cost trajectories. If inference costs continue to drop rapidly, the Jevons argument gets stronger. Cheaper AI means more use cases become viable, which means more spending on infrastructure to support those use cases. But if cost declines plateau, the demand explosion may stall.

Capital market tolerance. Much of the current infrastructure buildout is financed by retained earnings from cloud and advertising businesses, but the marginal dollar is increasingly coming from debt and equity markets. If interest rates rise or investor sentiment shifts, the capital pipeline could tighten quickly. Nvidia's equity-for-chips deals are an early signal of how creative the financing is becoming.

The $2.5 trillion AI spending figure is not just a number. It is a statement of collective belief about the future of computing. Some of that belief will be vindicated. Some of it will be written off. The challenge, as always, is figuring out which is which before the cycle turns.
References
- Gartner, "Worldwide AI Spending Will Total $2.5 Trillion in 2026," January 2026.
- Bridgewater Associates via Reuters, "Big Tech to invest about $650 billion in AI in 2026," February 2026.
- Goldman Sachs, hyperscaler capex forecast analysis, cited in The Motley Fool, February 2026.
- McKinsey, data center spending allocation estimates, cited in The Motley Fool, February 2026.
- Ramp, "Inside the rapidly growing, and surprisingly narrow, AI infrastructure market," 2026.
- NPR Planet Money, "Why the AI world is suddenly obsessed with Jevons paradox," February 2025.
- William Stanley Jevons, The Coal Question, 1865.
- The Atlantic, "Something Ominous Is Happening in the AI Economy," December 2025.
- TechCrunch, "The billion-dollar infrastructure deals powering the AI boom," February 2026.
- Gartner, "Worldwide IT Spending to Grow 10.8% in 2026, Totaling $6.15 Trillion," February 2026.