A trillion dollars bought nothing
In February 2026, Goldman Sachs chief economist Jan Hatzius made a statement that should have rattled every boardroom in Silicon Valley: AI investment contributed "basically zero" to U.S. economic growth in 2025. Not a little. Not a disappointing amount. Zero. This landed in the middle of a spending frenzy that shows no sign of slowing down. Hyperscalers are projected to pour over $650 billion into AI capital expenditure in 2026 alone. Gartner forecasts worldwide AI spending will hit $2.5 trillion this year, a 44% jump from 2025. Morgan Stanley estimates nearly $3 trillion in AI-related infrastructure investment will flow through the global economy by 2028, with more than 80% of that spending still ahead. A trillion-dollar investment cycle with no measurable GDP impact. That number deserves unpacking.
The zero explained
Hatzius's claim sounds more dramatic than the underlying mechanics. The key insight is about accounting, not futility. Most of the hardware powering AI (chips from TSMC, memory from Samsung and SK Hynix) is manufactured overseas. When a U.S. company spends billions on GPUs, the investment shows up as a positive entry on the capital expenditure line, but it's offset almost entirely by a negative entry on the net-exports line. The money flows to Taiwanese and Korean GDP, not American. As Hatzius put it: "A lot of the AI investment that we're seeing in the U.S. adds to Taiwanese GDP, and it adds to Korean GDP but not really that much to U.S. GDP." Of the 2.2% U.S. GDP growth in 2025, only about 0.2 percentage points were attributable to AI investment after accounting for imports. Statistically, that rounds to noise. But there's a second, subtler problem. Even if the hardware were domestically produced, we still lack reliable methods to measure how AI usage among businesses and consumers contributes to economic output. The GDP framework was built for an economy of physical goods and billable services, not for tools that quietly make knowledge workers 30% faster at specific tasks.
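The offset is easiest to see with the expenditure identity GDP = C + I + G + (X − M): imported hardware raises I but subtracts the same value through M. A minimal sketch, using hypothetical numbers rather than actual BEA figures:

```python
# Why imported AI hardware largely nets out of measured GDP.
# GDP = C + I + G + (X - M): capex adds to I, but the imported
# portion subtracts through M. Numbers below are illustrative.

def gdp_contribution(capex: float, imported_share: float) -> float:
    """Net GDP contribution of AI capex when part of it is imported hardware."""
    investment = capex                      # adds to the I term
    net_exports = -capex * imported_share   # imported hardware subtracts via M
    return investment + net_exports

# If a firm spends $100B on GPUs and 90% of that value is imported,
# only ~$10B of the $100B shows up in domestic GDP.
domestic_impact = gdp_contribution(100e9, 0.9)
```

The `imported_share` figure is an assumption for illustration; the point is structural, not the specific percentage.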
The productivity paradox is back
This is not the first time a transformative technology produced a confusing gap between investment and measurable returns. Economists have a name for it: the productivity paradox. In 1987, Robert Solow famously quipped that "you can see the computer age everywhere but in the productivity statistics." Companies had been buying PCs and networking equipment for years, but economy-wide productivity gains didn't materialize until the late 1990s, roughly a decade after adoption began. The pattern with AI looks similar. A recent survey of nearly 6,000 executives across the U.S., Europe, and Australia found that while 70% of firms were actively using AI, about 80% reported no measurable impact on employment or productivity. McKinsey's 2025 global survey found that nearly nine out of ten organizations use AI regularly, but most haven't embedded it deeply enough to realize enterprise-level benefits. Yet the micro-level story is different. Goldman Sachs found that management teams that actually quantified AI-driven productivity impacts on specific tasks reported a median gain of around 30%. The problem is that these gains don't aggregate cleanly into macro data. They show up as a developer shipping code faster, an analyst producing reports in half the time, a support team handling more tickets. Useful, real, but invisible to GDP.
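A back-of-envelope calculation shows why a 30% task-level speedup barely registers in aggregate statistics. The arithmetic is Amdahl's-law-style: the gain is diluted by all the work AI doesn't touch. The 10% task share below is a hypothetical assumption, not a figure from the surveys above:

```python
# Why micro gains vanish in macro data: if AI speeds up only a slice
# of total work, the aggregate productivity gain is small.

def aggregate_gain(task_share: float, speedup: float) -> float:
    """Economy-wide productivity gain when only task_share of work speeds up."""
    new_time = (1 - task_share) + task_share / speedup  # total time after speedup
    return 1 / new_time - 1                             # output gain per unit time

# A 30% speedup (1.3x) on 10% of all work lifts aggregate output ~2.4%:
print(round(aggregate_gain(0.10, 1.3) * 100, 1))  # prints 2.4
```

Spread over several years of adoption, a gain of that size is easily lost inside normal GDP measurement noise.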
The dot-com parallel (and why it's incomplete)
The most obvious historical comparison is the dot-com era. Between 1995 and 2000, hundreds of billions flowed into internet infrastructure, fiber-optic cables, server farms, and startups with no revenue. The NASDAQ rose 572% before collapsing 78% from its peak. Trillions in investor wealth evaporated. But here's what's easy to forget: the infrastructure survived the bust. The fiber-optic cables stayed in the ground. The server farms kept running. The protocols and standards built during the mania became the foundation for Google, Amazon Web Services, Facebook, and the entire modern internet economy. The spending wasn't wasted; it was just early. The parallel to AI is tempting but incomplete. The dot-com bubble was largely about startups with no business models burning venture capital. The current AI spending cycle is dominated by the most profitable companies in history, Microsoft, Google, Amazon, Meta, deploying capital from actual cash flows. They're not gambling on hope. They're building infrastructure they believe will be as foundational as cloud computing. That said, the parallel holds in one important way: the lag between infrastructure investment and economic return is real, and it's uncomfortable. Goldman Sachs's own 2023 forecast projected that AI would begin having a measurable impact on U.S. GDP and labor productivity starting in 2027, not 2025. By that timeline, we're still in the infrastructure phase.
Jevons paradox and the expanding bill
There's another dynamic worth understanding. When DeepSeek released its efficient open-source model in early 2025, some analysts predicted AI spending would fall as companies could do more with less compute. Microsoft CEO Satya Nadella had a different take: "Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of." Jevons paradox, named after the 19th-century economist William Stanley Jevons, holds that when a resource becomes more efficient to use, total consumption tends to increase rather than decrease. Jevons observed this with coal in Victorian England: more efficient steam engines didn't reduce coal consumption, they made coal-powered industry viable in more contexts, driving demand higher. The same logic applies to AI compute. Cheaper inference doesn't mean smaller cloud bills. It means companies deploy AI to more tasks, more users, more edge cases. The unit cost drops, but the total bill climbs. This is why projections keep getting revised upward. Analyst estimates have consistently underestimated AI capital expenditure, and the gap between forecast and actual spending keeps widening. This creates a peculiar situation: the spending isn't irrational, but it also can't be justified by current returns. The bet is that the returns will come, and that the companies that built the infrastructure early will capture them.
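The Jevons dynamic has a simple economic formulation: when demand is elastic (price elasticity greater than 1), a falling unit cost raises total spending rather than lowering it. A toy sketch with constant-elasticity demand and entirely hypothetical numbers:

```python
# Jevons paradox in miniature: with elastic demand, cheaper compute
# means a bigger total bill, not a smaller one. Numbers are illustrative.

def total_spend(unit_cost: float, base_demand: float, elasticity: float) -> float:
    """Total spend under constant-elasticity demand: demand grows as a
    power of the cost reduction relative to a baseline unit cost of 1.0."""
    demand = base_demand * (1.0 / unit_cost) ** elasticity
    return unit_cost * demand

# Unit cost halves; with elasticity 2, demand quadruples and spend doubles.
before = total_spend(1.0, 100, 2)  # 100.0
after = total_spend(0.5, 100, 2)   # 200.0
```

With elasticity below 1 the bill would shrink instead; the bet implicit in rising capex forecasts is that demand for AI compute sits firmly on the elastic side.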
Who benefits from the "wasted" narrative
It's worth asking who benefits from framing a trillion dollars of AI spending as waste. Short sellers have an obvious incentive to amplify the "zero return" story. So do incumbents whose budgets are being cannibalized by AI investments they didn't request. Regulators looking for reasons to scrutinize big tech find convenient ammunition in the gap between spending and results. But the counter-narrative also serves powerful interests. The companies doing the spending need investors to stay patient. The "this is just like the early internet" framing gives them a decade-long runway before anyone can fairly call the bet a failure. The honest answer is probably somewhere in the middle. Some of the spending is genuinely productive infrastructure that will compound in value. Some of it is AI-washing: companies rebranding existing products with an AI label to justify premium pricing. Some of it is demo-ware: impressive prototypes that never survive contact with production workloads. And some of it is defensive: companies spending not because they've found ROI, but because they're terrified of falling behind competitors who might.
The trillion isn't gone
The most important distinction in this entire debate is between "no return" and "no return yet." The trillion dollars hasn't vanished. It's sitting in data centers across Virginia, Texas, and Iowa. It's in the training runs that produced models now being integrated into enterprise software. It's in the tooling, the APIs, the fine-tuning infrastructure that's slowly making AI useful for specific, measurable tasks. Morgan Stanley's research notes that 21% of S&P 500 companies now cite AI benefits, and the adopters delivering measurable results are seeing cash flow margin expansion at roughly twice the global average. That's not nothing. It's just not enough to move the macro needle, not yet. Goldman's own projection still holds: AI could increase U.S. productivity growth by 1.5 percentage points annually, assuming widespread adoption over a 10-year period. Fewer than 20% of U.S. establishments currently use AI for any business function. The adoption curve hasn't even hit its inflection point. The trillion dollars bought nothing, if you're measuring by 2025 GDP. But GDP is a rearview mirror. It measures what already happened, not what's being built. The real question isn't whether the money was wasted. It's whether the right people are building on top of the infrastructure it paid for, and whether the returns will arrive before patience runs out.
References
- AI contributed 'basically zero' to the US economy last year, according to Goldman Sachs, Yahoo Finance, February 2026
- AI Added 'Basically Zero' to US Economic Growth Last Year, Goldman Sachs Says, Gizmodo, February 2026
- Why AI Companies May Invest More than $500 Billion in 2026, Goldman Sachs, December 2025
- Goldman finds no meaningful relationship between AI and productivity but a 30% boost for 2 specific use cases, Fortune, March 2026
- Visualising AI spending: How does it compare with history's mega projects?, Al Jazeera, February 2026
- AI Market Trends 2026: Global Investment, Risks, and Buildout, Morgan Stanley, 2026
- The State of AI: Global Survey 2025, McKinsey, 2025
- How much did AI boost the economy? Maybe zilch, some economists say, The Washington Post, February 2026
- Why the AI world is suddenly obsessed with Jevons paradox, NPR Planet Money, February 2025
- AI investment forecast to approach $200 billion globally by 2025, Goldman Sachs, August 2023