The amount of capping in AI
Every few months, the AI industry holds a press conference disguised as a product launch. A CEO walks on stage, says something deliberately vague about being "on the path to AGI," shows a cherry-picked demo, and walks off to a standing ovation. The stock price goes up. The timeline gets pushed back quietly a few weeks later. Rinse and repeat. The amount of capping in AI has reached a level where it's hard to tell where the marketing ends and the delusion begins. "Capping," for the uninitiated, means lying, exaggerating, making things sound bigger than they are. And the AI industry has turned it into an art form.
"Just 100 more billion and we'll reach AGI"
The AGI goalpost has been moving for years. Sam Altman said AGI would arrive by 2025. Then it was 2027. Then he started saying AGI is "not a super useful term." Funny how the definition gets fuzzy right around the time the deadline arrives. A 2025 survey by the Association for the Advancement of Artificial Intelligence found that 76% of AI researchers believe it is "unlikely" or "very unlikely" that scaling up current AI approaches will yield AGI. The people actually building the models don't believe the pitch. But the pitch keeps getting made, because AGI isn't a technical milestone anymore. It's a fundraising narrative. OpenAI raised over $6.6 billion in late 2024 on the promise of building AGI. Nvidia pumps billions into OpenAI, which fills data centres with Nvidia chips, which inflates demand numbers, which justifies more investment. As NPR reported, these circular financing structures can make demand look artificially larger than it really is. Hedge fund investor Michael Burry called it out directly: "True end demand is ridiculously small. Almost all customers are funded by their dealers." The money isn't chasing a breakthrough. It's chasing the story of a breakthrough.
New model, same autocomplete
Every major model release follows the same script. Announce it as a paradigm shift. Show benchmarks that look impressive in isolation. Watch the discourse cycle through hype, disappointment, and rationalization within 72 hours. GPT-5 launched with massive expectations. Altman had called it a "PhD-level expert in anything." The response was essentially: more of the same. Yannic Kilcher, an AI researcher and YouTuber, put it bluntly: "The era of boundary-breaking advancements is over." MIT Technology Review called 2025 "a year of reckoning" for AI hype, noting that after years of companies presenting every product drop as a major breakthrough, reality started catching up. The improvements are real but incremental. Each new model is marginally better at the same things, not qualitatively different. It's the iPhone upgrade cycle applied to language models: a better camera sensor marketed as a revolution. TechCrunch reported in late 2024 that AI scaling laws are showing diminishing returns, forcing labs to change course. The straightforward approach of "make it bigger, make it better" is hitting a wall. The models are getting more expensive to train without proportional gains in capability. But the press releases haven't adjusted their tone. Cal Newport, writing in The New Yorker, asked the question nobody in the industry wants to hear: "What if AI doesn't get much better than this?" Not that it won't improve at all, but that the trajectory of improvement might flatten into something far less dramatic than what the hype demands.
Selling a delusion
The gap between what AI companies say and what AI actually does has become a case study in overpromising. When Altman posts cryptic images of the Death Star to hype a product launch, he's not communicating technical capability. He's manufacturing excitement for something that doesn't exist yet. Nobel Prize-winning economist Daron Acemoglu put it plainly: "These models are being hyped up, and we're investing more than we should." He added that while valuable AI technologies will emerge, "much of what we hear from the industry now is exaggeration." Even Altman himself acknowledged the disconnect, telling reporters in August 2025: "Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes." The CEO of the company leading the charge is telling you the hype is overblown, and the hype continues anyway. Researchers at LSE published a paper in early 2026 examining how AI is presented in public discourse and identified four dominant claims: that AI resembles human intelligence, that AI has agency, that AI will transform the economy, and that urgent action is required. The paper argues that all four are "problematic" from a technical standpoint, inaccurate on the evidence, and that their implications for democratic governance are being obscured by hype. The delusion isn't just that AI will do everything. It's that the current trajectory leads inevitably to that outcome. It doesn't. Progress is real, but it's not the exponential curve the pitch decks suggest.
AI will replace us (but it still can't)
This is the contradiction at the heart of the entire AI narrative. We're told AI is about to make entire professions obsolete, while simultaneously watching it struggle with tasks that a moderately competent intern could handle. A study by Upwork found that AI agents powered by top models from OpenAI, Google, and Anthropic failed to complete many straightforward workplace tasks on their own. MIT research found that 95% of businesses that tried using AI saw zero measurable value from it. An MIT study from April 2026 found that AI is advancing across the workforce more like a "rising tide" than a "crashing wave": work will change broadly and gradually, not through sudden job wipeouts. Ilya Sutskever, co-founder of OpenAI and one of the architects of modern deep learning, acknowledged in a 2025 interview that LLMs "generalize dramatically worse than people." They can learn to solve a thousand specific algebra problems without ever learning how to solve algebra problems in general. Andrew Ng, who founded Google Brain, said AGI expectations are overhyped and that real power lies in knowing how to use current AI tools effectively, not in waiting for some mythical general intelligence. The Dallas Federal Reserve published research in February 2026 showing that AI is simultaneously aiding and replacing workers, and that the distinction comes down to knowledge type. AI can replicate codified knowledge but struggles with tacit knowledge, the kind gained through experience. Wages are rising in roles that value tacit knowledge and experience, while roles heavy on codifiable tasks are seeing pressure. The technology is redistributing work upward along the experience curve, not eliminating it. Forbes reported in April 2026 that AI was the leading reason cited for cutting 60,000 jobs in March alone, responsible for 25% of all announced layoffs.
But as Sam Altman himself pointed out, many companies are "AI-washing" layoffs, using AI as a scapegoat for cuts that have nothing to do with the technology. When the CEO of OpenAI is telling you companies are lying about why they're firing people, maybe listen.
The capitalism part
The AI hype machine doesn't run on delusion alone. It runs on incentives. Every stakeholder in the chain benefits from the narrative being bigger than the reality. VCs need the next trillion-dollar market. Founders need the valuation premium that comes with "AI-powered" in the pitch deck. Public companies need the stock bump that comes from mentioning AI on earnings calls. Chip makers need the demand projections that justify building factories. And media outlets need the clicks that come from "AI will change everything" headlines. The four major hyperscalers are on track to spend over $650 billion on AI investments in 2026. Gartner forecasts worldwide AI spending to hit $2.52 trillion in 2026. To break even on 2025-2026 spending alone, the industry would need approximately $1 trillion in cumulative AI revenue, a figure that remains distant. The World Economic Forum published a piece in January 2026 acknowledging the "significant disparity between the trillions of dollars invested in AI infrastructure and the technology's translation into business, economic, and social value." They pointed to MIT's finding that 95% of AI projects are failing as evidence of this gap. J.P. Morgan's analysis noted the "circular nature" of AI commitments, where suppliers, customers, and investors overlap, and drew explicit comparisons to the late-1990s telecom bubble. Their conclusion was more measured than alarming, but the structural parallels are hard to ignore. This isn't a technology problem. It's a capital allocation problem dressed up as a technology narrative. The models are genuinely useful for specific tasks. But "genuinely useful for specific tasks" doesn't justify trillion-dollar infrastructure buildouts. The gap between what the technology does and what the capital markets need it to do is where all the capping lives.
The asymptotic dream
There's a feeling many people share but few articulate clearly: we seem to be getting ever closer to something transformative, yet the closer we get, the further away it seems. Like an asymptote, always approaching but never arriving. Sequoia Capital articulated a version of this as the "frontier paradox": AI is accelerating so quickly that once it reliably works, we stop calling it AI. It just becomes technology. Search engines are AI. Spam filters are AI. Autocomplete is AI. But nobody thinks of them that way anymore. The label "AI" perpetually refers to whatever is on the cutting edge, while everything that graduated to reliable utility becomes invisible infrastructure. This creates a permanent sense that AI is almost there but not quite, even as its cumulative impact on daily life continues to grow. We're surrounded by AI that works, but we only notice the AI that doesn't yet work reliably. The honest version of the AI story is less dramatic but more interesting. These are powerful pattern-matching tools with genuine utility for specific applications. They make some workers more productive. They automate some tasks well and others poorly. They are improving, but on a curve that is flattening, not steepening. They are nowhere close to general intelligence, and may never be, at least not through the current paradigm. But that story doesn't raise billions. That story doesn't move stock prices. That story doesn't generate breathless X threads or YouTube thumbnails with shocked faces. So instead, we get the capped version: AGI is coming, your job is disappearing, and the only question is whether you'll be on the right side of history. The amount of capping in AI isn't a side effect of the industry. It is the industry. The exaggeration is load-bearing. Remove it, and what's left is a useful but limited technology being deployed incrementally across the economy, which is fine, even good, but not the story anyone is selling.
References
- Association for the Advancement of Artificial Intelligence, "AAAI 2025 Panel Report on the Future of AI Research" (2025). Link
- NPR, "Here's why concerns about an AI bubble are bigger than ever" (November 2025). Link
- MIT Technology Review, "The great AI hype correction of 2025" (December 2025). Link
- TechCrunch, "AI Scaling Laws Are Showing Diminishing Returns, Forcing AI Labs to Change Course" (November 2024). Link
- Cal Newport, "What if A.I. Doesn't Get Much Better Than This?," The New Yorker (August 2025). Link
- Harvard Gazette, "Should U.S. be worried about AI bubble?" (December 2025). Link
- LSE European Politics and Policy, "Should you believe the AI hype? Probably not" (February 2026). Link
- Axios, "MIT study challenges AI job apocalypse narrative" (April 2026). Link
- Federal Reserve Bank of Dallas, "AI is simultaneously aiding and replacing workers, wage data suggest" (February 2026). Link
- Forbes, "Companies Cut 60,000 Jobs in March, And AI Is Largely to Blame" (April 2026). Link
- Gartner, "Worldwide AI Spending Will Total $2.5 Trillion in 2026" (January 2026). Link
- World Economic Forum, "Talk of an AI bubble is overblown" (January 2026). Link
- J.P. Morgan Asset Management, "Does circularity in AI deals warn of a bubble?" (2026). Link
- Sequoia Capital, "AI and the Frontier Paradox." Link
- TechPolicy.Press, "Most Researchers Do Not Believe AGI Is Imminent" (2025). Link