What if AI can’t replace us?
They keep selling us the same story. Every keynote, every product launch, every breathless blog post: AI is coming for your job. It will code better than you, write better than you, think better than you. Subscribe now for $20 a month, then $100, then whatever they decide to charge next. The pitch hasn't changed since ChatGPT first went viral in late 2022. But the longer we wait, the more a different question starts to demand attention. What if AI can't actually replace us?
The story they keep telling
The narrative is always the same. A new model drops, benchmarks go up, and a wave of commentary follows: this is the one. This is the model that makes human workers obsolete. Goldman Sachs estimates 300 million jobs globally are "exposed" to automation by AI. PwC says up to 30% of jobs could be automatable by the mid-2030s. Headlines scream about the end of entry-level work.

But exposure is not replacement. And "could be automatable" is doing an enormous amount of heavy lifting in those sentences. Boston Consulting Group put it plainly in 2026: AI will reshape more jobs than it replaces. Their model shows 50% to 55% of US jobs will change because of AI, but most of those roles will still exist, just differently. Anthropic's own labor market research found no systematic increase in unemployment for workers in the most AI-exposed occupations since late 2022. The company building the models is telling us the models aren't eliminating jobs. So why does the story never change?
Follow the money
Because replacement is a better sales pitch than augmentation. If AI merely helps you do your job a little faster, that's worth maybe $20 a month to you. But if AI replaces entire departments, that's worth restructuring your whole company around it. That's worth enterprise contracts, infrastructure buildouts, and billions in compute spending.

The problem is that the economics are starting to crack. An Nvidia executive admitted in April 2026 that "the cost of compute is far beyond the costs of the employees" it's supposedly replacing. OpenAI is projecting its ChatGPT Plus subscriptions will drop by 80%, from 44 million in 2025 to 9 million in 2026. Its head of ChatGPT suggested that "unlimited" plans may not last, saying "there's no world in which pricing doesn't significantly evolve." Anthropic has been hiking enterprise prices, with some customers facing costs that could triple. The vendors keep raising prices, tightening usage limits, and shipping features that burn through more tokens, all while asking us to spend more. The industry even has a word for it now: tokenmaxxing.
The tokenmaxxing trap
Tokenmaxxing, the practice of consuming as many AI tokens as possible, has become the latest corporate performance metric. At Meta, tens of thousands of employees were ranked on leaderboards by how many tokens they burned through. Jensen Huang said on a podcast that if his $500K engineer wasn't consuming at least $250K worth of tokens, he'd be "deeply alarmed." But as data scientist Cassie Kozyrkov pointed out, measuring AI skill by token usage is like measuring a writer by keystrokes or a surgeon by incisions. You'll get more of the activity. You won't necessarily get better results.

The Register called it out directly: "Tokenmaxxing isn't an AI strategy." TechCrunch reported that while engineering managers see acceptance rates of 80% to 90% for AI-generated code, the real-world acceptance rate after revisions drops to between 10% and 30%. Developers are generating more code, then spending more time fixing it. This is the cycle. Use more AI. Spend more money. Generate more output. Clean up the mess. Repeat. And at every step, someone is selling you the next subscription tier.
The reliability problem hasn't gone away
The fundamental issue with replacing humans isn't capability on benchmarks. It's reliability in the real world. AI still hallucinates. It still makes confident mistakes that would get a human fired. A Forbes survey found that 43% of executives and 36% of business owners are specifically worried about AI generating inaccurate outputs. ISACA's 2025 industry analysis called AI "oversold and underdelivered," noting that organizations are discovering the road to meaningful AI adoption is far more complex and costly than anticipated. Another survey found that 62% of people say AI is overhyped.

This matters because most real-world work has consequences. When an AI gets a customer support answer wrong, the company brings humans back. When an AI writes code that passes a benchmark but fails in production, a developer has to debug it. When an AI drafts a legal document with hallucinated case law, a lawyer has to catch it. Reliability, liability, and trust aren't benchmarks you can optimize away. They're the reasons humans are still in the loop, and they might be the reasons humans stay in the loop.
The jobs that were supposed to vanish
Forbes reported that AI was cited in 25% of layoff announcements in 2026, up from 5% in 2025. That sounds alarming until you read the next line: the job cuts actually attributed to AI account for only about 5% of total US layoffs. And as the BBC noted, CEOs are under pressure to both cut costs and justify AI spending, so attributing layoffs to AI serves a convenient dual purpose. The Atlantic profiled CEOs at the 2025 Allen & Co. conference who admitted they felt trapped. Wall Street expects them to replace labor with AI. But they know the actual technology isn't ready for that. If they all ordered mass eliminations, the consequences would be enormous, for their workforces, for the country, and for their own reputations.

So what's actually happening? Companies are reorganizing, not eliminating. Surveys show about half of firms move affected workers into different roles. Many hire new people to work alongside the AI. Customer support is the closest thing to a success story for full replacement, and even there, companies often bring humans back when complexity rises or customer satisfaction drops.
The question we should be asking
I think we've been asking the wrong question for years. "When will AI replace us?" assumes a destination. It puts us on a timeline someone else defined, waiting for our own obsolescence. The better question is: what if it can't? Not "what if AI is useless," because it isn't. It's genuinely good at specific things. But what if the gap between "impressive demo" and "reliable replacement for a thinking human" is not a gap that closes with more compute and bigger models? What if it's a gap that gets more expensive to close the further you push? The scaling laws are showing diminishing returns. Models from 2024 and 2025 show smaller performance jumps despite massive expansions in training data and compute. The low-hanging fruit has been picked. And the problems that remain, the judgment calls, the context sensitivity, the accountability, those are exactly the problems that make humans valuable.
What this means for us
If AI can't replace us, then the entire value proposition shifts. AI stops being a threat and starts being what it probably always should have been: a tool. A very expensive, very impressive, occasionally unreliable tool. That doesn't make a great keynote. It doesn't justify $200 monthly subscriptions or trillion-dollar valuations. But it might be the truth. The longer we wait for the replacement that was promised, the more likely it is that it was never coming. Not because the technology isn't advancing, but because the thing they're trying to replace, human judgment in context, might be harder than anyone wanted to admit. They sold us a story. It's been years. The story hasn't changed. But the evidence has. Maybe it's time we stop subscribing.
References
- AI Will Reshape More Jobs Than It Replaces, Boston Consulting Group, 2026
- Labor Market Impacts of AI: A New Measure and Early Evidence, Anthropic Research
- How Will AI Affect the US Labor Market?, Goldman Sachs, March 2026
- Nvidia Executive: The Cost of AI Tools Is 'Far Beyond' the Cost of Human Workers, Fortune, April 2026
- OpenAI Projects ChatGPT Plus Subscriptions to Drop by 80%, Where's Your Ed At, April 2026
- OpenAI Is Rethinking ChatGPT Pricing, Business Insider, March 2026
- Tokenmaxxing Isn't an AI Strategy, The Register, April 2026
- 'Tokenmaxxing' Is Making Developers Less Productive Than They Think, TechCrunch, April 2026
- Tokenmaxxing: A Misguided Measure of AI Skill, Cassie Kozyrkov, LinkedIn
- The Reality of AI: Oversold and Underdelivered, ISACA, 2025
- AI's Promise Vs Reality And Why 62% Say It Is Overhyped, Forbes, October 2025
- The New AI Career Divide Is Already Starting to Show, Forbes, April 2026
- America Isn't Ready for What AI Will Do to Jobs, The Atlantic, March 2026
- The Human Advantage Strikes Back: Skills AI Can't Replace in 2026, Forbes, February 2026