The problem with new AI startups
Every week there's a new AI startup announcement. A slick landing page, a waitlist, a thread on X with thousands of likes. The founders talk about "revolutionizing" something. Investors pile in. The hype machine spins up. And then, quietly, nothing happens. The problem with new AI startups isn't that AI is useless. It's that most of these companies aren't building anything real, and the noise around them is making everyone exhausted.
The wrapper epidemic
Most new AI startups follow the same playbook: take an existing large language model, wrap a UI around it, charge a subscription fee, and call it innovation. Srinivas Rao has called this the "wrapper problem." These companies don't own the intelligence layer. They rent it. One writer discovered that a $60/month podcast post-production tool could be replicated by calling the OpenAI API directly for under $4. There was no proprietary system, no unique infrastructure. Just a few hardcoded prompts stapled to a clean interface. If OpenAI changes its pricing, updates its API, or builds the same feature natively, these startups evaporate overnight. That's not a business model. That's a borrowed brain with a billing page.
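The wrapper economics above are easy to check with back-of-envelope arithmetic. A minimal sketch in Python, using assumed token counts and illustrative per-token prices (not actual OpenAI rates, which change over time), shows how a $60/month tool can cost well under $4 in raw API usage:

```python
# Back-of-envelope cost comparison for an LLM "wrapper" product.
# All numbers below are illustrative assumptions, not real OpenAI pricing.

PRICE_PER_1K_INPUT = 0.005   # assumed $ per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.015  # assumed $ per 1,000 output tokens

def episode_cost(input_tokens: int, output_tokens: int) -> float:
    """Raw API cost for processing one podcast episode."""
    return ((input_tokens / 1000) * PRICE_PER_1K_INPUT
            + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT)

# Assume a one-hour transcript (~15k tokens in) producing show notes,
# titles, and social posts (~5k tokens out), four episodes a month.
per_episode = episode_cost(15_000, 5_000)   # $0.075 + $0.075 = $0.15
monthly_api_cost = 4 * per_episode          # $0.60/month

subscription = 60.00
print(f"API cost/month: ${monthly_api_cost:.2f} vs subscription ${subscription:.2f}")
print(f"Markup: {subscription / monthly_api_cost:.0f}x")
```

Even with generous assumptions about usage, the gap between the raw inference cost and the subscription price is the entire business, which is exactly why it vanishes the moment the model provider moves.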
The hype-to-reality gap
The numbers tell a stark story. A 2025 MIT study found that 95% of organizations saw zero return on their generative AI investments, despite enterprise spending of $30 to $40 billion. Most companies are stuck in what researchers call "pilot purgatory," running experiments that never touch the bottom line. Meanwhile, a RAND study found that roughly 80% of all AI projects fail outright, often because organizations misunderstand what problem they're actually trying to solve. They start with the technology and work backwards, instead of starting with a real user need. Yet funding keeps flowing. AI startups raised approximately $130 billion in 2025, and startups marketing AI features raised 83% more capital than comparable non-AI startups. The gap between investment and actual results is widening, not shrinking.
The social media amplification problem
This is where things get especially toxic. On platforms like X, a cottage industry has emerged around AI hype. Founders, influencers, and self-proclaimed "AI thought leaders" post breathless threads about tools that barely work, capabilities that don't exist yet, and futures that may never arrive. The incentive structure is broken. Posting "this AI tool changed my life" gets engagement. Posting "I tested 20 AI tools and most were mediocre" does not. So the feed becomes a highlight reel of exaggerated claims, and anyone who pushes back gets drowned out. This creates a vicious cycle. New founders see what gets attention, so they optimize for hype over substance. Investors see the buzz and assume there's traction. Users sign up, get disappointed, and become more skeptical of the next thing. The signal-to-noise ratio collapses. As one LinkedIn post on AI hype burnout put it, the noise is creating three camps: enthusiasts who get dismissed as snake oil sellers, builders who can't find reliable information, and skeptics who write off the whole space entirely. Nobody wins.
AI fatigue is real
The backlash is already here. Reddit threads and Hacker News posts with titles like "I am tired of AI hype" and "Anyone else tired of AI hype?" are pulling in hundreds of comments from developers and professionals who are simply worn out. AI fatigue, as some have started calling it, is the mental and emotional exhaustion from overwhelming exposure to AI marketing, AI announcements, and the constant pressure to adopt. It shows up as skepticism, disengagement, and burnout. Every other product now claims to have an "AI Assistant" that will "revolutionize" some workflow, and the word has started to mean nothing. Google's own CEO, Sundar Pichai, acknowledged in late 2025 that there were "elements of irrationality" in the current AI boom. The World Economic Forum suggested that if 2025 was the year of AI hype, 2026 might be the year of AI reckoning.
What would actually be different
The problem isn't AI as a technology. AI is genuinely useful in specific, well-defined contexts. The problem is an ecosystem that rewards noise over substance, wrappers over infrastructure, and fundraising over profitability. What would a healthier AI startup landscape look like?
- Start with a real problem. Not "how can we use AI?" but "what painful problem exists, and can AI help solve it better than alternatives?" As Steve Jobs once said, you have to start with the customer experience and work backwards to the technology.
- Own something meaningful. Proprietary data, unique infrastructure, deep domain expertise. Something that doesn't disappear if your API provider changes terms.
- Be honest about limitations. Users respect companies that tell them what a product can't do. The bar is low right now, and honesty is a competitive advantage.
- Measure real outcomes. Not sign-ups, not waitlist numbers, not social media impressions. Revenue, retention, actual problems solved.
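That last point is the easiest to act on, because real outcomes are straightforward to compute once you log them. A minimal sketch (the data and field layout are hypothetical, just enough to show the idea) of cohort retention from raw activity records:

```python
# Hypothetical activity log: user id -> set of (year, month) pairs
# in which that user actually used the product.
activity = {
    "u1": {(2025, 1), (2025, 2), (2025, 3)},
    "u2": {(2025, 1)},                  # signed up, never came back
    "u3": {(2025, 1), (2025, 3)},
}

def retention(cohort_month, later_month):
    """Share of users active in cohort_month who were also active in later_month."""
    cohort = [u for u, months in activity.items() if cohort_month in months]
    if not cohort:
        return 0.0
    kept = [u for u in cohort if later_month in activity[u]]
    return len(kept) / len(cohort)

print(f"Month-2 retention: {retention((2025, 1), (2025, 2)):.0%}")
print(f"Month-3 retention: {retention((2025, 1), (2025, 3)):.0%}")
```

A waitlist number can't lie to you like this metric can't: either people came back or they didn't.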
The dot-com parallel
We've seen this movie before. In the late 1990s, adding ".com" to a company name was enough to attract investors. Startups with no business model bought Super Bowl ads. People became paper millionaires overnight. By 2001, the buildings along Highway 101 that once displayed flashy startup billboards stood empty. The AI boom has eerie parallels. Trillion-dollar valuations. Companies spending more on data centers than was spent on the Manhattan Project. A belief that this time, the rules of business don't apply. The technology survived the dot-com crash, and real businesses eventually emerged from the wreckage. The same will likely happen with AI. But the current wave of startups, the ones optimizing for hype over value, the ones renting intelligence and selling markup, most of them won't be around to see it. The people posting on X know this too. They just hope to cash out before the music stops.
References
- Srinivas Rao, "99% of AI Startups Will Be Dead by 2026, Here's Why" (Medium): https://skooloflife.medium.com/99-of-ai-startups-will-be-dead-by-2026-heres-why-bfc974edd968
- MIT NANDA / Axios, "AI investment led to zero returns for 95% of companies" (August 2025): https://www.axios.com/2025/08/21/ai-wall-street-big-tech
- RAND Corporation, AI project failure rates: https://www.hcamag.com/au/specialisation/hr-technology/over-promise-and-underdeliver-why-so-many-ai-projects-fail/534910
- Harvard Business Review, "Beware the AI Experimentation Trap" (August 2025): https://hbr.org/2025/08/beware-the-ai-experimentation-trap
- Wipfli, "AI hype is costing startups funding": https://www.wipfli.com/insights/articles/ai-hype-is-costing-startups-funding-whats-the-fix
- BBC, "Google boss says trillion-dollar AI investment boom has 'elements of irrationality'" (November 2025): https://www.bbc.com/news/articles/cwy7vrd8k4eo
- World Economic Forum, "AI paradoxes: Why AI's future isn't straightforward" (December 2025): https://www.weforum.org/stories/2025/12/ai-paradoxes-in-2026/
- Mark Surman / The Guardian, "The AI bubble will pop. It's up to us to replace it responsibly" (January 2026): https://www.theguardian.com/commentisfree/2026/jan/30/ai-bubble-mozilla
- MIT Technology Review, "What even is the AI bubble?" (December 2025): https://www.technologyreview.com/2025/12/15/1129183/what-even-is-the-ai-bubble/
- ISACA, "The Reality of AI: Oversold and Underdelivered" (2025): https://www.isaca.org/resources/news-and-trends/industry-news/2025/the-reality-of-ai-oversold-and-underdelivered
- Luca Lanziani, "AI Hype Burnout" (LinkedIn): https://www.linkedin.com/pulse/ai-hype-burnout-luca-lanziani-hce9f
- Geoffroy de Lestrange, "What to do against AI Fatigue in your company?" (Medium): https://geoffroydelestrange.medium.com/what-to-do-against-ai-fatigue-in-your-company-7397eebe38c6