The AI hype problem
Every other startup pitch now starts with "we're using AI to..." and ends with a fundraise. Scroll through X or YouTube and you'll find breathless threads about how AI is about to replace every job, cure every disease, and achieve superintelligence by next Tuesday. Most of it is noise. And the noise is doing real damage.
The AI hype problem isn't just annoying. It's distorting how people understand the technology, inflating expectations beyond what's deliverable, and scaring people into thinking we're on the verge of something we're nowhere close to.
The startup gold rush
A huge chunk of the current AI startup ecosystem is built on hype, not substance. Y Combinator batches are overflowing with AI pitches, and the pattern is almost always the same: take a good idea, slap an AI label on it, and ride the wave to funding. The underlying product? Often just a wrapper around the OpenAI or Anthropic API with a nicer interface.
As one Reddit user in r/learnmachinelearning put it, most AI agent startups are "just wrappers around OpenAI or Anthropic APIs with a nicer UI. Zero moat, zero differentiation." The moment the underlying models get cheaper or add native features, these companies are toast.
The numbers back this up. An estimated 94.5% of YC startups never reach unicorn status. And when it comes to AI-specific ventures, the outlook is even bleaker. Joe Procopio, writing in Entrepreneurship Handbook, described how multiple AI startups had recently come to him with unfixable problems: not because their AI broke, but because their entire business models were built on hype that was starting to unravel.
The problem isn't that people are building with AI. The problem is that "AI-powered" has become a magic phrase that unlocks funding regardless of whether the product solves a real problem.
The hype machine on social media
X, YouTube, Reddit, LinkedIn: all of them are overflowing with unverified claims about what AI can do. Influencers post dramatic demos that cherry-pick the best outputs. Founders share revenue screenshots with zero context. And the algorithm rewards the most extreme takes, so the loudest voices win.
This creates a deeply distorted picture. People who don't work in tech see these posts and think AI is either a miracle technology that will solve everything or an existential threat that will destroy everything. Neither is true.
MIT Technology Review called 2025 "a year of reckoning" for AI hype. After years of companies presenting every product drop as a major breakthrough, the reality started catching up. GPT-5 launched with massive expectations; Sam Altman had called it a "PhD-level expert in anything." The response was essentially: more of the same. YouTuber and AI researcher Yannic Kilcher summed it up bluntly: "The era of boundary-breaking advancements is over."
The social media hype machine doesn't just mislead consumers. It misleads founders, investors, and policymakers too. When everyone is shouting about exponential progress, it becomes harder to have honest conversations about what the technology actually does well and where it falls short.
OpenAI and the expectations gap
OpenAI deserves special attention here. Not because the company is bad, but because the gap between what's promised and what's delivered has become a case study in overhype.
OpenAI has done genuinely impressive work. ChatGPT changed how millions of people interact with technology. Their reasoning models introduced a real paradigm shift. But the rhetoric around these achievements has consistently outpaced reality.
When Sam Altman posts cryptic images of the Death Star to hype a product launch, he's not communicating technical capability. He's manufacturing excitement. And when that excitement meets a product that's iterative rather than revolutionary, the backlash is inevitable.
Nobel Prize-winning economist Daron Acemoglu put it plainly: "These models are being hyped up, and we're investing more than we should." He added that while valuable AI technologies will emerge in the next decade, "much of what we hear from the industry now is exaggeration."
Even Altman himself has acknowledged the disconnect. He told reporters in August 2025: "Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes."
The circular deals don't help either. Nvidia pumps $100 billion into OpenAI, which then fills data centers with Nvidia chips. As NPR reported, these structures can artificially inflate the appearance of demand. Hedge fund investor Michael Burry, famous for predicting the 2008 housing crisis, has called this out directly: "True end demand is ridiculously small. Almost all customers are funded by their dealers."
Fear-mongering is the other side of the coin
Overhyping AI doesn't just create unrealistic optimism. It also fuels fear. When people hear that AI is about to achieve superintelligence or replace all human workers, the natural response is anxiety. And that anxiety is largely unfounded given where the technology actually stands.
The reality? AI can't reliably write bug-free code. A study by Upwork found that AI agents powered by top models from OpenAI, Google, and Anthropic failed to complete many straightforward workplace tasks on their own. Research from MIT found that 95% of businesses that tried using AI saw zero measurable value from it.
Even Ilya Sutskever, co-founder of OpenAI and one of the architects of modern deep learning, has acknowledged the limitations. In a November 2025 interview, he pointed out that LLMs "generalize dramatically worse than people." They can learn to solve a thousand specific algebra problems, but they haven't learned how to solve any algebra problem.
We are nowhere near AGI. The models are impressive pattern matchers with genuine utility, but treating them as nascent superintelligences does everyone a disservice. It scares workers who don't need to be scared, inflates investment bubbles, and distracts from the real, practical progress being made.
What honest AI discourse looks like
None of this means AI isn't useful or important. It is. Chatbots genuinely help non-experts with everyday tasks. AI coding assistants make developers more productive. Image and video generation tools have created entirely new creative workflows.
But useful and world-ending are not the same thing. The technology is a few years old and still largely experimental. As AI researcher Andrej Karpathy has noted, chatbots are better than the average human at many things, but they're not better than an expert human at anything. That's a meaningful distinction that gets lost in the hype.
The path forward requires honesty:
- Founders should build products that solve real problems, not pitch decks designed to capitalize on a buzzword.
- Investors should fund companies with genuine technical moats, not API wrappers with nice UIs.
- Media and influencers should verify claims before amplifying them.
- The rest of us should treat AI as what it is: a powerful but imperfect tool, not a deity or a demon.
The AI hype problem is ultimately a communication problem. The technology is real. The progress is real. But the narrative around it has become so detached from reality that it's doing more harm than good. A correction isn't just overdue; it's necessary for AI to actually fulfill its potential.
References
- MIT Technology Review, "The great AI hype correction of 2025," December 2025. https://www.technologyreview.com/2025/12/15/1129174/the-great-ai-hype-correction-of-2025/
- NPR, "Here's why concerns about an AI bubble are bigger than ever," November 2025. https://www.npr.org/2025/11/23/nx-s1-5615410/ai-bubble-nvidia-openai-revenue-bust-data-centers
- Joe Procopio, "As AI Hype Dies Down, Startups Built on AI Hype Are Imploding," Entrepreneurship Handbook, September 2025. https://ehandbook.com/as-ai-hype-dies-down-startups-built-on-ai-hype-are-imploding-296b73a95a81
- Harvard Gazette, "Should U.S. be worried about AI bubble?" December 2025. https://news.harvard.edu/gazette/story/2025/12/should-u-s-be-worried-about-ai-bubble/
- PBS NewsHour, "What's next for AI and has its explosive growth in 2025 created a bubble?" December 2025. https://www.pbs.org/newshour/show/whats-next-for-ai-and-has-its-explosive-growth-in-2025-created-a-bubble
- LessWrong, "Generative AI is not causing YCombinator companies to grow faster," 2025. https://www.lesswrong.com/posts/hxYiwSqmvxzCXuqty/generative-ai-is-not-causing-ycombinator-companies-to-grow