Why do people hate AI?
AI is everywhere. It's in your fridge, your mouse, your toothbrush, your search results, your email drafts, your coworker's slide decks, and your kid's homework. It has infiltrated nearly every product category and every corner of the internet in an astonishingly short amount of time.
And people are furious about it.
An NBC News poll from early 2026 found that only 26% of Americans have a positive view of AI, while 46% view it negatively. A Pew Research survey from 2025 showed Americans are far more concerned than excited about the increased use of AI in daily life. The backlash is loud, it's growing, and it's coming from all directions.
But why? AI is genuinely good at some things. It can write code, summarize documents, analyze data, draft emails, and accelerate knowledge work in meaningful ways. So what's driving the hatred?
I think it comes down to several overlapping frustrations, and understanding them matters if we want to build technology that actually serves people.
It's being shoved into everything
The most visceral complaint is simple: AI is being jammed into products where nobody asked for it. A mouse with AI features. A refrigerator with a chatbot. Toothbrushes, pillows, water bottles. The "AI-powered" label has become the new "blockchain-enabled," a marketing badge slapped onto things to justify a price hike or generate buzz.
This isn't innovation. It's noise. When companies add AI to products that worked perfectly fine without it, consumers don't feel excited. They feel annoyed. It signals that the company is chasing a trend rather than solving a real problem.
CNN reported in late 2025 that a backlash against AI was taking root specifically because of this kind of "slop," the AI-generated filler creeping into slide decks, social media feeds, news outlets, and even real estate listings. People can feel when something has been automated for the sake of automation, and they resent it.
The hype outpaces the reality
AI is in a bubble. That doesn't mean the technology is useless, but the gap between what's promised and what's delivered is enormous. Companies are spending billions on AI infrastructure while the actual consumer-facing products often feel half-baked, unreliable, or just unnecessary.
Sam Altman himself admitted in early 2026 that AI was spreading more slowly than he had expected. And yet the hype machine keeps running. Every earnings call, every product launch, every keynote is saturated with AI promises. People can smell the disconnect.
As one tech commentator put it, there are really only three reasonable positions on generative AI right now: it's fundamentally unsustainable, there will be some future breakthrough that justifies the hype, or it has a narrow set of genuinely valuable use cases but everything else is just marketing. Most people outside the tech industry land somewhere in the third camp.
Jobs are disappearing before AI even works
One of the deepest sources of anger is the job market. A Harvard Business Review survey of over 1,000 global executives in late 2025 found that companies are laying off workers because of AI's potential, not its actual performance. Hiring is slowing and positions are being eliminated in anticipation of what AI might do, even though many organizations haven't seen real productivity gains yet.
This is a uniquely cruel dynamic. People aren't losing their jobs to a machine that's better than them. They're losing their jobs to a bet that a machine will eventually be better than them. The Duolingo backlash was a perfect example: when news broke that the company was shifting to become "AI-first" and replacing contractors, public perception of the brand cratered almost overnight.
For creative professionals especially, the threat feels existential. Artists, writers, illustrators, and designers are watching their work get scraped into training datasets without consent, then seeing AI-generated outputs compete directly with them for contracts and attention.
The subscription creep
AI features are increasingly gated behind subscriptions or used to justify price increases. Cloud storage, productivity tools, email clients, design software: everything is getting more expensive because of "AI enhancements" that many users didn't request and don't want.
This creates a lose-lose dynamic: you pay more for features you don't need, or you downgrade and lose access to tools that worked fine before AI was bolted on. When the value proposition doesn't hold up, people feel like they're being squeezed.
The energy and resource problem
Training and running large AI models is extraordinarily resource-intensive. Data centers consume massive amounts of electricity and water, and the environmental cost of AI is becoming harder to ignore. While some industry leaders have pushed back on the numbers, the broader point stands: scaling AI at the current pace has real ecological consequences.
For a public already anxious about climate change and energy costs, the idea that significant resources are being diverted to power chatbots and image generators feels deeply misaligned with what matters.
Fake everything
AI has supercharged the creation of fake content. Deepfakes, synthetic text, AI-generated images passed off as real, bot accounts flooding social media. The information environment was already polluted before generative AI, and now it's getting worse at an accelerating rate.
A Pew survey found that Americans feel strongly about being able to tell whether content was made by AI or a human, yet most don't trust their own ability to spot the difference. That's a disorienting place to be. When you can't trust what you see, read, or hear online, the technology enabling that erosion becomes an easy target for resentment.
Notre Dame researchers found that social media platforms aren't doing enough to stop harmful AI bots; the team successfully launched test bots on every major platform they studied. The platforms are racing to integrate AI while struggling to contain the damage AI is already doing on their own services.
It might be making us dumber
There's a growing concern that relying on AI is eroding our cognitive abilities. An MIT Media Lab study reported that excessive reliance on AI-driven solutions may contribute to "cognitive atrophy" and a shrinking of critical thinking abilities. A Boston Consulting Group study found that workers constantly bouncing between multiple AI tools reported more decision fatigue and more errors.
In education, the picture is equally troubling. More than half the students in one university philosophy course used AI to write their final exams, producing confident-sounding nonsense about topics that had nothing to do with the actual course material.
When ChatGPT was asked whether AI can make us dumber or smarter, even it hedged: "It depends on how we engage with it: as a crutch or a tool for growth." That's a fair answer, but the early signs suggest most people are reaching for the crutch.
So what's really going on?
I think the hatred isn't about AI itself. It's about how AI is being deployed, who benefits, and who bears the cost.
People don't hate the idea of machines helping with tedious work. They hate being forced to pay more for worse products. They hate watching jobs disappear for speculative reasons. They hate the flood of synthetic garbage polluting the internet. They hate feeling like the technology is being imposed on them rather than offered to them.
The backlash is a signal. It's telling the tech industry that speed and scale aren't the same as value. That cramming AI into every product isn't innovation; it's laziness. That people want tools that respect their intelligence, their wallets, and their attention.
AI will survive this backlash. It's too useful at its core to disappear. But the companies and products that earn trust will be the ones that treat AI as a tool for genuine problems, not a marketing gimmick for everything.
The question isn't whether AI is good or bad. It's whether the people building it are listening.
References
- NBC News poll on AI sentiment (March 2026), via Fortune: https://fortune.com/2026/03/09/ai-opinion-poll-democrats-iran-war-president-donald-trump/
- Pew Research Center, "How Americans View AI and Its Impact on People and Society" (September 2025): https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/
- CNN Business, "Why 2026 could be the year of anti-AI marketing" (December 2025): https://www.cnn.com/2025/12/16/business/anti-ai-backlash-nightcap
- The New York Times, "People Loved the Dot-Com Boom. The A.I. Boom, Not So Much." (February 2026): https://www.nytimes.com/2026/02/21/technology/ai-boom-backlash.html
- Harvard Business Review, "Companies Are Laying Off Workers Because of AI's Potential, Not Its Performance" (January 2026): https://hbr.org/2026/01/companies-are-laying-off-workers-because-of-ais-potential-not-its-performance
- WIRED, "The AI Backlash Keeps Growing Stronger": https://www.wired.com/story/generative-ai-backlash/
- Harvard Gazette, "Is AI dulling our minds?" (November 2025): https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/
- CBS News, "Is AI productivity prompting burnout? Study finds new pattern of 'AI brain fry'": https://www.cbsnews.com/news/is-ai-productivity-prompting-burnout-study-finds-new-pattern-of-ai-brain-fry/
- Notre Dame News, "Social media platforms aren't doing enough to stop harmful AI bots" (October 2024): https://news.nd.edu/news/social-media-platforms-arent-doing-enough-to-stop-harmful-ai-bots-research-finds/
- The New York Times, "Why Even Basic A.I. Use Is So Bad for Students" (October 2025): https://www.nytimes.com/2025/10/29/opinion/ai-students-thinking-school-reading.html