Slop comes from humans, not AI
"Slop" was named Merriam-Webster's word of the year for 2025. Its new definition: "digital content of low quality that is produced usually in quantity by means of artificial intelligence." The entire internet seems to agree that AI is flooding our feeds with garbage, and that we need to push back against the machines.

But here's the thing. AI doesn't want anything. It doesn't decide to publish. It doesn't choose quantity over quality. It doesn't hit "post" without reading what it wrote. Every single piece of slop that reaches your screen got there because a human decided it was good enough to ship. Slop is a human problem. It always has been.
We had slop long before we had AI
Content farms existed a full decade before ChatGPT. Sites like Demand Media and Associated Content churned out thousands of low-quality articles per day, reverse-engineered from search queries and designed to rank, not to inform. The business model was simple: find what people are searching for, produce the cheapest possible answer, and collect the ad revenue. Clickbait headlines, listicles written by freelancers paid a few dollars per piece, engagement-bait videos on Facebook, algorithmically optimized YouTube thumbnails with exaggerated facial expressions. None of this required generative AI. It just required humans who prioritized volume over value. As one writer put it, "AI slop existed long before AI did. It's all the boring, committee-driven copycat filler that has been clogging up the pop charts forever." The pattern is the same whether a human or a machine produces the output: minimize effort, maximize reach, ignore quality.
The tool isn't the problem, the operator is
When someone copies and pastes a raw ChatGPT response into a blog post without reading it, that's not an AI failure. That's a human failure. The person chose not to edit. They chose not to think about whether the output was accurate, interesting, or worth anyone's time. They chose to skip the part of the process that actually matters. Research from the Wharton School found that recruiters who used high-quality AI assistance became lazier and exercised less of their own judgment. They missed brilliant applicants and made worse decisions than recruiters working with deliberately lower-quality AI, who stayed engaged. The AI didn't cause the bad outcomes. The humans responded to a powerful tool by switching off their brains.

This is the real dynamic behind most AI slop. The technology lowers the cost of production so dramatically that it becomes tempting to skip the curation step entirely. But curation was always the hard part, and the important part. A camera doesn't make you a photographer. A word processor doesn't make you a writer. And an LLM doesn't make you a thinker.
Slop is the absence of taste
The deeper issue isn't about AI at all. It's about taste, and the willingness to exercise it. Taste is the ability to look at something you've made and honestly assess whether it's good. It's the willingness to throw away a draft that doesn't work, even though producing it was easy. It's knowing the difference between "this exists" and "this is worth someone's attention." As Nitin Nohria wrote in The Atlantic, a great leader once told him: "I may not be a creative genius, but I've come to trust my taste." In an age where AI makes polished content cheap and accessible, that kind of judgment becomes the only real differentiator. Anyone can generate text. The question is whether you can tell if it's any good. Humans who lack taste produced slop with typewriters. They produced slop with WordPress. They produced slop with content farms. Now they produce slop with ChatGPT. The constant isn't the tool. It's the person who doesn't care enough to curate what they put into the world.
The label "AI slop" lets humans off the hook
Calling something "AI slop" is convenient because it externalizes the blame. It suggests the problem is technological, that if we could just detect and filter AI-generated content, quality would return. But this misses the point entirely. A Reddit post captures this well: "What people react to is not artificiality, but the absence of effort. People have always recognized this problem in human writing: unclear premises, undeveloped ideas, conclusions that do not follow from anything. AI did not invent this. It simply removes the social friction that used to hide it." Before AI, producing slop required at least a minimum of typing, of sitting in a chair and stringing words together. That friction acted as a natural filter, not because typing guaranteed quality, but because it imposed a cost. AI removed that cost, and suddenly the people who were always willing to ship low-effort work can do so at unprecedented scale. But the intent was always there. The tool just made it easier to act on.
What actually matters going forward
If slop is a human problem, then the solutions are human too.

First, take responsibility for what you publish. If you use AI to help you write, that's fine. But you are still the editor. You are still the person whose name is on it. Read what you're about to publish. Ask yourself if it's genuinely useful, interesting, or well-argued. If the answer is no, don't publish it.

Second, develop your taste. Read widely. Study work you admire. Learn to articulate why something is good or bad, not just whether it "feels" AI-generated. As one observer noted, "Calling something 'AI slop' without explanation does not improve standards. It lowers them." Vague dismissal is its own form of low-effort content.

Third, reward quality over volume. The incentive structures of social media and SEO have always rewarded quantity. That's not an AI problem, it's a platform design problem. Support creators who take the time to make something worthwhile, whether they used AI in the process or not.

The flood of low-quality content isn't going to stop because we get better at detecting AI. It will slow down when humans decide they care about what they put out into the world. That's always been true. The tools change, but the responsibility stays with us.
References
- Merriam-Webster, "Word of the Year 2025: Slop" https://www.merriam-webster.com/wordplay/word-of-the-year
- Wikipedia, "Content farm" https://en.wikipedia.org/wiki/Content_farm
- Dr. Ethan Hein, "AI slop predates AI" https://ethanhein.substack.com/p/ai-slop-predates-ai
- Fabrizio Dell'Acqua et al., research on AI-assisted recruiters, Wharton School https://bigthink.com/the-present/why-great-ai-produces-lazy-humans/
- Nitin Nohria, "Good Taste Is More Important Than Ever," The Atlantic (June 2025) https://www.theatlantic.com/technology/archive/2025/06/good-taste-ai/683101/
- Reddit, "The real issue behind 'AI slop'" https://www.reddit.com/r/ArtificialInteligence/comments/1pn9zio/the_real_issue_behind_ai_slop/
- Wikipedia, "AI slop" https://en.wikipedia.org/wiki/AI_slop
- Priceless Misc, "The Curation Comeback: Why Human Judgment Matters More Than Ever in the AI Age" https://pricelessmisc.com/taste-is-your-only-advantage/