The state of AI slop
In December 2025, Merriam-Webster named "slop" its Word of the Year, defining it as "digital content of low quality that is produced usually in quantity by means of artificial intelligence." It was a fitting capstone to a year that saw AI-generated content flood every corner of the internet, from social media feeds to search results to the books on Amazon's bestseller lists. But slop isn't just a nuisance. It's reshaping the economics of attention online, degrading the training data that future AI models depend on, and forcing a fundamental rethink of how we distinguish real from fake. Here's where things stand.
The scale of the problem
The numbers are staggering. An Ahrefs study of nearly 900,000 newly created web pages in April 2025 found that 74.2% contained detectable AI-generated content. A separate analysis by Graphite, an SEO firm, found that by late 2024 more than half of newly published English-language articles were primarily AI-written, a trend that only accelerated into 2025. And research from WINS Solutions showed that the share of AI-written pages in Google's top 20 results climbed from 11.1% to 19.6% between May 2024 and July 2025. The web is not just getting bigger. It's getting synthetic.

Social media hasn't been spared. A CNET study found that 94% of US adults who use social media believe they encounter AI-generated content while scrolling. Only 11% found it entertaining, useful, or informative. The rest? Mostly annoyed.
What slop actually looks like
AI slop takes many forms. On social media, it's the uncanny "heartwarming" images of children building impossible sculptures, the plastic-looking AI portraits, and the brainrot videos where cartoon characters spout nonsensical catchphrases. On the web, it's the explosion of templated content farms that churn out thousands of articles designed to game search algorithms.

But slop has crept into more consequential domains too. Cybersecurity researchers have reported being overwhelmed by AI-generated fake vulnerability reports submitted to bug bounty programs. Courts have dealt with lawyers who submitted AI-hallucinated case citations. College admissions offices are drowning in AI-written essays that all sound eerily similar. The through-line is the same: content generated at scale with little human oversight, prioritizing volume over substance.
The model collapse problem
Perhaps the most alarming long-term consequence of AI slop is what researchers call "model collapse." A landmark study published in Nature demonstrated that when AI models are trained on data generated by other AI models, their performance degrades over time. The models begin losing information about the tails of their training distributions, the rare and nuanced examples that make outputs rich and accurate. Eventually, the outputs converge to something bland and generic, bearing little resemblance to the original data.

This creates a vicious feedback loop. AI generates content that floods the internet. Future AI models scrape the internet for training data. That training data is increasingly synthetic. The resulting models produce even more homogeneous output, which gets published, scraped, and trained on again. As a Harvard Journal of Law & Technology paper put it, the process is "similar to re-photocopying a picture several times." Each generation loses fidelity.

AI developers are now scrambling to secure datasets of human-generated content collected before the generative AI explosion of 2022, treating pre-synthetic data as a kind of digital fossil fuel.
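The dynamic is easy to reproduce in miniature. The toy simulation below is a sketch of the feedback loop, not the Nature paper's actual setup: each generation fits a simple Gaussian model to data produced by the previous generation's model, then generates the next generation's training data from that fit. The fitted spread shrinks over time, which is exactly the losing-the-tails failure mode.

```python
# Toy model-collapse loop (illustrative sketch only): fit a Gaussian to
# the previous generation's output, then train the next generation on
# samples from that fit. Watch the distribution's tails disappear.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                   # "training set" size per generation
data = rng.normal(0.0, 1.0, n)           # generation 0: real human data

for gen in range(1, 101):
    mu, sigma = data.mean(), data.std()  # "train" a model (a simple MLE fit)
    data = rng.normal(mu, sigma, n)      # next generation trains on the
                                         # current model's output
    if gen % 20 == 0:
        print(f"gen {gen:3d}: fitted std = {sigma:.3f}")

# The fitted standard deviation drifts well below the original 1.0, so
# rare tail events become ever less likely with each generation.
```

Real language models collapse through messier mechanics, but the direction is the same: each generation trained on its predecessor's output sees fewer rare examples than the one before.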
The economics driving the flood
Slop exists because it's profitable. Creating AI-generated content is fast, cheap, and scalable. A single operator can spin up thousands of websites, populate them with AI-written articles, and monetize them through programmatic advertising. DoubleVerify's Fraud Lab identified thousands of AI slop websites in just the first few weeks of 2026, many operating across multiple languages as part of coordinated schemes.

The economics of attention reward volume. Social media algorithms surface content that generates engagement, and AI-generated curiosity bait, from impossible architecture to emotional manipulation, is engineered to trigger clicks and shares. The creators don't need the content to be good. They just need it to be seen.

This dynamic has created what 404 Media described as a "brute force attack on the algorithms that control reality." When producing content costs nearly nothing, the incentive is to produce as much as possible and let the algorithms sort it out.
The Sora reckoning
On March 24, 2026, OpenAI announced it was shutting down Sora, the AI video app that had launched just six months earlier to massive fanfare. Sora had gone viral in late 2025 as a social platform for short-form AI-generated clips, but it also became a poster child for the deepfake and slop concerns that now define the medium. OpenAI said it was winding down Sora to refocus on other priorities, with the research team pivoting to "world simulation" work for robotics. The $1 billion content licensing deal with Disney, signed just three months earlier to bring Mickey Mouse, Cinderella, and Yoda into Sora-generated videos, was cancelled alongside it.

According to the Wall Street Journal, Sora had become a money pit, reportedly burning roughly $15 million a day in inference costs against only $2.1 million in total lifetime revenue. Sora's web and mobile apps will be discontinued on April 26, 2026, with the API following on September 24, 2026.

The shutdown of one of the most prominent AI video generators, just months after its consumer debut, underscores how quickly the economics of slop can unravel. Producing the content is cheap for users; producing it at scale for a platform is ruinously expensive. And when the resulting feed is flooded with deepfakes, uncanny cartoons, and throwaway clips that nobody quite wants to watch, even a company with an $852 billion valuation can decide the math no longer adds up.
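A quick back-of-envelope calculation shows just how lopsided those reported figures are. The sketch below assumes, purely for illustration, that the roughly $15 million daily burn held constant across the app's six-month life; the real daily spend surely varied.

```python
# Rough sanity check on the reported Sora economics (assumes a constant
# daily burn, which is a simplification).
daily_burn = 15_000_000        # reported inference cost per day (USD)
days_live = 183                # roughly six months, launch to shutdown
lifetime_revenue = 2_100_000   # reported total lifetime revenue (USD)

total_burn = daily_burn * days_live
print(f"estimated total burn: ${total_burn / 1e9:.2f}B")                  # about $2.7B
print(f"burn per revenue dollar: {total_burn / lifetime_revenue:,.0f}x")  # about 1,300x
```

Even with generous error bars on both figures, costs exceed revenue by roughly three orders of magnitude, which is the whole story.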
Fighting back, or trying to
The response to the slop crisis has been fragmented but accelerating. Google rolled out its February 2026 core update with explicit penalties for mass AI-generated content, introducing "information gain" as a ranking signal that rewards pages containing genuinely novel insights rather than regurgitated syntheses. It's an acknowledgment that the search giant's own results had been degraded.

Platforms are experimenting with labeling and detection, but the results have been mixed. Meta has admitted it can't reliably detect AI-generated content on its platforms. Watermarking schemes have proven easy to remove or ignore. Instagram head Adam Mosseri suggested a provocative alternative: rather than trying to identify fake content, it may be more practical to "fingerprint" real content instead, essentially flipping the problem on its head by authenticating human-created media.

On the detection side, tools from companies like Pangram Labs and others continue to improve, but they face a fundamental arms race. As AI-generated content gets better, detection gets harder. A Northeastern University research team working on measuring "slop" in text found that existing detection methods still struggle with nuanced or lightly edited AI content.

Some of the most effective resistance has come from communities themselves. Reddit's partnership with Google and OpenAI has elevated user-generated discussion content in search rankings, implicitly signaling that authentic human conversation has value that AI articles lack. Brands that invested in genuine expertise and original perspectives are reporting stronger engagement than those relying on AI content mills.
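To make Mosseri's flipped framing concrete, here is a minimal sketch of a provenance registry. Every name in it is hypothetical, and real provenance efforts (C2PA-style signed manifests, for example) use cryptographic signatures and key infrastructure rather than a bare hash set; the sketch only illustrates the inverted question: not "is this fake?" but "was this ever attested as real?"

```python
# Minimal provenance-registry sketch (all names hypothetical): media is
# fingerprinted at creation time, and verification later asks whether a
# fingerprint was ever attested, rather than trying to detect fakes.
import hashlib

registry: set[str] = set()  # stand-in for an append-only provenance log


def fingerprint(media_bytes: bytes) -> str:
    """Content fingerprint: a cryptographic hash of the raw media bytes."""
    return hashlib.sha256(media_bytes).hexdigest()


def register_authentic(media_bytes: bytes) -> None:
    """Called at capture/creation time, e.g. by a camera or editing tool."""
    registry.add(fingerprint(media_bytes))


def is_authenticated(media_bytes: bytes) -> bool:
    """Flip the question: was this content ever attested as real?"""
    return fingerprint(media_bytes) in registry


photo = b"...raw image bytes from a real camera..."
register_authentic(photo)
print(is_authenticated(photo))                  # True: attested at creation
print(is_authenticated(b"ai-generated bytes"))  # False: never attested
```

The sketch also exposes the approach's obvious weakness: a single recompression or crop changes the bytes and breaks an exact hash, which is why production systems attach signed metadata to the file instead of relying on byte-identical fingerprints.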
What comes next
The slop problem isn't going away. If anything, it's likely to intensify as AI tools become more capable and accessible. The cost of producing passable content will continue to drop, and the volume will continue to rise. But several trends suggest the landscape will evolve in interesting ways.

First, the market is beginning to price authenticity. Audiences are gravitating toward creators and publications with distinct voices and demonstrated expertise. The generic AI article is becoming background noise, easy to produce but hard to distinguish from thousands of identical pieces.

Second, the model collapse research is pushing AI companies to take the slop problem seriously for purely self-interested reasons. If the internet's data commons gets too polluted, the models themselves suffer. This alignment of incentives, where AI companies need clean human data to survive, may prove more effective than any regulation.

Third, the fingerprinting-real-content approach, if it gains traction, could create a new kind of trust infrastructure for the internet. Rather than playing whack-a-mole with fake content, we'd build systems that verify and elevate the real stuff.

The state of AI slop in 2026 is messy, pervasive, and a little bit absurd. Merriam-Webster got it right: "Like slime, sludge and muck, slop has the wet sound of something you don't want to touch. Slop oozes into everything." The question now is whether we'll build the systems to contain it, or learn to live in a world where most of what we read was written by no one at all.