Is AI-written content bad today?
Remember when ChatGPT first burst onto the scene and everyone became an amateur detective? Teachers were scrutinizing student essays, bosses were side-eyeing emails, and the internet collectively decided that the em dash was the smoking gun of AI-generated text. If your writing had too many of those long dashes, you were guilty until proven human. That was around 2022 and 2023. It's been about four years since then. The world has changed, and so has the conversation around AI-written content. So the question is: is it still bad to have something written by AI today?
The em dash era
The early days of mainstream generative AI were defined by suspicion. People wanted a simple test, a single tell that would expose machine-written text. The em dash became that tell. ChatGPT, it turned out, loved em dashes. The models used them constantly, and because most people don't have a dedicated em dash key on their keyboard, the reasoning went like this: if you see a lot of em dashes, it's probably AI. Reddit threads, OpenAI forums, and tech blogs all ran with the idea. One New York Times piece noted that "a weird consensus congealed: that humans do not use dashes," even though plenty of human writers have always used them freely. The irony was thick. Writers who had been using em dashes for years suddenly found themselves suspected of being bots. As one writer put it, an entire punctuation mark had been "invalidated." The em dash was never a reliable signal; it was just the most visible quirk of early AI output that people could latch onto.
AI detection turned out to be harder than anyone expected
Beyond the em dash, a whole industry of AI detection tools sprang up. Turnitin added AI detection features. GPTZero, Copyleaks, and others promised high accuracy rates. Schools and publishers rushed to adopt them. The results were, to put it gently, inconsistent. A Los Angeles Times investigation found that the same piece of text could receive a 92% "human" score from one tool and a 99.7% "likely AI" score from another. Fully human-written content was flagged as AI. AI-generated content with minor edits passed without issue. MIT Sloan's teaching resources advised educators that "AI detection software is far from foolproof" and noted that OpenAI itself shut down its own detection tool because of poor accuracy. Academic research backed this up. A 2024 study found that 94% of undergraduate exams written by ChatGPT went undetected by graders at a British university. A 2025 study confirmed that people can distinguish AI-generated text from human writing only slightly better than chance. A CISPA study with around 3,000 participants across Germany, China, and the US found that people were largely unable to tell AI-generated content from human-generated content across text, images, and audio. The detection arms race, in other words, has not produced a clear winner. The tools keep improving, but so do the models they're trying to catch.
The shift from "is it AI?" to "is it useful?"
While the detection debate raged on, something quieter happened: people just started using AI. A lot. According to a Menlo Ventures survey, 61% of American adults have used AI in the past six months, with nearly one in five relying on it daily. Scaled globally, that translates to roughly 1.7 to 1.8 billion people. McKinsey's 2025 State of AI report found that nearly nine out of ten organizations are regularly using AI. In content marketing specifically, 97% of marketers planned to use AI to support content efforts in 2026, up from 64.7% in 2023. This isn't fringe adoption. It's mainstream. AI is being used to draft emails, summarize documents, brainstorm ideas, write code, generate marketing copy, and much more. The question has quietly shifted from "was this written by AI?" to "is this actually good?"
The stigma hasn't disappeared, but it has changed shape
That said, the stigma around AI-written content hasn't vanished entirely. It has just become more nuanced. A 2025 Pew Research survey found that 53% of Americans believe AI will worsen people's ability to think creatively. There's a genuine concern, not necessarily about the text itself, but about what relying on AI does to the person using it. Are we outsourcing our thinking? Are we losing the ability to write, reason, and create on our own? In education, the tension is still real. Universities are still debating disclosure policies. A ResearchGate discussion from early 2025 asked whether it's time to "move beyond stigmatizing AI tools" or whether disclosure should remain mandatory. The OECD's 2026 Digital Education Outlook is exploring frameworks for responsible AI use in learning. The consensus is slowly forming that the focus should be on the quality and integrity of the output, not the tool used to produce it, but that consensus isn't universal yet. In professional contexts, the mood has shifted more decisively. Grammarly's 2025 trend report noted that consumers are "increasingly comfortable interacting with AI but want to know when AI has generated experiences and content." Transparency, not avoidance, is becoming the expectation. A Forbes piece went further, arguing that consumers "not only appreciate AI-generated content, they also trust it and want more of it."
What actually matters now
The real conversation in 2026 isn't about whether AI wrote something. It's about a few more specific questions:

- Is the content accurate? AI models still hallucinate. They can sound confident while being completely wrong. The responsibility for fact-checking hasn't gone away; it has just moved from the tool to the person using it.
- Is the content thoughtful? There's a difference between using AI to help structure your thinking and having AI do all the thinking for you. Readers can often sense when something feels generic or hollow, even if they can't pinpoint why. The best AI-assisted content still has a human perspective running through it.
- Is the use transparent? In contexts where trust matters, like journalism, academia, and professional advice, disclosing AI use is becoming a baseline expectation. Not because AI is shameful, but because honesty about process builds credibility.
- Does it respect the reader's time? AI makes it trivially easy to produce large volumes of text. That doesn't mean you should. The flood of low-quality, AI-generated SEO content has made readers more skeptical of anything that feels like filler. Quality still wins.
The new normal
Four years after ChatGPT launched, the answer to "is AI-written content bad?" is the same as the answer to "is content written with a word processor bad?" The tool doesn't determine the quality. The person using it does. The em dash panic, the detection arms race, the moral hand-wringing: these were all stages of a society coming to terms with a genuinely new technology. That adjustment period isn't fully over, but the direction is clear. AI is becoming just another part of how people write, think, and create. The stigma is fading, replaced by a more practical set of questions about accuracy, transparency, and value. The people who will thrive aren't the ones avoiding AI or the ones blindly relying on it. They're the ones who use it thoughtfully, who bring their own judgment and perspective to the process, and who understand that the real measure of good writing has never been about the tool. It's always been about the thinking behind it.
References
- "With the Em Dash, A.I. Embraces a Fading Tradition," The New York Times, September 2025.
- "The Em Dash Dilemma: How a Punctuation Mark Became AI's Stubborn Signature," Brent Csutoras, Medium.
- "How Accurate Are AI Detectors in 2025?," Los Angeles Times.
- "AI Detectors Don't Work. Here's What to Do Instead," MIT Sloan Teaching & Learning Technologies.
- "Too Many Em Dashes? Spotting Text Written by Chatbots Is Still More Art Than Science," Indiana Capital Chronicle, August 2025.
- "New Results in AI Research: Humans Barely Able to Recognize AI-Generated Media," CISPA.
- "2025: The State of Consumer AI," Menlo Ventures.
- "The State of AI: Global Survey 2025," McKinsey & Company.
- "51 AI Writing Statistics To Know in 2026," Siege Media.
- "How Americans View AI and Its Impact on People and Society," Pew Research Center, September 2025.
- "Should the Use of ChatGPT in Academic Writing Be Disclosed, or Is It Time to Move Beyond Stigmatizing AI Tools?," ResearchGate discussion, January 2025.
- "14 AI Trends to Watch For in 2025," Grammarly.
- "AI-Generated Content: The Future Consumers Love, Trust and Demand," Forbes, July 2025.
- "How Do Professors Detect AI in 2026?," Thesify.
- "It's Starting to Look Like We'll Never Come Up With a Good Way to Tell What Was Written by AI," Fortune, December 2025.