Is that AI?
Something has changed in the way we look at things. A stunning photo, a moving essay, a piece of cover art: the first reaction isn't wonder or curiosity. It's suspicion. "Is that AI?"
The new default reaction
It crept up slowly, then all at once. Somewhere between the first viral deepfakes and the thousandth AI-generated LinkedIn post, we developed a new reflex. We stopped asking "who made this?" and started asking "what made this?" It doesn't matter if it's a photograph of a sunset, a heartfelt caption, or a piece of concept art that someone spent weeks on. The question arrives before appreciation does. Before we let ourselves feel anything about what we're looking at, we need to rule something out first. And that's the part that stings.
When humans get accused of being machines
In mid-2025, video game players accused the developers of Little Droid of using AI to create their cover art. The art appeared in the game's launch trailer on YouTube, and the internet was quick to call it out. Except it wasn't AI. It was carefully designed by a human artist. The developers had to publicly defend the work as authentically human-made. This is the strange new territory we're in. Real human work now has to prove it's real. The burden of proof has flipped. We used to assume things were made by people unless told otherwise. Now we assume the opposite.
The trust numbers tell a story
The shift isn't just vibes. Research backs it up. A study published in PNAS Nexus found that people are less likely to believe or share content labeled as "AI-generated," even when that content is actually true, or even when it was made by a human. The mere suggestion of AI involvement was enough to trigger skepticism. Meanwhile, consumer preference for AI-generated content has dropped sharply. According to influencer marketing agency Billion Dollar Boy, only 26% of consumers now prefer generative AI creator content, down from 60% in 2023. That's a massive reversal in just two years. And when AI attempts emotional content, the response is even harsher. Research published in the Journal of Business Research found that AI-authored emotional communications can trigger what researchers call "moral disgust," reducing brand loyalty and positive word of mouth. People expect emotional expression to come from emotional beings.
The authenticity tax
There's a cost to all of this, and it's not just paid by AI companies. It's paid by every person who creates something genuine and now has to wonder whether anyone will believe them. Artists watermark their process videos. Writers disclaim that they didn't use ChatGPT. Photographers post their RAW files. Musicians show their session footage. The creative process has acquired a new, exhausting step: proving you're human. This is what you might call the authenticity tax. It's the extra labor required to convince an increasingly skeptical audience that your work is yours. And it falls hardest on independent creators who don't have established reputations to lean on.
How did we get here?
The honest answer: speed and volume. Generative AI made it trivially easy to produce content that looks polished. Social feeds filled up with AI-generated images, articles, and videos, much of it unlabeled. People got burned. They shared things that turned out to be fake. They felt foolish. So they overcorrected. Now everything gets the side-eye. The problem isn't that people are skeptical of AI. Skepticism toward AI output is healthy and probably necessary. The problem is that the skepticism has become so broad that it's eroding trust in everything, including the real, human, imperfect things that deserve to be appreciated on their own terms. As researchers at the University of Melbourne have noted, this growing distrust doesn't just affect our media consumption. It can degrade trust in public discourse and even personal relationships. When you can't be sure if anything is real, the default becomes doubt.
What we lose when we stop trusting our eyes
There's something deeply human about encountering a piece of work and being moved by it. A photograph that captures a fleeting moment. A paragraph that articulates something you've felt but couldn't name. A song that makes you pull over your car. The "is that AI?" reflex short-circuits that experience. It inserts a verification step between encounter and emotion. And even if the answer turns out to be "no, a person made this," something is lost in the asking. The spell is already broken. We've trained ourselves to be detectors before we allow ourselves to be audiences.
Finding a way back
This isn't a call to be naive. AI-generated content does require scrutiny, especially in news, politics, and anywhere trust matters. But there's a difference between healthy media literacy and reflexive cynicism. A few things that might help:
- Better labeling standards. If platforms and creators consistently label AI-generated content, audiences won't have to guess. Transparency reduces the need for suspicion.
- Valuing process over polish. The messiest, most imperfect human work is often the most resonant. Leaning into the rough edges, the stuff AI can't replicate, may become the strongest signal of authenticity.
- Giving the benefit of the doubt. Before asking "is that AI?", try asking "does this move me?" If it does, maybe that's enough to start with.
The real question
"Is that AI?" is a reasonable question in 2026. But it shouldn't be the first question. And it definitely shouldn't be the only question. The sadder thing isn't that AI exists. It's that we've let it colonize our instincts to the point where wonder requires a disclaimer. Where beauty is guilty until proven innocent. Maybe the better question, the one worth asking more often, is simpler: "What do I feel when I look at this?" And then letting yourself sit with the answer before reaching for the doubt.
References
- Coghlan, S. & Sparrow, L. (2025). "Distrust in AI is on the rise, but along with healthy scepticism comes the risk of harm." The Conversation / University of Melbourne.
- Altay, S. et al. (2024). "People are skeptical of headlines labeled as AI-generated, even if true or human-made." PNAS Nexus, 3(10).
- Journal of Business Research (2024). The AI-authorship effect on consumer responses to emotional content.
- Digiday (2025). "After an oversaturation of AI-generated content, creators' authenticity and 'messiness' are in high demand."
- Saxena, P. (2025). "Authenticity in the Age of AI." California Management Review.
- PRMS (2025). "A Trust Crisis: The Mental Health Implications of AI's Erosion of Reality."