Why every AI product looks the same
Open any AI product released in the last two years and you will notice something strange. They all look the same. A text box at the bottom. A scrolling thread of messages. Maybe a sidebar with conversation history. Whether it is a coding assistant, a writing tool, a customer support bot, or a research companion, the interface is almost always a chat window. We have the most transformative technology in a generation, and the best we have come up with is a slightly fancier group chat. This is not a criticism. It is an observation about where we are in the cycle. And honestly, it is one of the most exciting parts.
The gravitational pull of chat
When ChatGPT launched in late 2022, the chat interface was a stroke of genius. It was instantly familiar. Everyone knows how to type a message and hit send. The barrier to entry was zero, and that accessibility is what made LLMs a consumer phenomenon almost overnight. But what started as a smart default quickly became a gravitational well. Every new AI product copied the pattern, not because chat was the best interface for their use case, but because it was the safest bet. Investors understood it. Users recognised it. Designers could ship it fast. The result is a sea of products that feel interchangeable. As one observer put it, 95% of AI-generated UIs look exactly the same, recycling the same themes instead of inventing new ones. Swap the logo and the colour scheme and you would struggle to tell most AI tools apart.
Why the sameness runs deeper than UI
The visual similarity is just the surface. Underneath, many of these products are architecturally identical too. A large number of AI startups are what the industry calls "LLM wrappers", applications that layer a thin interface on top of the same foundation models from OpenAI, Anthropic, or Google. The core intelligence is rented, not owned. This creates a compounding sameness problem. When thousands of products use the same underlying model, the outputs converge. The reasoning patterns are similar. The tone of voice is similar. The limitations are similar. Differentiation becomes cosmetic: a different system prompt, a branded colour palette, a niche marketed to a specific vertical. The AI wrapper debate has sparked real anxiety in the startup world. If OpenAI or Anthropic can replicate your feature with a single model update, what exactly is your moat? The honest answer for many products is: not much.
The real problem is a lack of imagination
Cristóbal Valenzuela, CEO of Runway, made an interesting point when asked about AI's sameness problem. He compared it to handing an IMAX camera to someone who is not a filmmaker. The output will not match what Christopher Nolan can produce, not because the technology is limited, but because the creative vision behind it is. The sameness is a human problem, not a technology problem. This applies directly to product design. The chat interface is a default, not a destiny. It works brilliantly for some things (open-ended exploration, brainstorming, quick questions) but it is a poor fit for many others. As designer Amelia Wattenberger has argued, text inputs have no affordances: they give the user zero indication of what is possible, what the system can do, or what a good interaction looks like. We are defaulting to chat because we have not yet done the hard creative work of imagining what else is possible.
The experiments happening at the edges
The good news is that some teams are starting to break free. The most interesting AI products right now are the ones rejecting the chat-first paradigm entirely.

Generative UI is one of the more promising directions. Instead of responding with text, the AI generates interactive interface components on the fly: forms, visualisations, tools, whatever the task actually requires. Google has started rolling this out in Gemini and Search, creating custom visual experiences for each prompt rather than forcing everything through a text stream.

Agentic interfaces take a different approach. Rather than waiting for you to type a command, the system monitors context, anticipates needs, and takes action with your permission. The interaction model shifts from "ask and receive" to "delegate and review." Products like Notion's AI agents, or tools that can browse the web and execute multi-step workflows, are early examples of this pattern.

Ambient AI pushes even further. Rocket Money, for instance, analyses your transactions and surfaces insights without you ever opening a chat window. The AI does the work in the background, and the interface is just the results. No prompt required.

Spatial and contextual interfaces are emerging too. Instead of a linear thread, some tools are experimenting with canvas-based interactions, where AI outputs live alongside your own work in a shared space that you can arrange, connect, and build on.

These are not mature paradigms yet. They are experiments. And that is exactly the point.
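To make the generative-UI idea concrete, here is a minimal sketch in Python. The model call is stubbed out and the spec format, component names, and `render` helper are all hypothetical (no vendor's actual API): the point is only the shape of the pattern, where the model emits a structured component tree instead of prose and the client decides how to render it.

```python
# Hypothetical sketch of generative UI: the model returns a component
# spec (here, a plain dict) rather than text, and the client renders it.

def fake_model_response(prompt: str) -> dict:
    """Stand-in for an LLM call that emits a UI spec instead of a reply."""
    return {
        "component": "form",
        "title": "Book a flight",
        "fields": [
            {"component": "date_picker", "label": "Departure"},
            {"component": "select", "label": "Cabin",
             "options": ["Economy", "Business"]},
        ],
    }

def render(spec: dict, depth: int = 0) -> str:
    """Walk the spec tree and emit a plain-text preview of the UI."""
    pad = "  " * depth
    kind = spec["component"]
    if kind == "form":
        lines = [f"{pad}[form] {spec['title']}"]
        lines += [render(field, depth + 1) for field in spec["fields"]]
        return "\n".join(lines)
    if kind == "date_picker":
        return f"{pad}[date] {spec['label']}"
    if kind == "select":
        return f"{pad}[select] {spec['label']}: {' / '.join(spec['options'])}"
    return f"{pad}[unsupported] {kind}"  # degrade gracefully on unknown parts

print(render(fake_model_response("book me a flight")))
```

A real implementation would render actual components and validate the spec against a schema, but the division of labour is the same: the model proposes structure, the client owns presentation.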
We are in the exploration phase
Here is what I find genuinely exciting about this moment. We have this shiny new technology, and we are still figuring out the best way to interact with it. Every wave of computing goes through this. Early websites looked like digital brochures. Early mobile apps were just shrunken desktop software. It took years of experimentation before we arrived at the interaction patterns that felt native to each medium. Pull-to-refresh, infinite scroll, swipe navigation: none of these existed on day one. They were invented through trial and error by teams willing to break from convention. We are in that same exploratory phase with AI. The chat window is our digital brochure moment. It works, but it is not the final form. Sandboxes, generative UI, agentic workflows, copilot patterns, ambient intelligence: these are all attempts to discover what the native interaction model for AI actually looks like. The patterns we eventually settle on probably have not been invented yet. And that is fine. We are getting there, one experiment at a time.
What this means if you are building
If you are building an AI product right now, the temptation to ship another chat interface is strong. It is fast, familiar, and fundable. But the products that will matter in five years are the ones solving for the interaction, not just the intelligence. A few principles that seem to separate the interesting work from the noise:
- Start from the task, not the model. What is the user actually trying to accomplish? A chat box might be the answer, but so might a dashboard, a canvas, a notification, or nothing visible at all.
- Reduce the prompt burden. The best AI interactions require the least typing. If your user has to write a paragraph to get value, your interface is doing too little work.
- Make AI actions visible. When AI does things in the background, show the receipts. Trust comes from transparency, not from hiding complexity behind a friendly avatar.
- Embrace constraints. Paradoxically, limiting what the AI can do in a given context often makes it more useful. A general-purpose chat can do anything, which means it guides you toward nothing.
The window is wide open
For once in the history of computing, the playbook has not been written yet. The companies and builders who will define the next decade of software are not the ones with the best models. They are the ones with the best ideas about how humans and AI should work together. We are all standing at the same starting line. The models are available to everyone. The APIs are commoditised. The real differentiator is taste, craft, and the willingness to try something that does not look like everything else. Every AI product looks the same right now. That is not a problem. It is an invitation.
References
- Alex Kantrowitz, "AI's Sameness Problem," Big Technology, October 2025
- S M Roqunuzzaman, "Every Design Looks the Same Now. AI Tools Are Why," Medium, February 2026
- Amelia Wattenberger, "Why Chatbots Are Not the Future of Interfaces," wattenberger.com
- Alex Fuentes, "The Evolution of AI Interfaces: From Chat to Generative UI," Medium
- Google Research, "Generative UI: A Rich, Custom, Visual Interactive User Experience for Any Prompt," research.google, November 2025
- Pete Trainor, "The Best Interface Is Invisible: Rethinking UX and Design for Agentic AI," Medium, February 2026
- Jeffrey Bowdoin, "Beyond the Blank Slate: Escaping the AI Wrapper Trap," jeffreybowdoin.com
- Carolyn Nieberding, "Beyond Chat: The 3 Modalities of AI You Should Be Using," Medium, November 2025
- Arin Bhowmick, "9 UX Design Shifts That Will Shape 2026," Forbes, December 2025