Describe what you want
If you've ever worked with a client who couldn't articulate what they wanted, you know the pain. There's a whole genre of memes about it. The designer builds exactly what was asked for, the client hates it, and everyone blames the process. The real problem was never the process. It was the description. This problem hasn't gone away. It's gotten bigger. We now have tools that can build almost anything you can describe, from code to essays to entire applications. The bottleneck has shifted from execution to articulation. The most important skill in 2026 isn't knowing how to build. It's knowing how to describe what you want.
The old client problem
Software development has been wrestling with this for decades. The entire discipline of requirements engineering exists because people struggle to describe what they want. We invented functional requirements, user stories, acceptance criteria, wireframes, and prototypes, all because "I'll know it when I see it" doesn't scale.

The consequences of vague descriptions are well documented: scope creep, missed deadlines, budget overruns, products that technically meet the spec but miss the point entirely. A 2025 study published on ScienceDirect found that the quality of textual product requirements varies enormously across organizations, and that ambiguity in natural-language specifications remains one of the top causes of downstream defects. We built entire careers around translating fuzzy human intent into precise technical specifications. Business analysts, product managers, UX researchers: they all exist because describing what you want turns out to be genuinely hard.
The same problem, amplified
Now we have LLMs. And suddenly, the ability to describe what you want isn't just a professional skill for product managers. It's a universal skill for anyone who uses a computer. LLMs are prediction engines trained on human language. They don't read your mind. They predict the most likely response given the input you provide. A vague prompt produces a vague output. A precise prompt produces a precise output. The quality gap between the two is enormous.

Researchers at Emory University developed the DETAIL framework to measure exactly this effect. Their experiments across GPT-4 and O3-mini showed that prompt specificity directly improves accuracy, especially for procedural tasks. More specific descriptions consistently led to better results. This isn't surprising to anyone who has spent time with these tools, but it's useful to have the data confirm what experience already teaches. As one analysis put it, a well-engineered prompt can increase accuracy by 57% on smaller models and 67% on GPT-4. The difference between a good description and a bad one isn't marginal. It's the difference between useful output and garbage.
Prompting is a skill, not a trick
I think a lot of people still underestimate what it takes to prompt well. It's been years since ChatGPT launched, and if you haven't been using these tools daily, you've probably fallen behind on understanding what they can and can't do. Prompting isn't about memorizing templates or magic phrases. It's a skill built through experience, through hundreds of interactions where you learn how the model responds to different framings, how much context is too much, when to be explicit and when to let the model infer. You develop an intuition for what works, and that intuition only comes from repetition.

The people who get the most out of LLMs aren't the ones who know a secret technique. They're the ones who can decompose a complex task into clear steps, provide the right context without overloading the request, and specify what success looks like. These are the same skills that make someone good at writing technical specs, managing projects, or communicating with a team. As Nate's Newsletter put it, most people think they're good at prompting when they're actually just good at chatting with AI. Chatting is rapidly becoming table stakes. The real leverage comes from structured specification, from describing what you want with enough precision that an autonomous agent can execute it without hand-holding.
We've come full circle
Here's what I find fascinating about all of this. The client problem, the one we spent decades building processes to solve, is the exact same problem we now face with LLMs. And the solution is converging on the same answer: write better specifications.

Spec-driven development is one of the most significant trends in AI-assisted engineering right now. The idea is simple. Instead of iteratively prompting an AI and hoping it converges on what you want, you write a detailed specification first, then hand it to the AI to implement. GitHub released an open-source toolkit called Spec Kit built around this exact workflow. Birgitta Böckeler at ThoughtWorks identified three levels of the practice: spec-first, where you write the spec before coding; spec-anchored, where the spec is maintained alongside the code; and spec-as-source, where the spec becomes the primary artifact and humans never touch the code directly.

This isn't a new idea dressed up in AI clothing. It's requirements engineering, rediscovered. The same discipline that gave us functional requirements documents and acceptance criteria is now the foundation for getting good output from AI coding agents. As Den Delimarsky wrote on the GitHub Blog, "code is really not the best medium for requirements negotiation." Specs describe intent in structured, testable language, and AI agents generate code to match. The companies and developers who are shipping the best AI-assisted work aren't the ones with the fanciest tools. They're the ones who got specific about what they wanted before they started building.
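To make "structured, testable language" concrete, here is a toy sketch in Python. Everything in it is my own illustration, not Spec Kit's actual format: the `slugify` function, the spec layout, and the check loop are invented for the example. The point is that intent is stated as acceptance criteria that any generated implementation can be verified against.

```python
import re

# A toy spec: intent plus acceptance criteria, stated before any code exists.
spec = {
    "name": "slugify",
    "intent": "Turn a title into a URL-safe slug",
    "acceptance": [
        ("Hello, World!", "hello-world"),
        ("  Spaces   everywhere ", "spaces-everywhere"),
        ("Already-fine", "already-fine"),
    ],
}

# An implementation an AI agent might produce from that spec.
def slugify(title: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The spec, not the code, is the source of truth: verify the code against it.
failures = [
    (given, want, slugify(given))
    for given, want in spec["acceptance"]
    if slugify(given) != want
]
```

The useful part is the direction of authority: when the checks fail, you fix the code or renegotiate the spec, but the spec remains the artifact that records what you wanted.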
Why this matters beyond code
This isn't just about software development. The same principle applies everywhere LLMs are used. Writing a marketing brief for an AI assistant? The quality of the brief determines the quality of the output. Asking an AI to analyze a dataset? The precision of your question determines the precision of the answer. Using an AI agent to automate a workflow? The clarity of your instructions determines whether the agent does what you intended. Every interaction with an LLM is a description exercise. You're translating intent into language, and the model is translating that language into action. The gap between what you mean and what you say is where errors live.

This is also why the skill compounds. The more domains you understand, the better you can describe what you want in each of them. A developer who understands both the technical implementation and the business context will write better prompts than someone who only understands one side. A marketer who understands their audience deeply will get better AI-generated content than someone who can only describe surface-level attributes.
The description gap
There's a real and growing gap between people who can describe what they want and people who can't. On one side, you have someone who spends eleven minutes writing a structured specification, hands it to an autonomous agent, and comes back to finished work that hits every quality bar. On the other side, you have someone who types a vague request, gets something 70% right, and spends forty minutes cleaning it up. Both are using the same model. Both are paying the same subscription. The difference is entirely in the description.

This gap will only widen as AI tools become more capable. The ceiling on what you can achieve with a good description keeps rising. Better models don't eliminate the need for good descriptions, they amplify the returns from having one. A powerful model with a vague prompt still produces mediocre work. A powerful model with a precise specification produces something that would have taken weeks to build manually.
Practical implications
If there's one skill worth investing in right now, it's this: learn to describe what you want with precision.

Start by being specific about outcomes. Don't say "make this better." Say what better means. Faster? More readable? More persuasive? For which audience? Under what constraints? Provide context that changes the answer. Not everything you know is relevant, but the details that affect trade-offs and priorities are essential. Share the constraints, the audience, the format requirements, the success criteria. Break complex requests into smaller, well-defined steps. A single prompt trying to do everything usually produces worse results than a sequence of focused prompts that build on each other. And most importantly, practice. The gap between a novice prompter and an experienced one isn't knowledge, it's reps. Every interaction teaches you something about how models interpret language, what level of detail they need, and where they tend to go wrong.

The irony is beautiful. We spent decades trying to get clients to describe what they wanted. We built entire industries around requirements gathering and specification writing. And now, in the age of AI, the most valuable skill turns out to be exactly that. The ability to describe what you want. It was always the bottleneck. We just didn't realize it was the whole game.
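That checklist can be condensed into a small sketch. The `PromptSpec` helper below is entirely hypothetical, a minimal illustration of my own and not any real library's API; it just forces the questions this section raises: what does better mean, for whom, under what constraints, and how will you know it worked.

```python
from dataclasses import dataclass, field

@dataclass
class PromptSpec:
    """Hypothetical container for the parts of a precise request."""
    goal: str                 # the specific outcome, not "make this better"
    audience: str             # who the output is for
    constraints: list = field(default_factory=list)
    success_criteria: list = field(default_factory=list)

    def render(self) -> str:
        """Assemble the pieces into one structured prompt."""
        lines = [f"Goal: {self.goal}", f"Audience: {self.audience}"]
        if self.constraints:
            lines.append("Constraints:")
            lines.extend(f"- {c}" for c in self.constraints)
        if self.success_criteria:
            lines.append("Success criteria:")
            lines.extend(f"- {s}" for s in self.success_criteria)
        return "\n".join(lines)

# A vague request, upgraded into a description the model can act on.
spec = PromptSpec(
    goal="Rewrite the onboarding email to reduce drop-off",
    audience="First-time users who signed up but never activated",
    constraints=["Under 120 words", "No jargon"],
    success_criteria=["One clear call to action", "Mentions the free tier"],
)
prompt = spec.render()
```

Whether you keep the structure in code, a template, or just in your head, the exercise is the same: every field you can't fill in is a gap the model will fill for you, probably wrongly.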
References
- Kim, O. "DETAIL Matters: Measuring the Impact of Prompt Specificity on Reasoning in Large Language Models." Emory University, 2025. https://arxiv.org/html/2512.02246v1
- Zadenoori, M. A. et al. "Large Language Models (LLMs) for Requirements Engineering (RE): A Systematic Literature Review." 2025. https://arxiv.org/abs/2509.11446
- "Advancing Requirements Engineering with Large Language Models." ScienceDirect, 2025. https://www.sciencedirect.com/science/article/pii/S221282712500873X
- Böckeler, B. "Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl." Martin Fowler's Blog. https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html
- Delimarsky, D. "Spec-driven development with AI: Get started with a new open source toolkit." GitHub Blog, September 2025. https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/
- Aguilar, A. "The Complete Prompt Engineering Guide for 2025: Mastering Cutting-Edge Techniques." Medium. https://aloaguilar20.medium.com/the-complete-prompt-engineering-guide-for-2025-mastering-cutting-edge-techniques-dfe0591b1d31
- "Prompting just split into 4 different skills." Nate's Newsletter, February 2026. https://natesnewsletter.substack.com/p/prompting-just-split-into-4-different
- "26 principles for prompt engineering to increase LLM accuracy 57%." Codingscape. https://codingscape.com/blog/26-principles-for-prompt-engineering-to-increase-llm-accuracy
- "Spec-driven development: Unpacking one of 2025's key new AI-assisted engineering practices." ThoughtWorks. https://www.thoughtworks.com/en-us/insights/blog/agile-engineering-practices/spec-driven-development-unpacking-2025-new-engineering-practices
- Delimarsky, D. "Diving Into Spec-Driven Development With GitHub Spec Kit." Microsoft Developer Blog. https://developer.microsoft.com/blog/spec-driven-development-spec-kit