Nobody reads your README
Every engineering team says it. It's in the onboarding docs, the culture decks, the pull request templates: "We value good documentation." It's the universal lie of software development. Everyone agrees documentation matters, and almost nobody reads it.

The README sits at the root of the repo like a welcome mat nobody wipes their feet on. The wiki page was last updated two engineers ago. The API docs reference endpoints that were deprecated in Q3. We've all been there.

But something interesting is happening: AI might be making traditional documentation irrelevant, not by writing better docs, but by making the docs unnecessary in the first place.
The README paradox
The fundamental problem with READMEs is a targeting problem. The people who write them already understand the codebase. The people who need them are the ones least likely to find them, and least equipped to parse them when they do.

A senior engineer writes a README after building a system. They compress weeks of context into a few hundred words, skip the parts that feel obvious to them, and move on. Three months later, a new hire opens that README and finds a document that assumes exactly the knowledge they don't have. They skim it, get confused, and ping someone on Slack instead.

This isn't a writing problem. It's a structural one. Documentation is a snapshot of understanding at a single point in time, written by someone with maximum context for someone with minimum context. That gap almost never closes cleanly.
Documentation rot is inevitable
The moment you ship a document, it starts decaying. Code changes, APIs evolve, infrastructure migrates, but the docs sit still. The Stack Overflow blog put it well: in fast-paced development environments, particularly those adopting Agile methodologies, maintaining up-to-date documentation is challenging because developers deprioritize it due to tight deadlines and a focus on delivering working code.

This isn't laziness. It's incentive design. Nobody gets promoted for updating a README. Shipping features is visible work. Keeping docs accurate is invisible work, until it isn't, and by then the damage is done. Teams stop trusting the documentation entirely. Once trust erodes, people stop checking docs at all, and you've lost the battle before it starts.

The decay compounds. A developer finds one outdated section and assumes the rest is unreliable too. A support team fields the same questions repeatedly because the help articles still reference the old UI. New hires extend their ramp-up time by weeks because they can't tell which docs are current. Every outdated document erodes the credibility of all the others.
AI coding agents as implicit documentation
Here's where things get interesting. AI coding agents like Claude Code, Cursor, and others are changing the relationship between developers and codebases. Instead of reading a compressed summary of how something works, you can ask an agent to analyze the codebase in real time, trace git history for architectural intent, and generate setup instructions dynamically.

As one developer noted, coding agents make traditional READMEs feel obsolete. The agent doesn't need a static document; it can read the code directly. Ask it "why is this service structured this way?" and it'll trace the evolution through commits. Ask it "how do I set up the dev environment?" and it'll inspect the actual configuration files, not a description someone wrote six months ago that may or may not still be accurate.

This flips the documentation model. Instead of write-once-read-maybe, you get query-on-demand-always-current. The code itself becomes the documentation, with the AI as the interpreter.
MCP and tool descriptions as the new docs
The Model Context Protocol, developed by Anthropic, is quietly building a world where machine-readable descriptions replace human-readable documentation. MCP allows servers to expose tools that can be invoked by language models, with each tool uniquely identified by a name and including metadata describing its schema.

Think about what that means. Instead of writing a README that says "this endpoint accepts a user ID and returns profile data," you define a tool with a typed schema, a description, and structured input/output contracts. The AI reads the schema directly. No ambiguity, no drift, no developer who skimmed the docs and missed a required parameter.

The AGENTS.md convention takes this further. Already adopted by over 60,000 open-source repositories on GitHub, it's described as "a README for agents": a dedicated, predictable place to provide the context and instructions AI coding agents need to work on your project. It's documentation, but written for machines first, humans second.

This represents a subtle but significant shift. When your primary documentation consumer is an AI agent rather than a human, the incentives change. Machine-readable documentation can be validated, tested against the actual codebase, and flagged when it drifts. Human-readable prose just sits there, slowly going stale.
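To make the contrast concrete, here's a minimal sketch of a machine-readable tool description in the spirit of MCP's tools spec. The `get_user_profile` tool, its fields, and the `missing_required` helper are all invented for illustration; a real MCP server declares tools through an SDK rather than a bare dict.

```python
# Hypothetical MCP-style tool declaration: a name, a description, and a
# JSON Schema input contract. The schema IS the documentation: an agent
# reads it directly instead of parsing prose.
get_user_profile = {
    "name": "get_user_profile",
    "description": "Fetch profile data for a user by ID.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "user_id": {"type": "string", "description": "Unique user identifier"},
            "include_avatar": {"type": "boolean", "default": False},
        },
        "required": ["user_id"],
    },
}


def missing_required(tool: dict, args: dict) -> list[str]:
    """Return required parameters absent from a call's arguments.

    The point of the sketch: a structured contract can be checked
    mechanically, while a prose README cannot.
    """
    schema = tool["inputSchema"]
    return [p for p in schema.get("required", []) if p not in args]


print(missing_required(get_user_profile, {"include_avatar": True}))  # ['user_id']
```

A hallucinated or drifted `user_id` parameter fails this check immediately, which is exactly the validation story static prose can't offer.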
When the artifact is the documentation
Keychron, the keyboard manufacturer, recently did something remarkable. They published the production-grade CAD files for over 83 of their keyboards on GitHub: cases, plates, keycaps, stabilizers, the works. STEP files, DXF files, engineering drawings. The actual factory blueprints.

This is the logical extreme of "the artifact IS the documentation." You don't need a manual explaining the dimensions of a keyboard plate when you can open the exact CAD file used to manufacture it. The tolerances are exact because they came from the same data used to produce the originals. There's no documentation drift because the document and the product are the same thing.

Software has an analog here. When your infrastructure is defined as code, the Terraform files are more accurate than any architecture diagram. When your API is defined with OpenAPI specs that generate both the server and the docs, the spec is the source of truth. The best documentation is the artifact itself, with tooling that makes it readable.
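The "derived, not authored" idea can be sketched in a few lines. This is illustrative only: the `UserProfile` type, the type mapping, and the `to_schema` helper are simplified assumptions standing in for what real OpenAPI tooling does when it generates a spec from code.

```python
from dataclasses import dataclass, fields

# Simplified mapping from Python types to JSON Schema type names.
# A real generator handles far more cases; this is a sketch.
PY_TO_JSON = {int: "integer", str: "string", bool: "boolean", float: "number"}


@dataclass
class UserProfile:
    user_id: str
    display_name: str
    karma: int


def to_schema(cls) -> dict:
    """Emit a JSON-Schema-style object derived from a dataclass.

    Because the schema is derived from the type definition, it cannot
    drift from it: change the code and the "docs" change with it.
    """
    return {
        "type": "object",
        "properties": {f.name: {"type": PY_TO_JSON[f.type]} for f in fields(cls)},
        "required": [f.name for f in fields(cls)],
    }
```

Rename `karma` to `reputation` and the generated schema updates on the next run; there is no separate document to forget.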
The counter-argument: hallucination is worse than no docs
Before we get too excited, there's a real problem. AI agents hallucinate. They confidently describe functions that don't exist, invent API parameters, and fabricate architectural decisions. An outdated README is bad, but at least it was accurate once. A hallucinated explanation was never accurate.

This is especially dangerous in codebases with complex, implicit conventions. If a project has an unusual deployment process or a non-obvious reason for a particular architectural choice, an AI agent might fill in the gaps with plausible-sounding nonsense. A developer following hallucinated instructions could cause real damage, and they'd have more confidence doing so because the AI sounded authoritative.

The ETH Zurich study on AGENTS.md files found that LLM-generated context files actually reduced task success rates by approximately 3% on average and increased inference costs by over 20%. Even human-curated files provided only a marginal 4% performance gain. The tooling isn't magic yet.
What documentation actually survives
So if READMEs rot and AI hallucinates, what actually works? The documentation that survives tends to share a few traits: it captures decisions rather than descriptions, it lives close to the code, and it answers "why" rather than "what."

Architecture Decision Records are a good example. An ADR captures a single important architectural decision along with its context and consequences. It's not describing what the code does (the code already does that). It's explaining why a particular choice was made, what alternatives were considered, and what tradeoffs were accepted. That context is exactly what neither the code nor an AI agent can reconstruct after the fact.

Decision logs work because they're immutable. You don't update an ADR when the world changes; you write a new one that supersedes the old one. There's no rot because the document was never meant to track a moving target. It's a historical record, and history doesn't go stale.

The other survivor is documentation that's generated from the source of truth rather than maintained alongside it. Type definitions that produce API docs. Infrastructure-as-code that generates architecture diagrams. Test suites that serve as executable specifications. These don't decay because they're derived, not authored.
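The supersession model is simple enough to sketch. The ADR titles below are invented examples, and real ADR tooling works over numbered markdown files rather than dicts; the point is just that "what's current" is derived from the log, never edited into it.

```python
# A sketch of why immutable decision logs stay trustworthy: records are
# never edited, only superseded, so the set of live decisions is derived
# by following supersession links. All entries here are invented.
adrs = {
    1: {"title": "Use Postgres for primary storage", "superseded_by": 3},
    2: {"title": "Deploy auth as a separate service", "superseded_by": None},
    3: {"title": "Move user events to a log-structured store", "superseded_by": None},
}


def current_decisions(log: dict) -> list[str]:
    """Titles of decisions that no later record supersedes.

    ADR 1 stays in the log as history even though ADR 3 replaced it;
    nothing is overwritten, so nothing silently rots.
    """
    return [rec["title"] for rec in log.values() if rec["superseded_by"] is None]
```

The superseded record is still there to answer "why did we ever do it the old way?", which is the question a rewritten wiki page can no longer answer.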
The practical path forward
None of this means you should delete your README and trust the vibes. The realistic path is a layered approach.

For the "what" and "how" of your codebase, lean into AI-readable formats: good type definitions, MCP tool schemas, AGENTS.md files, and well-structured code that an agent can interpret. These replace the getting-started guides and API references that nobody was reading anyway.

For the "why" behind your architecture, write ADRs. Keep them in the repo, version-controlled, immutable. An AI can read your code and tell a developer what it does. It can't tell them why you chose Postgres over DynamoDB, or why the auth service is a separate deployment. Those decisions need to be recorded by the humans who made them.

For everything in between, accept that some documentation will decay, and design for it. Date-stamp everything. Assign owners. Build review cycles. And increasingly, use AI tools to flag when docs have drifted from the codebase they describe.

The README isn't dead. But its role is narrowing. The future of documentation isn't better writing. It's better artifacts, better tooling, and the honesty to admit that most of what we've been writing, nobody was reading anyway.
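As a coda, the "flag when docs have drifted" idea above doesn't require AI at all to get started; a date comparison catches the worst offenders. The file paths, mapping, and dates below are invented, and in practice you'd pull both sides from `git log` rather than hardcoding them.

```python
from datetime import date

# Invented example data: which source files each doc claims to describe,
# and when each file was last touched. In a real setup, derive both from
# version control history instead of hardcoding.
DOC_COVERS = {
    "docs/deploy.md": ["infra/main.tf", "scripts/deploy.sh"],
    "docs/api.md": ["api/routes.py"],
}
LAST_TOUCHED = {
    "docs/deploy.md": date(2024, 1, 10),
    "infra/main.tf": date(2024, 6, 2),
    "scripts/deploy.sh": date(2023, 12, 1),
    "docs/api.md": date(2024, 7, 1),
    "api/routes.py": date(2024, 5, 15),
}


def stale_docs(covers: dict, touched: dict) -> list[str]:
    """Docs last edited before any of the files they document."""
    return [
        doc for doc, srcs in covers.items()
        if any(touched[s] > touched[doc] for s in srcs)
    ]
```

Here `docs/deploy.md` gets flagged because `infra/main.tf` changed after the doc did. It's a blunt heuristic with false positives, but a blunt heuristic that runs in CI beats an accurate intention that never does.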
References
- Stack Overflow Blog, "Why do developers love clean code but hate writing documentation?" https://stackoverflow.blog/2024/12/19/developers-hate-documentation-ai-generated-toil-work/
- AGENTS.md, "A simple, open format for guiding coding agents" https://agents.md/
- InfoQ, "AGENTS.md Emerges as Open Standard for AI Coding Agents" https://www.infoq.com/news/2025/08/agents-md/
- Model Context Protocol specification, Tools https://modelcontextprotocol.io/specification/2025-06-18/server/tools
- Anthropic, "Introducing the Model Context Protocol" https://www.anthropic.com/news/model-context-protocol
- Augment Code, "How to Build Your AGENTS.md" https://www.augmentcode.com/guides/how-to-build-agents-md
- TechSpot, "Keychron shares 3D keyboard blueprints on GitHub, opening hardware to modders" https://www.techspot.com/news/112022-keychron-shares-3d-keyboard-blueprints-github-opening-hardware.html
- Keychron Hardware Design GitHub repository https://github.com/Keychron/Keychron-Keyboards-Hardware-Design
- ADR GitHub, "Architectural Decision Records" https://adr.github.io/
- Microsoft Azure Well-Architected Framework, "Maintain an architecture decision record" https://learn.microsoft.com/en-us/azure/well-architected/architect-role/architecture-decision-record