People still write code by hand
If everyone around you is using AI to write code and you're not, you're not preserving a craft. You're falling behind. It's 2026, and a surprising number of developers still write every line of code by hand. Some do it out of pride. Some out of habit. Some because they don't trust the output. But whatever the reason, the window for treating AI-assisted coding as optional is closing fast.
The numbers don't lie
The shift is already well underway, and the data makes it hard to ignore:

- Google's CEO Sundar Pichai revealed during a Q3 2024 earnings call that more than 25% of all new code at Google is generated by AI, then reviewed and accepted by engineers. That number has only grown since; some reports suggest AI now assists with closer to 50% of code at Google when you include character-level completions and suggestions.
- Google isn't an outlier. According to Sonar's 2025 State of Code report, developers estimate that 42% of the code they commit is now AI-assisted, and they expect that figure to reach 65% by 2027.
- JetBrains' 2025 State of Developer Ecosystem survey found that 85% of developers regularly use AI tools for coding and development, with 62% relying on at least one AI coding assistant daily.
- A study published in Science found that in the U.S., the share of new code relying on AI rose from 5% in 2022 to 29% in early 2025, a nearly six-fold increase in just three years.

The trend is clear. AI-assisted coding is no longer experimental. It's operational.
We're at the floor, not the ceiling
Here's what makes this moment so important: the AI tools available today are the worst they will ever be. Every major model, whether it's GPT, Claude, Gemini, or the open-source alternatives, is improving at a rapid pace. Context windows are getting larger. Code understanding is getting deeper. The tools are learning to handle more complex, multi-file tasks with fewer errors. If AI can already help developers save an hour or more per week (and JetBrains reports that nearly 9 in 10 developers who use AI tools do), imagine what the next generation of these tools will look like. The developers who start building fluency now will compound that advantage over time. The ones who wait will have a steeper learning curve later.
Why some developers resist
It's worth understanding the hesitation. Not everyone avoiding AI tools is being stubborn. Trust is the biggest barrier: according to Sonar's research, 96% of developers say they don't fully trust AI-generated code, and the 2025 Stack Overflow Developer Survey found that 45% of developers cite "AI solutions that are almost right, but not quite" as their top frustration, with 66% saying they spend more time fixing nearly correct AI output.

There are real risks too. One study found that 62% of AI-generated code solutions contain design flaws or known security vulnerabilities. AI models don't understand your application's threat model, internal standards, or compliance requirements, and they can introduce subtle bugs that look correct on the surface but break under pressure.

These are legitimate concerns. But they're arguments for using AI carefully, not for avoiding it entirely.
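To make the "looks correct on the surface" failure mode concrete, here's a sketch of a classic case: a query built by string interpolation, the kind of code an assistant can plausibly produce, passes a happy-path test but is SQL-injectable, while the parameterized form is safe. This uses Python's standard-library sqlite3; the table and column names are made up for illustration.

```python
import sqlite3

# Illustrative in-memory schema; names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name: str):
    # Plausible assistant output: works on normal input, but
    # interpolating user input into SQL is injectable.
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver handles escaping the value.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

# Both look identical on well-behaved input...
assert find_user_unsafe("alice") == find_user_safe("alice")

# ...but crafted input leaks every row through the unsafe version.
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # dumps all users
print(find_user_safe(payload))    # []
```

A reviewer who only checks that the function "returns the right user" would merge both versions; this is exactly why AI output needs security-aware review, not just functional testing.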
The real question: how to use AI safely
The answer isn't to hand the keys over to an LLM and walk away. It's to keep a human in the loop.

Review everything. Google doesn't ship AI-generated code without engineer review, and neither should you. Treat AI output the way you'd treat a pull request from a junior developer: useful, but needs checking.

Understand what you're shipping. AI can generate code faster than you can read it, which is exactly the problem. If you can't explain what a block of code does, you shouldn't merge it. The developers who benefit most from AI tools are experienced ones who can spot mistakes quickly, not beginners who can't tell good code from bad.

Use AI for the right tasks. AI excels at boilerplate, repetitive patterns, test generation, and first drafts. It struggles with architecture decisions, complex business logic, and security-critical code. Play to its strengths.

Layer in safety checks. Automated security scanning, thorough code review, and testing pipelines become even more important when AI is part of your workflow. The speed gains from AI are only valuable if you're not shipping vulnerabilities faster too.
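One cheap way to layer in a safety check is a pre-review gate that scans the added lines of a diff for obvious red flags before a human ever looks at it. The sketch below is a toy illustration under stated assumptions: the three patterns are illustrative placeholders, not a real scanner, and an actual pipeline would run dedicated tools for static analysis, dependency audits, and secret detection instead.

```python
import re

# Toy red-flag patterns; a real pipeline would use proper scanners.
RISKY_PATTERNS = {
    "use of eval/exec": re.compile(r"\b(eval|exec)\s*\("),
    "hardcoded credential": re.compile(r"(password|secret|api_key)\s*=\s*['\"]"),
    "shell injection risk": re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True"),
}

def flag_risky_lines(diff_text: str) -> list[tuple[int, str]]:
    """Scan the added lines of a unified diff and return
    (line_number, reason) pairs for a human reviewer to inspect."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        # Only inspect lines the change adds (skip the +++ file header).
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for reason, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, reason))
    return findings

if __name__ == "__main__":
    sample_diff = (
        "+++ b/app.py\n"
        '+api_key = "sk-live-123"\n'
        "+result = eval(user_input)\n"
        "+print(result)\n"
    )
    for lineno, reason in flag_risky_lines(sample_diff):
        print(f"line {lineno}: {reason}")
```

A gate like this doesn't replace review; it just routes the riskiest AI-generated hunks to a human first, which is the whole point of keeping one in the loop.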
Start now
This isn't a prediction about some distant future. It's already happening: 68% of developers expect employers to require AI tool proficiency in the near future, and companies are already factoring AI literacy into hiring decisions, the same way they once started expecting familiarity with Git or cloud platforms. The developers who treat AI as a tool in their belt, learning its strengths, understanding its limits, and building workflows that account for both, will be the ones who thrive. The ones who refuse to engage will find themselves writing the same code in twice the time, wondering why they keep getting passed over. You don't have to love it. You don't have to use it for everything. But you do have to start.
References
- Alphabet, Q3 2024 Earnings Call, Sundar Pichai on AI-generated code at Google: abc.xyz
- Ars Technica, "Google CEO says over 25% of new Google code is generated by AI": arstechnica.com
- Sonar, "State of Code 2025" developer survey report: sonarsource.com
- ShiftMag, "42% of Code Is Now AI-Assisted": shiftmag.dev
- JetBrains, "The State of Developer Ecosystem 2025": blog.jetbrains.com
- Stack Overflow, "2025 Developer Survey, AI Section": survey.stackoverflow.co
- Stack Overflow Blog, "Developers remain willing but reluctant to use AI": stackoverflow.blog
- EurekAlert / Science, "AI is already writing almost one-third of new software code": eurekalert.org
- Cloud Security Alliance, "Understanding Security Risks in AI-Generated Code": cloudsecurityalliance.org
- Forbes, "AI Powers 25% of Google's Code: What's Next for Software Engineers?": forbes.com