The world’s hardest interview
Software engineering might be the only profession where the interview is harder than the job itself. Doctors don't perform open-heart surgery during their residency interviews. Lawyers aren't asked to argue a Supreme Court case before they're hired. But software engineers? They're expected to solve algorithmic puzzles on a whiteboard, under time pressure, with someone watching their every keystroke, all for a job that will mostly involve reading documentation, debugging legacy code, and arguing about naming conventions in pull requests. The disconnect between what we test for and what we actually do has become one of the industry's most persistent and widely acknowledged problems.
The interview that bears no resemblance to the job
The typical software engineering interview loop at a major tech company looks something like this: a recruiter screen, a phone screen with a coding problem, then an on-site gauntlet of four to six rounds covering data structures, algorithms, system design, and behavioral questions. The centerpiece is almost always the LeetCode-style coding challenge, where candidates are expected to produce optimal solutions to abstract algorithmic problems in 30 to 45 minutes. The actual job looks nothing like this. Day to day, engineers read and modify existing code. They collaborate with teammates. They look things up in documentation. They use Google, Stack Overflow, and now AI assistants. They attend meetings, write design documents, and review pull requests. Occasionally they write new code, though almost never from scratch and almost never involving the kind of algorithmic acrobatics that interviews demand. As one commenter on a popular programming forum put it: "99% of software engineering jobs don't look like LeetCode problems at all." The industry has built an elaborate filtration system that tests for a skill set largely orthogonal to what the job requires.
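For concreteness, here is the kind of problem such a screen rewards, along with the "optimal" answer interviewers expect. This is a representative sketch in Python, not drawn from any particular company's question bank:

```python
def longest_unique_substring(s: str) -> int:
    """Length of the longest substring of s with no repeated characters.

    The "expected" answer: a sliding window with a last-seen index map,
    O(n) time instead of the naive O(n^2) scan of every substring.
    """
    last_seen = {}  # character -> index of its most recent occurrence
    start = 0       # left edge of the current window
    best = 0
    for i, ch in enumerate(s):
        # If ch was already seen inside the current window,
        # move the window's left edge just past that occurrence.
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best

assert longest_unique_substring("abcabcbb") == 3  # "abc"
assert longest_unique_substring("bbbbb") == 1
```

Producing this in half an hour depends less on engineering judgment than on having recently rehearsed the sliding-window trick, which is exactly the preparation effect discussed below.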
Why it got this way
The roots of the problem trace back to the early days of companies like Google, Microsoft, and Amazon. When you're receiving tens of thousands of applications for a limited number of positions, you need a scalable way to filter candidates. Algorithmic puzzles became that filter, not because they're the best predictor of job performance, but because they're easy to standardize and grade. The logic is built around minimizing false positives, meaning companies would rather reject ten qualified candidates than accidentally hire one unqualified one. And when you think about it from a pure cost perspective, this makes a certain kind of sense. A bad hire is expensive: onboarding costs, the drag on team productivity, the difficulty of managing someone out. So the system is calibrated to be skeptical by default. But this framing has a massive blind spot. It assumes that the ability to invert a binary tree on a whiteboard correlates with the ability to ship reliable software in a team environment. Google itself reportedly conducted an internal study and found that interview scores for people they ended up hiring had no meaningful correlation with their on-the-job performance. The very company that popularized this style of interviewing discovered it wasn't predictive.
The LeetCode industrial complex
The clearest sign that something has gone wrong is the existence of an entire industry built around interview preparation. LeetCode, HackerRank, AlgoExpert, Interview Kickstart, and dozens of other platforms exist because experienced, competent engineers cannot pass interviews at major companies without weeks or months of dedicated practice. Let that sink in. People who have been writing production software for years, who have shipped features used by millions, who have mentored junior engineers and led complex projects, need to set aside their evenings and weekends to grind through hundreds of practice problems just to get through the front door. This isn't testing for competence. It's testing for preparation. The candidates who perform best aren't necessarily the best engineers. They're the ones who had the time, resources, and motivation to treat interview prep like a second job. As one hiring manager observed, "LeetCode is useless because people can just grind it." The signal it provides is how much someone practiced for the interview, not how well they'll perform in the role.
The human cost
The toll on candidates is real and well-documented. Software engineering interviews create extraordinary anxiety because the stakes are high and the process feels arbitrary. You might be a senior engineer with a decade of experience, but if you blank on the optimal approach to a dynamic programming problem you haven't seen in five years, you're out. Experienced engineers routinely describe feeling like imposters during the interview process. They know they're good at their jobs. Their performance reviews confirm it. Their colleagues respect them. But the interview format makes them feel like they're starting from zero. One engineer described being asked about DevOps topics in a frontend interview and eventually having to ask, "Are you sure you're hiring for a frontend position?" There's also the "gotcha" mentality that permeates many interview cultures. Some interviewers seem more interested in finding what candidates don't know than in discovering what they do know. The process becomes adversarial rather than collaborative, which is ironic given that collaboration is arguably the most important skill for the actual job.
The AI wrinkle
The rise of AI tools has made the situation even more complicated. Companies are now concerned about candidates using AI to cheat during remote coding interviews. Reports suggest that a significant percentage of interviewers at major tech companies suspect candidates of using AI assistance during remote sessions. The response from many companies has been to make interviews harder, not better. Instead of rethinking whether LeetCode-style problems are the right tool, they're doubling down with tougher questions to try to outpace AI assistance. This creates a vicious cycle: harder interviews require more preparation, which increases the gap between interview performance and job performance, which makes the whole process even less predictive. Meanwhile, the actual job increasingly involves working alongside AI tools. Engineers are expected to use Copilot, ChatGPT, and other AI assistants in their daily work. So we're testing candidates on their ability to solve problems without AI, for a job that will require them to solve problems with AI. The irony is thick.
What better looks like
The good news is that alternatives exist. Some companies have already moved away from the LeetCode grind and are seeing results:

- Pair programming sessions involve working through a realistic problem together with an interviewer, using real tools, with access to documentation. This mirrors actual work and lets interviewers assess communication, debugging skills, and how candidates approach unfamiliar problems.
- Take-home projects give candidates a realistic task to complete on their own time, typically with a few days' deadline. The follow-up discussion about their solution, the tradeoffs they made, and how they'd extend it often reveals far more than a timed whiteboard session.
- Code review exercises present candidates with existing code and ask them to review it, identify issues, and suggest improvements. This is something engineers actually do every day, and it tests for practical judgment (see the sketch after this list).
- Trial periods or contract-to-hire arrangements let both sides evaluate fit based on actual work. This is the ultimate test: can this person do the job? But it requires more commitment from both parties and isn't always practical.
- Structured interviewing, which Google's own research team has advocated, gives every candidate the same questions evaluated against the same rubric, with interviewers trained to reduce bias. Even within the existing framework, adding structure dramatically improves the signal quality.
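As an illustration of the code review format, an exercise might hand the candidate a short snippet like the following and ask what they would change. This is a hypothetical sketch in Python, with the function, names, and bugs invented for illustration; the comments mark the kinds of issues a strong candidate tends to surface:

```python
import sqlite3

def get_user_orders(db_path, user_id, statuses=[]):
    # Flag 1: mutable default argument. The same list object is shared
    # across every call, so any mutation leaks state between requests.
    conn = sqlite3.connect(db_path)
    query = "SELECT * FROM orders WHERE user_id = " + str(user_id)
    # Flag 2: SQL built by string concatenation. If user_id or statuses
    # ever come from a request, this is an injection vector; use
    # parameterized queries ("?" placeholders) instead.
    if statuses:
        quoted = ", ".join("'" + s + "'" for s in statuses)
        query += " AND status IN (" + quoted + ")"
    rows = conn.execute(query).fetchall()
    # Flag 3: the connection is never closed, and an exception above
    # leaks it entirely. A `with` block or try/finally covers both paths.
    return rows
```

Ten minutes of discussion around a snippet like this reveals how a candidate thinks about safety, resource handling, and maintainability, with no trick recall required.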
The real test
The fundamental question the industry needs to grapple with is this: what are we actually trying to learn about a candidate? If the answer is "can they solve abstract algorithmic problems under pressure," then the current system works fine. But if the answer is "can they build and maintain software effectively as part of a team," then we need to test for that directly. The best engineers I've worked with aren't necessarily the ones who can rattle off the time complexity of every sorting algorithm. They're the ones who ask good questions, who think carefully about edge cases in real systems, who write code that other people can understand, who know when to push back on requirements and when to ship something imperfect. None of that shows up in a 45-minute LeetCode session. Software engineering interviews are hard not because the job is hard in that specific way, but because we've built a testing regime that optimizes for the wrong things. The interview has become its own skill, separate from the skill of being a good engineer. And until the industry decides to close that gap, we'll keep losing great engineers to a process that was never designed to find them.
References
- The Software Engineering Interview Process is Broken, Here's Why (Scramble IT, Substack)
- Why Technical Interviews Are Broken (And What We Can Do About It) (Level Up Coding, 2026)
- Why Software Engineering Interviews Are Often Harder Than the Actual Job (We Are Developers, Medium, 2025)
- Six Coding Interview Formats to Replace LeetCode (Michael Hoffman, Medium)
- Why is the Coding Interview Broken? (Lomash Kumar, Medium)
- Use Structured Interviewing (Google re:Work)
- Engineers Who Interview Well but Underperform After Joining (Correct Context, 2026)
- The Problem with LeetCode Interviews (Melvin Oostendorp, 2024)
- How AI is Changing Tech Interviews (The Pragmatic Engineer)