The junior engineer problem is the conversation most engineering leaders are either avoiding entirely or having with insufficient honesty. I know this because I have been in many of those conversations, and I have had versions of them myself that I later recognised as insufficiently direct.

The reason for the avoidance is understandable. The problem is uncomfortable to name clearly, because naming it clearly means acknowledging a trade-off that most organisations are making implicitly without having agreed to make it explicitly: trading long-term capability development for short-term output velocity.

The research is now clear enough that this trade-off deserves to be named and examined rather than left to accumulate in the background.


What Anthropic’s own research found

In 2025, Anthropic published research on how AI assistance affects coding skill formation. The findings are worth reading carefully, particularly given the source - this is the company that makes Claude, with a clear commercial interest in AI coding tools being seen positively. The researchers nonetheless reported what they found.

Participants using AI assistance scored 17% lower on comprehension tests than those who did not - equivalent, the researchers noted, to nearly two letter grades. They completed tasks only marginally faster. The productivity gain that AI assistance is typically credited with was not statistically significant in the context of learning tasks.

The finding that stands out most is about debugging. The largest performance gap between AI-assisted and non-AI-assisted developers appeared in debugging questions. AI assistance particularly impairs the ability to identify and diagnose errors in code - precisely the capability that experienced engineers rely on most, and that junior engineers need to develop most urgently.

The counterintuitive piece - and this is the part worth holding onto - is that the study did not find that all AI use was equally damaging. It found that how engineers used the tools determined the outcome. High performers in the study used AI strategically: asking follow-up questions, requesting explanations, using the tool to explore concepts. Low performers delegated - they gave the tool a problem and accepted what came back.

The researchers’ framing is precise: AI accelerates productivity on existing skills but may hinder skill acquisition. These are different things, and the distinction has major implications for how you structure AI use in teams with junior engineers.


The employment signal you should not ignore

In the same year, data on the labour market for entry-level software developers began to emerge that is consistent with the mechanism the Anthropic research describes.

A Stanford Digital Economy Study, cited in Stack Overflow’s analysis of the developer labour market, found that employment among software developers aged 22 to 25 had fallen nearly 20% between its late-2022 peak and mid-2025. Separately, entry-level technical hiring broadly declined 25% year-over-year in 2024.

The pattern that emerges from this data is consistent: the work that used to be done by junior engineers - the well-scoped implementation tasks, the small features, the isolated bug fixes - is increasingly being done by AI. The roles that remain, and the ones that are growing, require the ability to evaluate and orchestrate AI output rather than produce raw implementation.

This is the structural shift underneath the surface productivity numbers. It is not that junior engineers are less capable than they were. It is that the entry-level tasks that historically trained them are being automated, while the evaluation capabilities those tasks were building toward are now required earlier and with less runway to develop them.

The 2026 Anthropic Agentic Coding Trends Report reinforces this: as agentic tools take on longer autonomous workstreams, the skills premium shifts further toward the ability to specify problems clearly, evaluate outputs critically, and catch failures that the agent could not anticipate. These are skills built through years of doing the underlying work. They are not acquired by accepting AI suggestions.


The counterargument - and why it is partly right

There is a version of the pushback I hear regularly that goes something like this: every generation of engineers has been told that the abstraction layer they rely on is making them worse. Memory management was automated. ORMs replaced hand-written SQL. Developers stopped managing their own servers. Why should AI be different?

It is a fair challenge. The book addresses it in full, but the short answer is that previous abstractions removed mechanical toil while leaving reasoning intact. AI assistance is different in kind - it generates the reasoning, or at least the appearance of it. That distinction matters enormously for what junior engineers are or are not building when they use it.

The people making the historical analogy are pointing at something real, though: the skills profile of a successful engineer is not fixed. What it means to be a strong engineer in 2026 is genuinely different from what it meant in 2016. The question is not whether the profile changes - it will - but whether engineers arriving in the profession in 2026 are building the foundations for that evolved profile, or skipping them.


The learning accelerant argument - and what it requires

The Anthropic research contains a finding that is easy to miss because it is less dramatic than the headline numbers: junior engineers who used AI strategically - asking for explanations, exploring concepts, treating the tool as a tutor rather than a task-completer - did not show the same comprehension penalty. For this group, the research suggests AI can be a genuine learning accelerant.

This is not a small finding. It means the question is not “should junior engineers use AI tools?” It is “under what conditions does their use of AI build capability rather than substitute for it?”

The conditions that distinguish the two:

Purpose at the moment of use. Using AI to complete a task you do not understand compounds the knowledge gap. Using AI to understand something you are working through - getting it to explain an unfamiliar pattern, asking why one approach is preferred over another, having it walk through the failure modes of a piece of logic - is a different activity. The output is similar. The learning is opposite.

Ownership of the explanation. In my organisation, we require junior engineers to be able to explain every non-trivial decision in a PR they submit, including decisions that were AI-assisted. Not as an interrogation, but as a calibration check. If they cannot explain it, the PR comes back. The effect of this norm is significant: engineers who know they will need to explain their code read it more carefully. They question AI suggestions more often. They use the tool differently than they would if the only bar was “does it work?”

Structured time without the tools. I have written elsewhere about what I call structured non-AI time - deliberate periods where AI coding assistance is switched off, not as punishment, but as practice. The reaction when we introduced this was informative. Some engineers found it uncomfortable in a way that surprised even them. That discomfort is precisely what the practice is designed to surface and address. Debugging a problem for two hours without AI assistance, and then solving it, builds something that accepting an AI solution in thirty seconds does not.

The JetBrains State of Developer Ecosystem 2025 found that 85% of developers now regularly use AI tools for coding. Among that 85%, the variation in how thoughtfully they use them is enormous - and it is the single biggest predictor of whether individuals are building or eroding their underlying capability.

The full framework is in the book

Structured non-AI time, the code ownership structure, and the honest conversation with your business are developed fully in The AI-Ready Engineering Team.

Get the Book on Amazon

What this means for your progression framework

One concrete action that most organisations have not taken but should: update your criteria for what it means to be ready to progress from junior to mid-level in an AI era.

The pre-AI criteria for this transition typically included things like: can complete features with minimal oversight, produces code that passes review without significant rework, demonstrates understanding of the codebase architecture. These are still relevant. But they are no longer sufficient.

The additional criteria worth adding:

Can debug a production problem without AI assistance. Not because they would be expected to in a real incident - in a real incident, use every tool available - but because the ability to debug without the tool is the clearest signal that they have built the underlying mental models rather than depending on the tool to produce them.

Can identify failures in AI-generated code. Give them a piece of AI-generated code with a known problem. Can they find it? Can they explain why it is wrong? This is the capability that experienced engineers will rely on most as AI becomes more prevalent, and it is not built by accepting AI suggestions - it is built by understanding code deeply enough to evaluate it.
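To make the exercise concrete, here is a hypothetical example of the kind of snippet you might hand a candidate - plausible-looking, apparently tested, and carrying one subtle defect. The function name and scenario are invented for illustration, not taken from any real codebase.

```python
# Exercise snippet: review this function as if it came back from an AI
# assistant. It passes a casual one-off test. Find the problem and
# explain why it is wrong.

def deduplicate_events(events, seen=set()):
    """Return only the events whose id has not been seen before."""
    fresh = []
    for event in events:
        if event["id"] not in seen:
            seen.add(event["id"])
            fresh.append(event)
    return fresh
```

The defect is the mutable default argument: `seen=set()` is evaluated once, at function definition, so the set of seen ids silently persists across independent calls. The first call behaves correctly; a later, unrelated call drops events that share ids with earlier input. An engineer who can spot this, and explain the mechanism rather than just the symptom, is demonstrating exactly the evaluation capability the criterion is testing for.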

Can explain the architectural decisions in their own AI-assisted PRs. Not the code line by line, but the choices: why this approach rather than another, how it fits with the existing system, what trade-offs were made. An engineer who can do this has genuinely understood their work. An engineer who cannot has used AI as a task-completer.

Updating these criteria is not an HR administrative task. It is a signal to every engineer on your team about what is valued - and it gives junior engineers a clear picture of what they are developing toward rather than just what they are being asked to avoid.


The honest conversation with your business

At some point you will need to have a version of this conversation with your leadership team or board. It is uncomfortable because the short-term productivity signal from AI tools is visible and the long-term capability signal is not.

The honest version of the argument: AI tools are making our current engineers more productive. They are also reducing the learning that turns our junior engineers into senior engineers. If we do not invest deliberately in that learning - in structured practice, in mentoring bandwidth from seniors, in the willingness to accept that some junior output is supervised development rather than delivery - then in three to five years we will face a senior engineer deficit that no amount of AI tooling can compensate for.

The uncomfortable corollary: the organisations that are not managing this carefully are effectively running a labour-market experiment on themselves, and the results will show up in their teams’ capability in 2028 and 2029.

The risk is not hypothetical. The Anthropic research already demonstrates the mechanism in a controlled setting - comprehension drops, debugging capability atrophies, and the dependency builds faster than it is noticed. The timeline on which it shows up in team capability across the industry is the only thing that remains uncertain.


What the best junior engineers look like in 2026

I want to end with the positive version of this, because the risk framing alone is only half the picture.

The junior engineers who are thriving in 2026 - the ones building genuine capability rather than tool dependency - share a specific profile. They use AI tools extensively and fluently. They are not the resisters. But they use them differently than their peers who are building dependency.

They treat every AI suggestion as a first draft from a capable but context-free collaborator. They do not merge code they cannot explain. They use the tool to explore and understand, not just to generate. When the tool is wrong - and they notice it is wrong - they treat that as a learning opportunity rather than a reason to prompt again.

These engineers are also, in my experience, the ones most willing to spend time without the tools. They have enough experience using AI to know what it does well and where it fails, because they have tested both edges. That calibrated understanding is what separates an AI-ready engineer from an AI-dependent one - and it takes time, practice, and deliberate structure to build.

The leaders who will be glad they paid attention to this in 2026 are the ones whose teams in 2030 are led by engineers who built their foundations properly. The leaders who did not pay attention will be facing a senior engineer pipeline problem and wondering how it happened.

It happened in the decisions made - or not made - about how junior engineers used AI tools in the years when the habits were forming.


Research cited in this article

Research cited in this article includes Anthropic’s study on AI assistance and coding skill formation, the Stanford Digital Economy Study on developer employment trends (as cited in Stack Overflow’s analysis of AI and Gen Z developer careers), the JetBrains State of Developer Ecosystem 2025, and the 2026 Anthropic Agentic Coding Trends Report. The full framework for deliberate learning - structured non-AI time, the code ownership structure, the honest conversation with your business - is developed in The AI-Ready Engineering Team.


Russell Ward is an engineering leader and CTO with over 20 years’ experience building and scaling software engineering teams globally. He writes about engineering leadership, AI adoption, and distributed teams. Find him on LinkedIn.