Illia Smoliienko, Chief Software Officer, Waites.
In 2024, Google CEO Sundar Pichai said that AI was already generating more than 25% of the code for Google’s products, with engineers reviewing and directing the output. In August 2025, Harvard researchers found that companies actively integrating AI into their workflows see junior headcount drop by roughly 9% compared with firms that don’t. They simply stop opening the positions.
On the surface, this looks rational. Why invest in roles that don’t pay off right away and need a long runway of training when AI can take on part of the work? But these are the roles where expertise takes shape and where people learn to read context and make decisions. If they disappear, who, 10 years from now, will be running the teams that are automating so efficiently today?
In this article, I want to dig into exactly how AI automation affects the leadership pipeline.
You can’t generate experience.
An entry-level developer used to do low-stakes grunt work: small bugs, minor tweaks to functionality, simple tests and documentation. AI handles much of that now, and the work available to juniors has shrunk accordingly.
The logical result is a shrinking pool of entry-level roles, since those are the positions whose responsibilities get automated first. In the U.S., entry-level job postings are down 35%. According to venture firm SignalFire, new-graduate hiring at the 15 largest tech companies by market cap has fallen by more than 50% since 2019. Before the pandemic, graduates made up about 15% of total hires. Today it’s 7%.
Under pressure to show productivity gains, tech teams don’t really have a choice. Investors tend to reward AI adoption as a way to grow revenue and cut costs.
But there’s a catch. Working effectively with AI requires critically assessing what it gives you, and that takes judgment: the ability to make sound calls when there’s no obviously right answer. So where does judgment come from?
Judgment is built on the job, through entry-level work. The routine tasks now being automated were the training ground where junior engineers learned to think in systems: how components fit together, where bottlenecks show up, which decisions hold an architecture together and which break it. A junior who made their own calls and watched the consequences play out would gradually build a feel for what a good outcome looks like and how to get there.
A muscle you don’t train will atrophy.
I once asked the software tech lead on my team how AI tools had changed mentoring. In one sense, he said, they had made his job easier; juniors came to him less often with basic questions because they asked ChatGPT instead. But on harder problems, something else was happening.
Developers would show up with answers the AI had given them and present them as the right solution, without being able to explain why. The answer might work, but that isn’t enough. An engineer has to see how their solution will affect the architecture, whether it introduces new dependencies and whether it creates technical debt down the line.
The way junior engineers learn is also changing. Instead of working their way toward a solution, they increasingly work with one that has already been generated, and they fall into what I call the false expertise trap. When an answer arrives quickly and sounds convincing, it feels like you understand the problem more deeply than you actually do.
Right now, AI is doing two things at once: accelerating experienced specialists while taking from beginners the very experience that expertise is built on. Over time, people will end up managing processes they don’t fully understand.
Gartner predicts that by the end of 2026, 50% of global organizations will introduce “AI-free” assessments to gauge the actual level of independent thinking on their teams. But seeing that the level has dropped is one thing. Knowing how to bring it back is another.
What can you do so your team keeps growing?
The shrinking of entry-level roles doesn’t look critical yet, but its long-term consequences are hard to gauge. Still, the way AI is already reshaping how teams work is signal enough: If we don’t rethink how we develop people now, in a few years companies may be short on people who can make decisions under uncertainty and take responsibility for them.
That means deliberately building conditions where people keep growing instead of handing their thinking off to AI. Here’s what I’ve found works:
• Teach people to argue with AI. On my team, we have a rule: Don’t treat an AI answer as a finished solution. Ask why. How did you arrive at this? What are the downstream effects? What are the alternatives? Once that becomes a habit, people don’t lose their engineering instincts; they start considering a wider range of options.
• Create room for independent decisions. Give juniors problems without an obvious answer: an intermittent bug with no clear cause, a choice between two architectural approaches with real trade-offs or a production incident without a tidy playbook. Situations where AI can suggest an option, but a human has to own the call.
• Mix experience levels around real problems. Judgment is built by watching how an experienced person thinks through a hard moment: when they pause to ask a clarifying question and when they decide to act without the full picture. That happens when junior and senior engineers work together, not as mentor and student, but as a team with different levels of context.
• Create “AI-free zones” for development. Give the team problems they have to solve without AI. Not as punishment or as a rejection of the technology, but as a deliberate change of pace. Otherwise the ability to work independently atrophies.
The best AI strategy isn’t only about the technology. It’s also about the people—about who you’re raising up to run it.
