The Junior Engineer Gap

Senior engineers are farming out their implementation work to AI. That’s efficient for them. But the work they’re offloading is the same work that junior engineers used to learn on.

So where does that leave the juniors?

The learning path disappeared

Juniors learn on the job. That’s how it has always worked.

You start with small tasks, make mistakes in a safe environment, get feedback from someone more experienced, and gradually take on more complex work. The tasks themselves are the training ground.

When a senior engineer can hand that work to an AI and get it back in minutes, the junior has nothing to cut their teeth on. The role as it existed doesn’t exist anymore. Not because juniors aren’t needed, but because the entry point they relied on has been automated away.

The dangerous part

There’s a worse version of this scenario.

Junior engineers who grow up with AI tools will naturally use them to do the work. That’s fine if you already understand the fundamentals. If you know what good looks like, you can evaluate what the AI gives you.

But if you don’t know what you don’t know, you’ll accept whatever the AI produces and assume it’s correct.

“OSHA laws are built on bodies”

Safety standards in every industry exist because someone got hurt first. Software doesn’t have the same physical consequences, but the principle holds. Systems built by people who can’t recognise what can go wrong will eventually go wrong. The question is how much damage that causes when it happens.

So what do you do about it?

This isn’t a problem that fixes itself. If your company is moving faster because of AI, it may not give you the time to learn that you would normally get on the job. The learning that used to happen naturally now has to be deliberate.

Think of it like going to the gym. Nobody gets fit by accident. You have to make time for it and show up consistently.

Be a generalist with depth

The T-shaped skillset has been talked about for years, but the shape is changing. Being broad across the top isn’t enough anymore. You need varying levels of depth down multiple verticals.

If you only know one thing deeply, AI can probably do that one thing. If you understand how several domains connect, how security affects architecture, how regulations shape design, how integration constraints influence what’s possible, that’s harder to automate. The value is in the connections between areas, not mastery of one.

Learn how to question the machine

When an AI gives you an answer, don’t just use it. Pick one or two points from the response and go deeper. Research them independently. Understand why the AI said what it said, and whether it’s actually right.

This builds two skills at once. You learn the subject matter, and you develop an instinct for when the AI is wrong. Both of those become more valuable over time, not less.

Context engineering matters

Understanding how to shape what the model knows is becoming a skill in its own right. What you put in the context window, how you structure it, what you leave out, all of this affects the quality of what comes back.

This is less about prompt engineering (writing clever instructions) and more about context engineering (giving the model the right information to work with). The people who understand this will get better results from the same tools everyone else is using.
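As a minimal sketch of what “shaping what the model knows” can mean in practice: assembling the context window deliberately, with a budget, a ranking, and an explicit decision about what gets left out. The function, the ranking heuristic, and the character budget below are all illustrative placeholders, not any specific SDK’s API.

```python
# Sketch: context engineering as deliberate assembly of what the model sees.
# The budget and the crude keyword-overlap ranking are illustrative only.

def build_context(task: str, snippets: list[str], budget_chars: int = 2000) -> str:
    """Assemble a prompt: instructions first, then only the snippets
    that fit the budget, ranked by naive relevance to the task."""
    task_words = set(task.lower().split())
    ranked = sorted(
        snippets,
        key=lambda s: len(task_words & set(s.lower().split())),
        reverse=True,
    )
    parts = [
        "You are assisting with a code migration.",
        f"Task: {task}",
        "Relevant context:",
    ]
    used = sum(len(p) for p in parts)
    for snip in ranked:
        if used + len(snip) > budget_chars:
            continue  # leaving things out is part of the job too
        parts.append(f"- {snip}")
        used += len(snip)
    return "\n".join(parts)
```

The point isn’t the ranking heuristic, which is deliberately crude here. It’s that every line of the context window is the result of a decision you made, not whatever happened to be lying around.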

Language, regulations, and ethics

These are the areas where humans stay in the loop longest.

That means understanding how to communicate clearly, knowing the regulatory landscape your work operates in, and being able to reason about the ethical implications of what you’re building.

AI can help with all of these, but it can’t own them. The accountability still sits with a person.

For junior engineers looking for where to invest their time, these areas will hold their value longer than any specific technical skill.

The responsibility isn’t only on juniors

If you’re senior, this is your problem too.

The juniors coming up behind you are the ones who will eventually maintain what you build. If they never developed the foundational understanding because the learning path was automated away, that becomes everyone’s problem.

That means finding ways to keep juniors in the loop, giving them meaningful work that AI assists rather than replaces, and creating space for learning even when the pressure is to move fast.

That’s part of the job now.

Build the Tool, Not the Thing

Three weeks of work became two days. Twenty minutes of manual effort became three seconds.

Not by working harder or hiring more people. By changing the question from “how can AI do this work” to “how can AI build something that does this work.”

The migration problem

A common challenge we face with watsonx Orchestrate is migration. Clients have existing virtual assistants built on older platforms, and they need to move them across.

That means reading through the old assistant’s configuration, understanding the use cases it handles, and then rebuilding each one as a “flow”: a structured sequence of steps the new assistant follows.

It’s slow work even in the best case.

An engineer might spend days just understanding a handful of use cases from the documentation, then more time manually creating each flow, context switching between docs and the flow editor.

For a full migration, you’re looking at weeks.

And sometimes the original assistant wasn’t built as well as it could have been, maybe rushed out under pressure to get something live. When that’s the case, standard migration tooling falls down because what you’re migrating from was never well structured to begin with. That adds another layer to the problem.

“Automate everything” is a mantra I’ve had for as long as I can remember. AI can automate, but it’s expensive if you’re throwing tokens at the same kind of problem over and over. So instead of using IBM Bob to build each flow one at a time, what if I used Bob to build an application that builds the flows?

Two days later, I had it. An application that reads the old assistant’s files or documentation, identifies the use cases, and generates complete Orchestrate flows you can deploy and edit within the platform. It’s not perfect, probably never will be, but it doesn’t need to be. It needs to be fast and get you 80-90% of the way there so a human can refine the last mile.
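To make the shape of that tool concrete, here is a hypothetical sketch of the pipeline: read a legacy config, extract use cases, emit flow definitions. The legacy-config structure and the flow schema below are invented for illustration; watsonx Orchestrate’s actual flow format is different, and the real application does considerably more than this.

```python
# Hypothetical sketch of the tool's shape, not the real implementation.
# Both the legacy-config layout and the flow schema are made-up placeholders.

def extract_use_cases(legacy_config: dict) -> list[dict]:
    """Pull the use cases out of an old assistant's exported config."""
    return [
        {"name": intent["name"], "steps": intent.get("responses", [])}
        for intent in legacy_config.get("intents", [])
    ]

def build_flow(use_case: dict) -> dict:
    """Turn one use case into a deployable flow definition."""
    return {
        "flow": use_case["name"],
        "steps": [
            {"order": i, "action": step}
            for i, step in enumerate(use_case["steps"], 1)
        ],
    }

def migrate(legacy_config: dict) -> list[dict]:
    # Once the tool exists, the whole migration is a mechanical pass.
    return [build_flow(uc) for uc in extract_use_cases(legacy_config)]
```

The payoff is in the last function: what used to be an engineer reading documentation and clicking through a flow editor becomes a single pass over the config, run as many times as you need.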

Building this application from scratch, without AI, would have taken me about three weeks. With AI helping me build it, two days.

But that’s just the cost of creating the tool. The payoff is what happens every time someone uses it.

As a test, I picked a use case that would analyse a submitted document, ask the user for information based on what it found, and then call an external service with that information. Doing this manually in the Orchestrate flow editor, knowing the platform well, took just under twenty minutes.

Using the application? Five steps of plain-language instructions (a minute), and the flow was created and deployed to a test server in two to three seconds. Ready to validate straight away.

Twenty minutes versus roughly one minute.

Every new use case that goes through the tool instead of being built by hand is another twenty minutes saved. Across a migration with dozens of use cases, you’re not saving hours. You’re saving weeks.

The right question

Instead of asking “how can I get AI to do this work?”, ask “how can I get AI to build something that does this work?”

The first question gives you a one-off result. The second gives you an instrument. Something you can use again and again, hand to someone else, or build on top of.

What this changes

When the cost of building a tool drops to near zero, you start building things you’d never have justified before.

That one-off migration project that needed a custom tool? You’d never have allocated three weeks of engineering time to build it. But two days? That’s a different calculation entirely.

It also changes who can build. Previously, building a custom migration application would have meant writing a requirements document and getting the bandwidth or resources to build it, all the while hoping your vision was reflected in what got created.

Now the path from problem to working tool fits in a sitting, and something can be in clients’ hands before the old approach would have finished the planning phase.

The barrier between “I know what needs to exist” and “it exists” has almost disappeared. If your job involves understanding problems and designing solutions, this matters.

The delivery model changed

For most of the history of software delivery, writing code was the expensive part. You hired engineers, gave them requirements, and waited weeks or months for something you could test. The entire delivery model, the sprint cadence, the estimation rituals, the resourcing conversations, all of it was built around the assumption that building things takes a long time.

That assumption is breaking down.

With AI, the cost of producing working code has dropped close to zero. Not the cost of good software, but the cost of getting from idea to something that runs and can be evaluated. The gap between “we should try this” and “here, try this” has collapsed.

What speeds up

Build phases compress. Work that used to fill a two-week sprint can happen in an afternoon. A Solution Architect can now own the full delivery of a proof of concept, from requirements through to a working prototype, without waiting for engineering bandwidth.

The design-build-test cycle becomes something you can run multiple times in a day instead of once per sprint. Want to test three different approaches to a problem? You don’t have to pick one and commit. Build all three, evaluate, and move forward with the one that works.

This changes how you scope work. Estimation based on “how long will this take to build” starts to lose meaning when the build phase is measured in hours. The harder questions become: what should we build, and how will we know it works?

What stays the same

Requirements gathering is still a human job.

The AI doesn’t know what to build at a systems level. It doesn’t understand your non-functional requirements, your compliance constraints, your integration landscape. You still need someone who can look at a problem, understand the context around it, and define what “done” looks like.

I think this part actually becomes more important. When building is cheap, you can afford to build the wrong thing faster than ever. Clear requirements are the guardrail.

Testing and integration remain primarily human tasks.

LLMs tend to cheat when it comes to creating tests. They write tests that confirm the code works as written rather than tests that challenge whether it should work that way.

There’s a difference between “does this function return the expected output” and “does this system behave correctly when a user does something unexpected.” The first is easy to automate. The second requires someone who understands what can go wrong.

The pricing problem

Here’s where it gets uncomfortable for delivery organisations.

If the cost of writing code has dropped to near zero, and feedback loops have shrunk from weeks to minutes, you can no longer apply the normal timeframes to delivery. Clients will start asking why a proof of concept takes six weeks when the technology exists to produce one in days.

The honest answer is that much of what we charge for was never really about writing code. It was about understanding the problem, designing the right solution, integrating with existing systems, and making sure everything works under real conditions. Those things still take time.

But the optics have changed. When your client knows that AI can generate working code in seconds, a six-week timeline needs a clear justification for where that time actually goes. The teams that can articulate that clearly will be fine. The teams that can’t will find themselves in difficult conversations.

Where this leaves us

The delivery model is shifting from “how long to build” to “how fast can we learn.” The competitive advantage moves from execution speed to decision quality.

The tools have changed. The question is whether the process changes with them.

In a world where everyone has access to the same AI, the advantage doesn’t go to the person who uses it the most. It goes to the person who uses it to build the most useful things.