Three weeks of work became two days. Twenty minutes of manual effort became three seconds.
Not by working harder or hiring more people. By changing the question from “how can AI do this work” to “how can AI build something that does this work.”
The migration problem
A common challenge we face with watsonx Orchestrate is migration. Clients have existing virtual assistants built on older platforms, and they need to move them across.
That means reading through the old assistant’s configuration, understanding the use cases it handles, and then rebuilding each one as a “flow”: a structured sequence of steps the new assistant follows.
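To make “flow” concrete, here is a deliberately simplified sketch of what such a structured sequence might look like. The shape, field names, and step types are invented for illustration only; this is not the actual watsonx Orchestrate schema.

```python
# Illustrative only: a hypothetical, simplified flow structure,
# NOT the actual watsonx Orchestrate schema.
flow = {
    "name": "reset_password",
    "steps": [
        {"type": "collect_input", "prompt": "What is your employee ID?",
         "output": "employee_id"},
        {"type": "call_service", "service": "identity_api",
         "inputs": ["employee_id"], "output": "reset_link"},
        {"type": "respond", "template": "A reset link has been sent: {reset_link}"},
    ],
}

# What makes the sequence "structured": each step may only consume
# outputs produced by the steps before it.
available = set()
for step in flow["steps"]:
    assert all(i in available for i in step.get("inputs", []))
    if "output" in step:
        available.add(step["output"])
```

The point of the structure is that ordering carries meaning: migrating an old assistant means recovering this kind of dependency chain from whatever the legacy platform left behind.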
It’s slow work even in the best case.
An engineer might spend days just understanding a handful of use cases from the documentation, then more time manually creating each flow, context-switching between the docs and the flow editor.
For a full migration, you’re looking at weeks.
And sometimes the original assistant wasn’t built as well as it could have been, perhaps because it was rushed out under pressure to get something live. When that’s the case, standard migration tooling falls down, because what you’re migrating from was never well structured to begin with. That adds another layer to the problem.
“Automate everything” has been a mantra of mine for as long as I can remember. AI can automate, but it’s expensive if you’re throwing tokens at the same kind of problem over and over. So instead of using IBM Bob to build each flow one at a time, what if I used Bob to build an application that builds the flows?
Two days later, I had it. An application that reads the old assistant’s files or documentation, identifies the use cases, and generates complete Orchestrate flows you can deploy and edit within the platform. It’s not perfect, probably never will be, but it doesn’t need to be. It needs to be fast and get you 80-90% of the way there so a human can refine the last mile.
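The article doesn’t describe the application’s internals, but its overall shape is a two-stage pipeline: extract use cases from the legacy export, then emit a flow per use case. The sketch below shows that shape with invented names and formats; it is not the real tool and not the Orchestrate API.

```python
# Hypothetical sketch of the generator's two stages. All field names
# and formats here are invented for illustration.

def extract_use_cases(old_config: dict) -> list[dict]:
    """Stage 1: pull use cases out of a legacy assistant's exported config."""
    return [
        {"name": intent["name"], "utterances": intent.get("examples", [])}
        for intent in old_config.get("intents", [])
    ]

def generate_flow(use_case: dict) -> dict:
    """Stage 2: emit a minimal flow definition for one use case."""
    return {
        "name": use_case["name"],
        "steps": [
            {"type": "classify", "utterances": use_case["utterances"]},
            {"type": "respond", "template": f"Handling {use_case['name']}"},
        ],
    }

legacy = {"intents": [{"name": "check_balance",
                       "examples": ["what's my balance"]}]}
flows = [generate_flow(uc) for uc in extract_use_cases(legacy)]
```

Once the pipeline exists, every additional use case is marginal cost close to zero, which is exactly why building the tool beats building each flow by hand.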
Building this application from scratch, without AI, would have taken me about three weeks. With AI helping me build it, two days.
But that’s just the cost of creating the tool. The payoff is what happens every time someone uses it.
As a test, I picked a use case that would analyse a submitted document, ask the user for information based on what it found, and then call an external service with that information. Doing this manually in the Orchestrate flow editor, knowing the platform well, took just under twenty minutes.
Using the application? Five steps of plain-language instructions (a minute), and the flow was created and deployed to a test server in two to three seconds. Ready to validate straight away.
Twenty minutes versus roughly one minute.
Every new use case that goes through the tool instead of being built by hand is another twenty minutes saved. Across a migration with dozens of use cases, you’re not saving hours. You’re saving weeks.
The right question
Instead of asking “how can I get AI to do this work?”, ask “how can I get AI to build something that does this work?”
The first question gives you a one-off result. The second gives you an instrument. Something you can use again and again, hand to someone else, or build on top of.
What this changes
When the cost of building a tool drops to near zero, you start building things you’d never have justified before.
That one-off migration project that needed a custom tool? You’d never have allocated three weeks of engineering time to build it. But two days? That’s a different calculation entirely.
It also changes who can build. Previously, building a custom migration application would have meant writing a requirements document and securing the bandwidth or resources to build it, all the while hoping your vision would be reflected in what got created.
Now the path from problem to working tool fits in a sitting, and something can be in clients’ hands before the old approach would have finished the planning phase.
The barrier between “I know what needs to exist” and “it exists” has almost disappeared. If your job involves understanding problems and designing solutions, this matters.
The delivery model changed
For most of the history of software delivery, writing code was the expensive part. You hired engineers, gave them requirements, and waited weeks or months for something you could test. The entire delivery model, the sprint cadence, the estimation rituals, the resourcing conversations, all of it was built around the assumption that building things takes a long time.
That assumption is breaking down.
With AI, the cost of producing working code has dropped close to zero. Not the cost of good software, but the cost of getting from idea to something that runs and can be evaluated. The gap between “we should try this” and “here, try this” has collapsed.
What speeds up
Build phases compress. Work that used to fill a two-week sprint can happen in an afternoon. A Solution Architect can now own the full delivery of a proof of concept, from requirements through to a working prototype, without waiting for engineering bandwidth.
The design-build-test cycle becomes something you can run multiple times in a day instead of once per sprint. Want to test three different approaches to a problem? You don’t have to pick one and commit. Build all three, evaluate, and move forward with the one that works.
This changes how you scope work. Estimation based on “how long will this take to build” starts to lose meaning when the build phase is measured in hours. The harder questions become: what should we build, and how will we know it works?
What stays the same
Requirements gathering is still a human job.
The AI doesn’t know what to build at a systems level. It doesn’t understand your non-functional requirements, your compliance constraints, your integration landscape. You still need someone who can look at a problem, understand the context around it, and define what “done” looks like.
I think this part actually becomes more important. When building is cheap, you can afford to build the wrong thing faster than ever. Clear requirements are the guardrail.
Testing and integration remain primarily human tasks.
LLMs tend to cheat when it comes to creating tests. They write tests that confirm the code works as written rather than tests that challenge whether it should work that way.
There’s a difference between “does this function return the expected output” and “does this system behave correctly when a user does something unexpected.” The first is easy to automate. The second requires someone who understands what can go wrong.
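A toy example makes the distinction visible. The function below has a deliberate edge-case flaw; the first test is the kind an LLM tends to write, and it proves little, while the behavioural checks are the ones a human adds because they know what can go wrong.

```python
# A toy function with a real edge-case flaw, to show the difference
# between confirming code and challenging it.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

# Confirmation test: mirrors the implementation. It passes, but it only
# checks that the code does what it was written to do.
assert apply_discount(100, 10) == 90.0

# Behavioural tests: ask what *should* happen on unexpected input.
# The second check documents the flaw: a 150% "discount" produces a
# negative price, something no confirmation test would ever surface.
assert apply_discount(100, 0) == 100.0
assert apply_discount(100, 150) < 0  # flaw: negative prices are possible
```

The confirmation test and the behavioural tests exercise the same function, but only the latter tell you whether the system is safe to ship.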
The pricing problem
Here’s where it gets uncomfortable for delivery organisations.
If the cost of writing code has dropped to near zero, and feedback loops have shrunk from weeks to minutes, you can no longer apply the normal timeframes to delivery. Clients will start asking why a proof of concept takes six weeks when the technology exists to produce one in days.
The honest answer is that much of what we charge for was never really about writing code. It was about understanding the problem, designing the right solution, integrating with existing systems, and making sure everything works under real conditions. Those things still take time.
But the optics have changed. When your client knows that AI can generate working code in seconds, a six-week timeline needs a clear justification for where that time actually goes. The teams that can articulate that clearly will be fine. The teams that can’t will find themselves in difficult conversations.
Where this leaves us
The delivery model is shifting from “how long to build” to “how fast can we learn.” The competitive advantage moves from execution speed to decision quality.
The tools have changed. The question is whether the process changes with them.
In a world where everyone has access to the same AI, the advantage doesn’t go to the person who uses it the most. It goes to the person who uses it to build the most useful things.
