This is the ninth in a series of posts about how I ended up where I am today.
Coming back from Dubai in the middle of a pandemic and stepping into a worldwide role as a Senior Solutions AI Architect felt like starting again in some ways. The energy of the Lab, the face-to-face work with government teams, the physical presence in a place that was trying to build something ambitious. All of that was gone. Now it was video calls and global time zones.
The focus of the role was building first-of-a-kind solutions for customers, and later enterprise-scale deployments using the watsonx portfolio. But the thing that defined those years more than any single project was the speed at which everything underneath kept changing.
Classic ML to Generative AI to Agentic
When I started in the worldwide role, the work was still grounded in what you’d call classic machine learning. Models trained for specific tasks, carefully tuned, deployed with guardrails that were well understood. The patterns were established. You knew what worked and what didn’t.
Then generative AI arrived and rewrote the playbook. Suddenly the models weren’t just classifying or predicting. They were creating. The conversations with customers shifted. The architecture patterns shifted. The expectations shifted. Things that had been theoretical became possible almost overnight, and the challenge moved from “can we do this?” to “should we do this, and if so, how do we do it responsibly at scale?”
And then the agentic wave started building. Models that don’t just generate but act. Systems that plan, use tools, make decisions, hand off to other systems. The architecture problems got more interesting and more consequential at the same time.
Through all of this I had to keep up. Not just with the technology itself, but with how it changed what customers needed, what solutions looked like, and what “good” meant in a world where the goalposts moved every few months.
What Took a Team Weeks Became Days for Two
The acceleration wasn’t abstract. You could feel it in the work. Something that would have taken a team weeks to build and deploy a couple of years ago could now be done in days by one or two people. The tooling got better, the models got more capable, the patterns got more reusable. Every cycle compressed the one before it.
That compression changed what a Solutions Architect actually does. The job stopped being about knowing all the answers and became about knowing which questions to ask, how to evaluate what’s possible now versus what will be possible in six months, and how to design systems that won’t collapse when the technology underneath them takes another leap.
Looking at the Thread
Across all of this I’ve worked with hundreds of people from different countries, with wildly varying skills, but all with a passion for what they do. Some of them remind me of where I was at the start of my journey. I try to help those people become better than me. Others do things I wish I could emulate even half as well. They push me to be more than I am. That exchange, that lifting each other up, has been the constant through every role and every country.
The continual learning is something I love. The technology never stands still and neither can I. Even if AI eventually does everything for us, I hope I never stop wanting to understand how it works and what it means.
Looking back across everything, from the pixel map of Ireland to the German laser printer to automating Lotus localisation to routing support tickets with NLP to building chatbots in Dubai to designing worldwide AI solutions, there’s a thread. I’ve always been most interested in the space between what technology can do and what people actually need it to do. The gap. The wiring. The part where you take something powerful and make it useful.
That’s what drew me to agentic systems. The technology is more powerful than anything I’ve worked with. But the problems are the same ones I’ve been solving my whole career. How do you build something that works for the person on the other end? How do you design for the things that will go wrong? How do you make sure the human stays in the picture?
The tools have changed. The question hasn’t.
Next: the pivot, and what happens when the conversation starts doing things on its own.