I’ve been building conversational systems for the better part of a decade. Watson Conversation, Watson Assistant, watsonx — I watched the naming conventions change almost as often as the underlying capabilities did. Through all of that, the core problem stayed the same: get a user from a question to an answer with as little friction as possible.
I was good at it. I understood intent classification inside and out. I could debug confidence scores in my sleep. I knew how to structure dialog trees that didn’t make users want to throw their laptop out a window. I’d built tooling, written about edge cases like compound questions and hex conversion tricks, and spent real time thinking about how to make these systems work for the people actually using them.
But somewhere in the last year or so, I started noticing that the problems I was most interested in weren’t really about conversation anymore.
The shift didn’t happen overnight. It started with the retrieval-augmented generation wave — suddenly the “knowledge” part of the system mattered as much as the conversational flow. Then tool use started getting serious. Models that could not just respond but act. Call an API. Query a database. Make a decision about what to do next based on context, not just what slot needed filling.
That’s when I realised I wasn’t thinking about chatbots anymore. I was thinking about agents.
The architecture problems are genuinely different. Orchestration, memory, planning, guardrails, human-in-the-loop design — these aren’t extensions of conversational AI. They’re a different discipline. One that borrows from it, sure, but the mental model is closer to distributed systems than dialog management.
I’ve been working in this space for a while now, quietly. Designing agentic architectures, thinking about how enterprises actually deploy these things without everything falling over. Solutions architecture for systems where the LLM isn’t the product — it’s a component in something larger. The interesting problems are in the wiring: how agents hand off to each other, how you maintain state across long-running workflows, how you build trust in systems that make autonomous decisions.
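To make the "wiring" point concrete: the handoff-plus-shared-state pattern can be sketched in a few lines of plain Python. This is a toy, not any real framework's API — the names here (`WorkflowState`, `run`, the researcher/writer agents) are illustrative, and a production version would add persistence, error handling, and the guardrails mentioned above:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowState:
    """State that outlives any single agent turn (illustrative shape)."""
    goal: str
    history: list = field(default_factory=list)
    done: bool = False

def run(agents, state, start):
    """Each agent mutates the shared state, then names its successor (or None)."""
    current = start
    while current is not None and not state.done:
        state, current = agents[current](state)
    return state

# Two toy agents: a researcher that hands off to a writer.
def researcher(state):
    state.history.append("gathered sources")
    return state, "writer"

def writer(state):
    state.history.append("drafted answer")
    state.done = True
    return state, None

final = run({"researcher": researcher, "writer": writer},
            WorkflowState(goal="summarise the topic"), "researcher")
```

The point of the sketch is where the complexity lives: not in any agent's logic, but in the loop that routes between them and the state object they all share. That's the distributed-systems flavour — and it's exactly the part dialog trees never had to deal with.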
This blog has always been called “Talk to me,” and I’m not changing that. But the conversation has changed. The things I’ll be writing about going forward reflect where I actually spend my time — agentic design patterns, orchestration strategies, the real-world messiness of putting autonomous systems into production.
The Watson years gave me a foundation I still lean on every day. Understanding user intent, designing for failure, thinking about the human on the other end. That doesn’t go away just because the systems got more capable. If anything, it matters more now.
So consider this the pivot point. Everything before this was conversational AI. Everything after is what happens when the conversation starts doing things on its own.