Six months later…

I had planned to post updates more frequently, but life and, more importantly, work got in the way. With the new role, a lot of my focus is on other aspects of AI-related technologies.

Of course, while things are different for me, the chat bot world continues on. Watson Conversation has become Watson Assistant, with a huge number of updates and changes.

My blog continues to be a resource for numerous people starting out with Watson Assistant. But Watson Development have made changes that render most of it redundant.

So until my next update (soon, I promise), let me give a brief update on each blog entry and give Watson Development the credit they deserve.

Note: I mention the Beta a lot. In your workspace UI, you are now able to request access to the Watson Assistant Beta and try out all the new features that are coming. I also didn’t reference every blog entry. If it’s not in the list, it’s probably still the same, or not important enough to mention.

Testing your Intents

K-fold and blind testing are still very much a part of ensuring you have trained the system well. But those who are currently playing with the Beta will know there are features coming that reduce this need, or even make it redundant (the jury is still out on that; I’m favoring the latter).

There are a number of Watson Assistant K-fold and testing apps up on GitHub, if you don’t want to try to decipher my example.
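If you do want to roll your own, here is a minimal sketch of the idea, assuming you have exported your training data to an intents.csv file of utterance/intent pairs. The classify() function is only a placeholder baseline (not a real Watson Assistant call); in a real run you would call the Watson Assistant message API against a temporary workspace trained on each fold’s training split.

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Training data exported from the workspace: one "utterance,intent" pair per row.
data = pd.read_csv("intents.csv", names=["utterance", "intent"])
most_common_intent = data["intent"].mode()[0]

def classify(utterance: str) -> str:
    # Placeholder baseline: always predicts the most frequent intent.
    # For a real k-fold test, replace this with a call to the Watson
    # Assistant message API against a workspace trained only on the
    # current fold's training split.
    return most_common_intent

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = []
for train_idx, test_idx in skf.split(data["utterance"], data["intent"]):
    # data.iloc[train_idx] is what the temporary workspace would be trained on.
    test = data.iloc[test_idx]
    correct = sum(classify(u) == i for u, i in zip(test["utterance"], test["intent"]))
    scores.append(correct / len(test))

print(f"Mean k-fold accuracy: {sum(scores) / len(scores):.2%}")
```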

Pushing my Buttons

While I can see a need for buttons in some cases, I still believe they destroy the conversational aspect of a chat bot. Overuse turns your chat bot into an app, and damages training. I am also against putting them in a conversational text response.

Chihuahua or Muffin, revisited.

Huge updates have been made to Watson Visual Recognition (VisRec). The main one being that it is now integrated into Watson Studio. If you haven’t tried Watson Studio yet, go now! 🙂 It is a Data Science / Machine Learning / Deep Learning development platform.

Watson VisRec continues to amaze people with how quickly and accurately it can classify custom content. We also have Watson Media, which can annotate live video. If RAW POWER is needed, there is PowerAI Vision, which allows for real-time classification on video. We are talking “Person of Interest” level classification. 🙂

I have no confidence in Entities.

I have nothing but love for Entities now! Gone are the “keyword” type entities. They are now built using an ML NLP model. Not only that, the new entity model also helps to dramatically improve the training of Intents.

There are now Pattern Entities as well. These allow for complex regular expressions.
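As a purely illustrative example (the entity name and order-number format below are made up, not from any of my earlier posts), a pattern entity is essentially a regular expression that you register in the Watson Assistant UI, along the lines of:

```python
import re

# Hypothetical pattern you might register as an @order_number pattern entity,
# matching values like "AB-12345".
ORDER_NUMBER = re.compile(r"\b[A-Z]{2}-\d{5}\b")

print(ORDER_NUMBER.findall("My order AB-12345 never arrived, unlike CD-67890."))
# ['AB-12345', 'CD-67890']
```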

The design pattern of using entities to lower the confidence of an intent is still valid.

Manufacturing Intent

Creating manufactured questions is still very much something you want to avoid.

That said, you may have seen Project Debater. While this is personal opinion (and no guarantee it will ever be a feature), I can see this technology augmenting the intent and answer creation of Watson Assistant.

Anaphora? I hardly knew her.

This is still a good and well-used pattern. I’d love it if Watson Development wrapped it into Watson Assistant, but for now it makes it easy to add intelligence to your conversation.

Removing the confusion in Intents.

While this can still help in training, the current Beta has features which will likely make it redundant.

Watson Conversation just got turned up to 11.

This should be “Watson Assistant” 😉

Slots have continued to improve. You can now create more complex slot responses and handlers, allowing you to jump to nodes within the slot itself rather than creating a conditional tree.

Digressions are also a new feature, allowing you to dictate when the user can go off script, and to pull them back.

I love Pandas

While I still love using Pandas, Watson Assistant’s logging analytics have improved dramatically. Again, there are features coming in the Beta which make this even better for training your enterprise-level chat bots.
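For anyone still going the Pandas route, here is a minimal sketch of the kind of analysis I mean. It assumes you have already exported your log events to a logs.json file, and that each event carries the usual message response with a ranked intents list; adjust the field names to whatever your export actually contains.

```python
import json
import pandas as pd

# Load exported Watson Assistant log events (assumed structure: each event has
# a "request" with the user input and a "response" with a ranked "intents" list).
with open("logs.json") as f:
    events = json.load(f)

rows = []
for event in events:
    intents = event.get("response", {}).get("intents", [])
    rows.append({
        "text": event.get("request", {}).get("input", {}).get("text", ""),
        "intent": intents[0]["intent"] if intents else None,
        "confidence": intents[0]["confidence"] if intents else None,
    })

df = pd.DataFrame(rows)

# Most frequent intents, and utterances the bot was unsure about.
print(df["intent"].value_counts().head(10))
print(df[df["confidence"] < 0.4][["text", "intent", "confidence"]])
```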

For those on IBM Cloud Premium, there is also a feature that recommends topics you have never seen before, and helps collate questions for intent training of new topics.

I have a Dream …

Since I wrote that blog post, Watson Speech to Text has moved to using Deep Learning to understand human speech. There is also an “Acoustic Models” feature, which allows you to train on accents in relation to your domain language.

Compound Questions

This is still a good pattern for compound questions. The current beta features may make this redundant in certain designs.

Improving your Intents with Entities.

With the huge improvements to Entities, I would now consider this an anti-pattern. So ignore it.

Conversing in three dimensions.

This is still a valid pattern. In fact I’ve seen it used in a number of very successful implementations. Again though, playing with the beta will show this may become redundant from a coding perspective.

Data Science Experience

It’s now “Watson Studio”! There are so many new features in this. The most important being support for Deep Learning.

Since I wrote that blog post, the following new features have been added (there may be more).

  • SPSS Modeler. Similar to the desktop version, but it allows you to use the raw power of IBM Cloud. You can still import/export your streams to and from the desktop version.
  • Data Refinery. There is a quote that 60% of all data science work is cleaning up data. This helps with exactly that.
  • Watson Machine Learning. You give it your data and tell it which field you want it to predict. It then runs through multiple machine learning models to find the one that works best for your data, something that would normally require an ML expert even for mundane tasks. It even creates the API and a test UI for you.
  • Experiments. Allows you to run numerous models and tweak them to find the best one for your data.
  • Neural Network Modeler. Allows you to build your TensorFlow / PyTorch / Keras / Caffe models in a simple GUI. It will then generate the code, which you can export to your applications. Here is a good article on it.

Much more than this as well. Try it out, it’s free. 🙂

Building a Conversation interface in minutes.

Watson Assistant now has the ability to create a number of conversation interfaces through the workbench (little to no coding in some instances). For example, Facebook and Slack.

Understanding how a conversation flows

This is redundant. Actually I can argue that most of what I wrote in 2016 is redundant now.

Watson Assistant now has Folders, which allow you to check a branch of nodes but then continue the flow of the tree, instead of falling back to root.

Nodes now have a “Skip Input” option, which means you don’t have to add a Jump to get into the branch (jumps being prone to breaking if you have to add more nodes).

Digressions allow you to jump around looking for the answer and return to where you left off.

… So there you have it. Hopefully everyone is up to date. 🙂 See ya soon.
