Debugging your extension

When working with extensions in Watson Assistant, the standard UI alone can make it cumbersome to do a deep-dive analysis of why your extension is not working as expected.

You can use the browser's inspector to look at what is sent and received. Go to the Network tab, select "Response", and filter by "callout". Once you get to the line where the callout is mentioned, remove the filter and you can see all the parts of the request and response.

For the video demo below I created a sample extension that pulls jokes from “I Can Haz Dad Joke” via their API. The sample extension is attached.

Creating a Quantum Computer Chatbot.

I normally do these small, quick projects to practice the technologies I work with, and to keep me a bit sane.

For this fun little project I decided to create a chatbot that can translate a simple conversation into a format that a quantum computer can understand.

The plan is to build a Grover Algorithm circuit that will determine the best combination of people who like/dislike each other.

The architecture is as follows:

Breaking down each component.

  • iPad App (Swift): Why? Because JavaScript annoys me. 🙂 Actually, creating apps is very easy and Swift is a lovely language. If you haven’t coded in it and want to, I recommend the App Brewery training.
  • Orchestration Layer (Python/Flask): My focus was on speed, and Python has all the modules to easily interact with everything else. Perfect for a backend demo.
  • Watson Assistant: This handles the human interaction. It also pulls out the logical components and actors mentioned in the conversation.
  • Equation Generator: When the user asks to solve the problem, this translates the Watson Assistant results into an equation that Qiskit can run.
  • Quantum Engine: This is just a helper class I created to build and run the quantum circuit, and then hand the results off to the reporting NLP. Of course, what comes back is all 1’s and 0’s.
  • Reporting NLP: This takes the result from the quantum computer and converts it into a meaningful report for the human. This is then handed back to the iPad App to render.

All this was built and running in a day. It’s not because I’m awesome 😉 but because the technology has moved forward so much that much of the heavy lifting is handled for you.

I’m not going to release the code (if you want some code, why not try pong, which I wrote over the weekend?). I will go over some of the annoyances that might help others. But first, a demo.

This is a live demo. Nothing is simulated.

Watson Assistant

This was the easiest and most trivial to set up. Just three intents and one entity. The intents detect whether two people should be considered friendly or unfriendly. The names of the two people are picked up by the entities. The last intent just triggers the solve process.

Equation Generator

This is a lot less exciting than it sounds. When sending a formula to Qiskit, it needs to be in a format like so:

((A ^ B) & (C & D) & ~(D & A))

In normal human speech, that is something like “Bob hates Jane, Mike likes Anna, Mike and Bob don’t get on”.

Each single letter has to map to one of the people mentioned, so those have to be tracked, along with the relationships, in order to build the expression.
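As an illustration, here is a minimal sketch of that bookkeeping in Python. It is not the original code: the function names and the exact clause shapes (AND for friendly, XOR for unfriendly) are assumptions for the example.

import string

people = {}    # person name -> single letter (A, B, C, ...)
clauses = []   # boolean clauses to join into the final expression

def letter_for(name):
    # Assign the next free letter to any person we have not seen yet.
    if name not in people:
        people[name] = string.ascii_uppercase[len(people)]
    return people[name]

def add_relationship(person_a, person_b, friendly):
    a, b = letter_for(person_a), letter_for(person_b)
    # Friendly pairs should be selected together; unfriendly pairs should not be.
    clauses.append(f'({a} & {b})' if friendly else f'({a} ^ {b})')

add_relationship('Bob', 'Jane', friendly=False)
add_relationship('Mike', 'Anna', friendly=True)
expression = ' & '.join(clauses)   # e.g. '(A ^ B) & (C & D)'
print(expression)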

Quantum Computing

So Qiskit literally holds your hand for most of this. It’s a fun API. If you want to start learning Quantum Computing I strongly recommend “Learn Quantum Computing with Python and IBM Quantum Experience“. It approaches the subject from a developer’s perspective, making it easier to start working through the math later.

To show how simple it is, Qiskit has a helper class called an Oracle. This is literally all the code to build and run the circuit.

# Imports for the Qiskit Aqua API used here (deprecated in later Qiskit releases)
from qiskit import BasicAer
from qiskit.aqua import QuantumInstance
from qiskit.aqua.algorithms import Grover
from qiskit.aqua.components.oracles import LogicalExpressionOracle

# example expression
expression = '((A ^ B) & (C & D) & ~(D & C))'

oracle = LogicalExpressionOracle(expression)
quantum_circuit = oracle.construct_circuit()

# run against the local simulator backend
quantum_instance = QuantumInstance(BasicAer.get_backend('qasm_simulator'), shots=2048)

grover = Grover(oracle)
result = grover.run(quantum_instance)

What you get back is mostly 1’s and 0’s. You can also generate graphs from the helper class, but they tend to be more for the Quantum Engineer.

Reporting

I used the report generated by Qiskit. But as the results are all 0/1 and backwards, I translated them back to A, B, C, D… and then added a legend to the report. That was all straightforward.
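As a rough sketch of what that translation looks like (the measured bit string and the name-to-letter mapping below are made-up examples, not the original code):

import string

people = {'Bob': 'A', 'Jane': 'B', 'Mike': 'C', 'Anna': 'D'}   # example legend only
top = '1011'          # example measurement; Qiskit orders qubits right-to-left

bits = top[::-1]      # reverse so bit 0 lines up with letter A
legend = {letter: name for name, letter in people.items()}

for i, bit in enumerate(bits):
    letter = string.ascii_uppercase[i]
    state = 'selected' if bit == '1' else 'not selected'
    print(f"{legend.get(letter, letter)} ({letter}): {state}")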

The tricky bit came in sending the image back to the iPad app. To do this I converted the image to base64 like so (using OpenCV):

import base64
import cv2

def imageToBase64(img):
    b = base64.b64encode(cv2.imencode('.png', img)[1]).decode()  # PNG-encode, then base64
    return b

On the Swift side of things you can convert the base64 string back to an image like so.

func base64ToImage(_ base64Text: String) -> UIImage? {
    // Decode the base64 string back into raw image data.
    guard let imageData = Data(base64Encoded: base64Text) else { return nil }
    return UIImage(data: imageData)
}

Getting it to render the image in a UITableView was messy. I ended up creating a custom UITableViewCell. This also allowed me to make it feel more chat-botty.

When I get around to cleaning up the code I’ll release it and link it here.

In closing…

While this was a fun distraction, it’s unlikely to be anything beyond a simple demo. There already exists complex decision optimization that can handle human interaction quite well.

But the field of quantum computing is changing rapidly. So it’s good to get on it early. 🙂

Rendering Intents in a 3D network graph

It’s been a while…

I know some people were asking how to build a network graph of intents/questions. Personally I’ve never found it all that useful, but I am bored, so I have created some sample code to do this.

The code converts a CSV intents file to a pandas dataframe, then converts that to networkx graph format. Of course, large graphs can be very messy, like so:

That’s just 10 intents!
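The conversion itself is only a few lines. A minimal sketch, assuming a two-column CSV (question, intent) with no header row; the file name is a placeholder:

import pandas as pd
import networkx as nx

# Load the intents CSV into a dataframe (question, intent).
df = pd.read_csv('intents.csv', names=['question', 'intent'])

# Build the graph: each training question is connected to its intent node.
graph = nx.Graph()
for _, row in df.iterrows():
    graph.add_edge(row['intent'], row['question'])

# A 3D layout of the same graph can then be handed to K3D for interactive viewing.
layout = nx.spring_layout(graph, dim=3)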

So I then converted this to K3D format so that you can view the same network in 3D, like so:

Hopefully someone finds it useful. 🙂

Multi-Lingual chat bot with cloud functions.

Bonus post for being away for so long! 🙂 Let’s talk about how to do a multi-lingual Chatbot (MLC).

Each skill is trained on its own individual language. You can mix languages in a single skill; however, depending on the language selected, the other language is treated as either keywords or default-language words. This can be handy if only certain words are commonly used across languages.

For languages like, say, Korean or Arabic, this gives a limited ability compared to, say, English. For something like English mixed with Spanish, it simply does not work.

There are a number of options to work around this.

Landing point solution

This is where the entry point into your chatbot defines the language to return. It is by far the easiest solution. For example, you might have an English + Spanish website. If the user enters through the Spanish site, the Spanish skill is selected. Likewise with English.

Slightly up from this is having the user select the language when the bot starts. This helps where an anonymous user is forced onto a one-language website and can’t easily find how to switch.

The downside of this solution is where end users mix languages, which is somewhat common for certain languages, for example Arabic. They may type in the other language only to get a confused response from the bot.

Preparation work

To show the demo, I first need to create two skills, one in English and one in Spanish. I select the same intents from the catalog to save time.

I also need to create dialog nodes… but that is so slow to do by hand! 🙁 No problem. I created a small Python script to read the intents and write my dialog nodes for me, like so:

Here is the sample script for this demo. Totally unsupported, and unlikely to work with a large number of intents, but it should get you started if you want to make your own.
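If you just want the gist, a rough sketch of the idea looks like this. It assumes a two-column CSV (question, intent) and the Watson Assistant V1 workspace JSON format; the node IDs and answer text are placeholders, not the original script.

import csv
import json

with open('intents.csv') as f:
    intents = sorted({row[1] for row in csv.reader(f) if len(row) > 1})

dialog_nodes = []
previous = None
for i, intent in enumerate(intents):
    node = {
        'dialog_node': f'node_{i}',
        'conditions': f'#{intent}',
        'output': {'text': {'values': [f'This is the answer for {intent}.']}}
    }
    if previous:
        node['previous_sibling'] = previous   # keep the nodes in order
    dialog_nodes.append(node)
    previous = node['dialog_node']

print(json.dumps({'dialog_nodes': dialog_nodes}, indent=2))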

Cloud functions to the rescue!

With the two workspaces created, one for each language, we can now create a cloud function to handle the switching. This post won’t go into details on creating a cloud function; I recommend the built-in tutorials.

First, in the welcome node we add the following fields.

  • $host: "us-south.functions.cloud.ibm.com". Set to the region where you created your cloud function.
  • $action: "workspaceLanguageSwitch". The name of the action we will create.
  • $language: "es". The language of the other skill.
  • $credentials: {"user":"USERNAME","password":"PASSWORD"}. The cloud function username and password.
  • $namespace: "ORG_SPACE/actions". The name of your ORG and SPACE.
  • $workspace_id: "…". The workspace ID of the other skill.

Next we create a node directly after the welcome node, with a condition of “!$language_call” (more on that later). We also add action code, as follows.

The action code allows us to call the cloud function that we will create.

The child nodes of this node will either skip if no message comes back, or display the results of the cloud function.

On to the cloud function. We give it the name “workspaceLanguageSwitch”.

This cloud function does the following (a rough sketch follows the list).

  • Checks that a question was sent in. If not, it sends back an empty (“”) message.
  • Checks that the language of the question is set to what was requested. For example: In the English skill we check for Spanish (es).
  • If the language is matched, then it sends the question as-is to the other workspace specified. It also sets “$language_call” to true to prevent a loop.
  • Returns the result with confidence and intent.
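For illustration, here is a minimal sketch of what such an action might look like in Python. It is not the original code: the parameter names, the use of the langdetect package for language detection, and the modern ibm-watson SDK with an IAM API key are all assumptions.

from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from langdetect import detect

def main(params):
    question = params.get('question', '')
    if not question:
        return {'message': ''}                      # no question sent in

    # Only switch if the question is in the requested language (e.g. "es").
    if detect(question) != params.get('language'):
        return {'message': ''}

    assistant = AssistantV1(version='2021-06-14',
                            authenticator=IAMAuthenticator(params['apikey']))
    assistant.set_service_url(params['url'])

    # Send the question as-is to the other workspace; language_call prevents a loop.
    response = assistant.message(
        workspace_id=params['workspace_id'],
        input={'text': question},
        context={'language_call': True}
    ).get_result()

    intents = response.get('intents', [])
    return {
        'message': ' '.join(response['output']['text']),
        'intent': intents[0]['intent'] if intents else None,
        'confidence': intents[0]['confidence'] if intents else 0.0
    }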

Once this is created, we can test in the “Try it out” screen for both Spanish and English.

Here is all the sample code if you want to try to recreate it yourself.

It’s not all a bed of roses.

This is just a demo. There are a number of issues you need to be aware of if you take this route.

  • The demo is a one-shot call, so it will only work with Q&A responses. If you plan to use slots or process flows, you need to add logic to store and maintain the context.
  • Your “Try it out” is free to use. Now that you have a cloud function, it will cost money to run tests (albeit very cheap).
  • There is no error handling in the cloud function, so all of that would need to be added.
  • Cloud functions must complete execution within 5 seconds, so if your other skill calls out to an integration, it can potentially break the call.

What did you say?

A question recently asked was “How can I get Watson Conversation to repeat what I last asked it?” There are a couple of approaches to solve this, and I thought I would blog about them. First, here is an example of what we are trying to achieve.

One thing to understand going forward: everything you build should be data driven. So while there are valid use cases where this is needed, it doesn’t mean it is needed for every solution you build, unless evidence exists otherwise.

Approach 1. Context Variable.

In this example we create a context variable at every node where we want the system to respond, like so: 

This works, but it prevents you from easily creating variations of the response. On the plus side, you can give normal responses, and when the user asks the bot to repeat itself, it can give a fixed custom response.

Approach 2. Context variable everything!

Similar to the last approach, except rather than creating the context variable in the context area, you build it on the fly in the response, something like so:

This allows you to have custom responses. A disadvantage (albeit minor) is that you increase the chance of a mistake in your code. Each response also adds 4 bytes to your overall skill/workspace size. This means nothing for small workspaces, but at enterprise level you need to be careful.

I’ve attached a sample of the above.

Approach 3. Application layer.

With this method your application layer keeps a previous snapshot of the answer. Then when a repeat intent is detected, you just return the saved answer. 
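A minimal sketch of this approach, assuming the ibm-watson SDK; the intent name “repeat” and the helper function are placeholders, not the original code.

from ibm_watson import AssistantV1

last_answer = None

def ask(assistant: AssistantV1, workspace_id: str, text: str, context: dict = None) -> str:
    global last_answer
    response = assistant.message(workspace_id=workspace_id,
                                 input={'text': text},
                                 context=context).get_result()
    intents = response.get('intents', [])
    # If the user asked the bot to repeat itself, replay the saved answer.
    if intents and intents[0]['intent'] == 'repeat' and last_answer:
        return last_answer
    last_answer = ' '.join(response['output']['text'])
    return last_answer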

I’ve seen some crazy stuff, like resending the question or modifying node maps, but really this is the simplest option.

Optimizing intents with no K-Fold

So as I blogged about earlier, to help find conflicting training questions in your intents you would normally use K-Fold. There are issues with using this, though.

  • It removes some of your training data, which can weaken intents.
  • You have to balance the number of folds for accuracy against speed.
  • It requires creating multiple workspaces.
  • Once you have the results, you still have to interpret them.

Looking back at an old blog post on compound questions, I realized that it already shows where a question can be confused with different intents.

So the next step was to work out how to apply this to intent testing. Removing a question and retraining is not an option: it takes too long, and offers nothing over the existing K-Fold testing.

Sending the question back as-is will always return a 100% match. Every other intent gets a 0.0 score, so you can’t see confusion. But what if you changed the question?

First up, I took the workspace from the compound questions post. Despite being a tiny workspace, it works quite well, so I had to manufacture some confusion. I did this by copying a question from one intent and pasting it into another (from #ALLOW_VET to #DOG_HEALTH).

  • Can my vet examine the puppies?

Next up, for the code we define a “safe word”. This is prepended to any question when talking to Watson Assistant. In this case I selected “SIO”. What was interesting when testing this is that even if the safe word doesn’t make sense, it can still impact the results at enterprise level (i.e. thousands of questions).

We end up with this response:

  • SIO Can my vet examine the puppies?
  • ALLOW_VET  = 0.952616596221924
    DOG_HEALTH = 0.9269126892089845

Great! We can clearly see that these intents are confused with each other. So using the K-Means code from before we can document actual confusion between intents.

I’ve created some Sample Code you can play with to recreate this.
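The core of the idea looks something like this sketch (assuming the ibm-watson SDK; the API key, service URL, and workspace ID are placeholders, and this is not the attached sample):

from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

SAFE_WORD = 'SIO'

assistant = AssistantV1(version='2021-06-14', authenticator=IAMAuthenticator('APIKEY'))
assistant.set_service_url('URL')

def test_question(question, workspace_id):
    # Prepend the safe word so the question no longer matches its training text verbatim.
    response = assistant.message(workspace_id=workspace_id,
                                 input={'text': f'{SAFE_WORD} {question}'},
                                 alternate_intents=True).get_result()
    # Return each candidate intent with its confidence, highest first.
    return [(i['intent'], i['confidence']) for i in response['intents']]

print(test_question('Can my vet examine the puppies?', 'WORKSPACE_ID'))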

Just in case you are not aware, there is also a Watson Assistant premium feature which will do something similar. It’s called “conflict resolution” (CR). Here is a good video explaining the feature.

Compared to that and K-Fold…

PRO

  • This sample code is faster and cleaner than K-Fold.
  • Narrows down to an individual question.
  • (with customizations) Can show multiple intent confusion.
  • Unlike K-Fold, later tests only require you to test new & updated intents.

CON

  • Considerably slower than CR. When using CR, it only takes a few seconds to test everything. The sample can take up to a second per question to test.
  • Costs an API call for each question test.
  • It can’t test how the model behaves when the question is not in the training data (which is what K-Fold does).
  • CR is slightly more accurate in detection (based on my tests).
  • CR allows you to swap/edit questions within the same UI. The example I posted requires manual work.


To see the world in a grain of sand…

Just a short blog update to inform everyone that I have moved over to a new role in IBM. I now work in IBM Dubai as a Technical Solutions Manager, assisting with the somewhat recently announced AI Lab.

For me in a sense it means more (relative) free time, as a lot of my time was devoted to travelling previously. But I am just getting ramped up, so apologies again if I still maintain my one post a month. 🙂

On the plus side, I will be playing with a much wider spectrum of Watson + AI related technologies, and will discuss them here when I get the chance.

In the meantime here are some blogs I recommend.

I would also recommend to check out the Top 100 people to watch list.


Treat me like a human.

So one of the main pitfalls in creating a conversational system is assuming that you have to answer everything, no matter how poor the question is. In a real-life conversation we only go so far.

If you attempt to answer every question, the user stops conversing and treats the system more like a search engine. So it’s good to force the user to at least give enough context.

One Word

Watson intents generally don’t work great with a single word. To that end, create a node with the following condition.

input.text.matches('^\S+$')

This will capture the one-word responses. You can then say something like: “Can you explain in more detail what it is you want?”

Of course you can have single-word domain terms. They don’t give you enough context to answer the user, but enough to help them. For example, let’s say you are making a chat bot for a vet’s office. You might set up a node like this:


You can then list 2-3 questions the user can click and respond to. For example:

1. Why does my fish call me Bob?
2. How do fish sleep?

If you do need to create a list, try to keep it to four items or under.

Two to Three Words

Two to three words can be enough for Watson to understand. But it’s possible that the person is not asking a question; they could be using it like a search engine, or making a statement. You may want to capture this too.

To that end you can use the following condition.

input.text.matches('^\S+ \S+$|^\S+ \S+ \S+$')
AND input.text.matches('^((?!\?).)*$')

This will only capture 2-3 words that do not contain a question mark.

Here is the sample conversation script.

Handling Process Flows

While stepping a user through a process flow, don’t assume that the user will ask random questions. Even if they do, you don’t have to answer them. In real life, we wouldn’t try to answer everything if we are in the middle of something.

We may answer something in context, but we are more likely to get impatient, or ask to stop doing the flow and go back to Q&A.

So when creating your flow, try to keep this in mind. If the user asks something outside of what you expect, ask again, but make the possible answers clickable (as long as there is a limited set of answers). For example:

Watson: Do you want a cat or a dog?
User: How do fishes sleep?
Watson: I don’t understand what you mean. Did you want a cat or a dog?

If the user persists, you can create a break-out function. As you do a first pass of user testing, you can see where you need to expand. Don’t start off coding as if you expect them to break out everywhere.


Do you even need a chat bot?

With everyone rushing out to create bots, I am reminded of the Jurassic Park quote: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should”. 

Despite what people will tell you, conversational interfaces, or chat bots for that matter, are not needed for everything.

For example, where a simple web form will do, it will outperform a chat bot. But if most of that same form could be filled in from one question, then a chat bot may be a better solution (or maybe you just need a better form).

Or simply changing your internal processes may negate the need for a chat bot. For example, imagine that after analysing your support calls to train the bot, you find that over 90% of calls are due to one printer. Do you create the bot, or just replace the printer?

Or if your customers know your domain language, then a search engine or Retrieve and Rank may be a better solution.

IBM normally does all this checking through what we call a Cognitive Value Assessment (CVA). A well-done CVA reduces headaches on projects. Even if you don’t go with IBM, you should realistically examine your business processes to determine if you even need a chat bot, rather than just jumping on the bandwagon.

Why Conversation?

Most of the chat bots out there are just messaging frameworks that allow you, as a developer, to interact with existing messaging systems. How you interpret, talk, and react is all handled through code.

Out of the box, where Watson Conversation excels is its ability to take a knowledge domain (i.e. the customer’s) that doesn’t directly match your own domain knowledge. With a handful of questions you can have your conversational bot answering questions it has never seen before.

Conversation also makes it relatively easy to write out your conversational flow (also known as chat-flow and process flow, depending on how long you have worked with Watson).

Conversation is meant to piggyback on existing messaging frameworks to build intelligence into them with ease.

The danger of making things easy is that people skip the theory and go straight to development. The older people in the audience will remember when Visual Basic (or Lotus Notes, for that matter) came out. It became easy to create applications, but most were travesties of UI, maintainability and functionality.

My focus going forward is to cover more of the theory end to address this, as I am already competing with a number of blogs and videos about Conversation.