Removing the confusion in intents.

While the complexity of building Conversation has been reduced for non-developers, one area where people can still struggle is training intents.

When trying to determine how the system performs, it is important to use tried and true methods of validation. If you rely on someone just interacting with the system, you can end up with what is called “perceived accuracy”. A person may ask three or four questions, and three may fail. Their perception becomes that the system is broken.

Using cross validation gives you a better feel for how the system is working, as will a blind / test set. But knowing how the system performs and interpreting the results takes practice.
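As a rough illustration of the cross validation idea, here is a minimal k-fold sketch using scikit-learn. The training pairs and intent names are made up for this example, and the actual re-training and testing of a temporary workspace per fold is only described in the comments.

from sklearn.model_selection import KFold
import numpy as np

# Hypothetical (question, intent) training pairs exported from a workspace.
# In practice you would load your full training set rather than hard-code it.
training = np.array([
    ('How often should I worm my puppy?', 'DOG_HEALTH'),
    ('My dog seems lethargic, what should I do?', 'DOG_HEALTH'),
    ('Is the litter vaccinated?', 'LITTER_HEALTH'),
    ('Were the puppies checked by a vet?', 'LITTER_HEALTH'),
    ('Do you offer a health guarantee?', 'BREEDER_GUARANTEE'),
    ('Can I return the puppy if it gets sick?', 'BREEDER_GUARANTEE'),
])

kf = KFold(n_splits=3, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(training)):
    train_set, test_set = training[train_idx], training[test_idx]
    # Re-train a temporary workspace on train_set, then send each question in
    # test_set to it and record whether the returned intent matches.
    print('Fold {}: {} train / {} test questions'.format(fold, len(train_set), len(test_set)))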

Take this example test report. Each line shows a question that was asked, what the answer should have been, and what answer actually came back. The X denotes where an answer failed.

report_0707

Unless you are used to doing this analysis full time, it is very hard to see the bigger picture. For example, is DOG_HEALTH the issue, or is it LITTER_HEALTH + BREEDER_GUARANTEE?

You have to manually analyse each of these clusters and determine what changes need to be made.

Thankfully scikit-learn makes your life easier by letting you create a confusion matrix. With this you can see how each intent performs against the others. You end up with something like this:

confusion_matrix

So now you can quickly see which intents are getting confused with others, and focus on those to improve your accuracy. In the example above, DOG_HEALTH and BREEDER_INFORMATION offer the best areas to investigate.

I’ve created a Sample Notebook which demonstrates the above, so you can modify it to test your own conversation training.
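If you want to roll the basics yourself outside the notebook, a minimal sketch with scikit-learn and pandas might look like this. The expected and predicted lists are made-up stand-ins for the results of your own test run.

from sklearn.metrics import confusion_matrix
import pandas as pd

# Hypothetical test results: the intent each question should map to,
# and the intent Watson actually returned.
expected  = ['DOG_HEALTH', 'LITTER_HEALTH', 'BREEDER_GUARANTEE', 'DOG_HEALTH', 'LITTER_HEALTH']
predicted = ['DOG_HEALTH', 'DOG_HEALTH',    'BREEDER_GUARANTEE', 'DOG_HEALTH', 'BREEDER_GUARANTEE']

labels = sorted(set(expected) | set(predicted))
cm = confusion_matrix(expected, predicted, labels=labels)

# Label rows (expected) and columns (predicted) so the confusion is easy to read.
print(pd.DataFrame(cm, index=labels, columns=labels))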


To see the world in a grain of sand…

Just a short blog update to inform everyone that I have moved over to a new role in IBM. I now work in IBM Dubai as a Technical Solutions Manager, assisting with the somewhat recently announced AI Lab.

For me in a sense it means more (relative) free time, as a lot of my time was devoted to travelling previously. But I am just getting ramped up, so apologies again if I still maintain my one post a month. 🙂

On the plus side, I will be playing with a wider spectrum of Watson + AI related technologies, and will discuss them here when I get the chance.

In the meantime here are some blogs I recommend.

I would also recommend checking out the Top 100 people to watch list.


Watson Conversation just got turned up to 11.

I just wanted to start with why my blog doesn’t get updated that often.

  1. Life is hectic. With continual travel, and a number of other things going on, the chance to be able to sit down for peace and quiet is rare. That may change one way or another soon.
  2. I try to avoid stealing other people’s thunder on a range of issues. For example, I recommend keeping an eye on Simon Burns’ blog in relation to the new features.
  3. My role continues to evolve; at the moment it is one of “Expert Services”, where people like myself go around the world to help our customers run the show themselves. This probably warrants its own page, but it is a balance between what is added value to that service and what everyone should know. (I’m in the latter camp, but I’ve got to eat too).

The biggest reason for the lack of updates? Development keeps changing things! I’m not the only one who suffers from this. I read a fantastic design patterns document earlier this year by someone in GBS, which went mostly out of date a week or two after I got it.

So it has been the same situation with the latest update to conversation. It is a total game changer. To give you an example of how awesome it is, here is an image of a sample “I want to buy a dog” dialog flow.

pre-slots

Now compare that to the new Slots code. (btw, you may have noticed the UI looks cooler too).

slots

The same functionality took a fraction of the time to build, and handles even more complexity than the previous version. That single node looks something like this:

slots2

That’s all for the moment, better if you play with it yourself. I have some free time shortly, and I will be posting some outstanding stuff.


I love Pandas!

Not the bamboo eating kind (but they are cute too), Python Pandas!

But first… Conversation has a new feature!

Logging! 

You can now download your logs from your conversation workspace into a JSON format. So I thought I’d take this moment to introduce Pandas. Some people love the “Improve” UI, but personally I like being able to easily mold the data to what I need.

First, if you are new to Python, I strongly recommend getting a Python notebook like Jupyter set up, or using IBM Data Science Experience. It makes learning so much easier, and you build your applications like actual documentation.

I have a notebook created so you can play along.

Making a connection

As the feature has only just been released, the SDKs don’t yet have an API for it, so I will be using the requests library.

import json, requests
from requests.auth import HTTPBasicAuth

# ctx holds the Conversation service credentials (username/password).
url = 'https://gateway.watsonplatform.net/conversation/api/v1/workspaces/WORKSPACE_ID/logs?version=2017-04-21'
basic_auth = HTTPBasicAuth(ctx.get('username'), ctx.get('password'))
response = requests.get(url=url, auth=basic_auth)
j = json.loads(response.text)

So we now have the whole log sitting in j, but we want to make a dataframe. Before we do that, however, let’s talk about log analysis and the fields you need. There are three areas we want to analyse in logs.

Quantitative – These are fixed metrics, like number of users, response times, common intents, etc.

Qualitative – This is analysing how the end user is speaking, and how the system interpreted and responded. Some examples would be where the answer returned may give the wrong impression to the end user, or where users ask things outside the expected areas.

Debugging – This is really looking for coding issues with your conversation tree.

So on to the fields that cover these areas. These are all contained in j['response'].

Field | Usage | Description
input.text | Qualitative | What the user or the application typed in.
intents[] | Qualitative | The primary intent for the user’s question. You should capture the intent and confidence into columns. If the value is [] then the question was deemed irrelevant.
entities[] | Quantitative | The entities found in relation to the call. With this and intents, it’s important to understand that the application can override these values.
output.text[] | Qualitative | The response shown to the user (or application).
output.log_messages | Debugging | Handy for spotting coding issues within your conversation tree. SpEL errors show up here if they happen.
output.nodes_visited | Debugging, Qualitative | Can be used to see how a progression through the tree happened.
context.conversation_id | All | Use this to group a user’s conversation together. In some solutions, however, one-pass calls are sometimes made mid-conversation; if you do this, you need to factor that in.
context.system.branch_exited | Debugging | Tells you if the conversation left a branch and returned to root.
context.system.branch_exited_reason | Debugging | If branch_exited is true, this tells you why. “completed” means the branch found a matching node and finished. “fallback” means it could not find a matching node, so it jumped back to root to find a match.
context.??? | All | You may have context variables of your own to capture. You can either do these individually, or write code to strip out the Conversation system objects and grab what remains.
request_timestamp | Quantitative, Qualitative | When Conversation received the user’s request.
response_timestamp | Quantitative, Qualitative | When Conversation responded to the user. You can take the delta between the two to look for performance issues, but generally keep just one of the timestamp fields for analysis.
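Before building the dataframe, it can help to dump a single raw log entry and check which of these fields your workspace actually populates. A quick way to do that, using the j object loaded above, is:

# Pretty-print one raw log entry to see which fields are present.
print(json.dumps(j['logs'][0]['response'], indent=2))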


So we create a rows array, and fill it with dict objects of the columns we want to capture. For clarity of the blog post, the sample code below only captures a subset of the fields above; the notebook captures more.

import pandas as pd
rows = []

# for object in Json Logs array.
for o in j['logs']:
    row = {}
 
    # Let's shorthand the response object.
    r = o['response']
 
    row['conversation_id'] = r['context']['conversation_id']
 
    # We need to check the fields exist before we read them. 
    if 'text' in r['input']: row['Input'] = r['input']['text']
    if 'text' in r['output']: row['Output'] = ' '.join(r['output']['text'])
 
    # Again we need to check it is not an Irrelevant response. 
    if len(r['intents']) > 0:
        row['Confidence'] = r['intents'][0]['confidence']
        row['Intent'] = r['intents'][0]['intent']

    rows.append(row)

# Build the dataframe. 
df = pd.DataFrame(rows,columns=['conversation_id','Input','Output','Intent','Confidence'])
df = df.fillna('')

# Display the dataframe. 
df

When this is run, all going well you end up with something like this:

report1-1804

The notebook has a better report, and is also sorted so it is actually readable.

report2-1804

Once you have everything you need in the dataframe, you can manipulate it very quickly and easily. For example, let’s say you want to get a count of the intents found.

# Get the counts.
q_df = df.groupby('Intent').count()

# Remove all fields except conversation_id and intents.
# (The column names here match the fuller dataframe built in the notebook.)
q_df = q_df.drop(['request TS', 'response TS', 'User Input', 'Output', 'Confidence', 'Exit Reason', 'Logging'],axis=1)

# Rename the conversation_id field to "Count".
q_df.columns = ['Count']

# Sort and display. 
q_df = q_df.sort_values(['Count'], ascending=[False])
q_df

This creates this:

report3-1804

Jupyter notebooks also allow you to visualise the data, although I haven’t included any visualisations in the sample notebook.
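If you do want a quick chart, a minimal sketch using the intent counts computed above (assuming matplotlib is installed) could look like this:

import matplotlib.pyplot as plt

# Plot the intent counts from q_df as a simple bar chart.
q_df.plot(kind='bar', legend=False)
plt.ylabel('Count')
plt.tight_layout()
plt.show()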

I have a dream…

Following on from Speech to Text, let’s jump over to Text to Speech. As with Conversation, what can make or break the system is the tone and personality you build into it.

Developers tend to think about the coding, and not the user experience so much.

To give an example, let’s take a piece of a very famous speech from MLK. Small sample so it doesn’t take all day:

I still have a dream. It is a dream deeply rooted in the American dream.

I have a dream that one day this nation will rise up and live out the true meaning of its creed: “We hold these truths to be self-evident, that all men are created equal.”

Let’s listen to Watson as it directly translates.

It sounds like how I act when I am reading a script. 🙂

Now lets listen to MLK.

You can feel the emotion behind it. The pauses and emphasis adds more meaning to it. Thankfully Watson supports SSML, which allows you to mimic the speech.

For this example I only used two tags. The first was <prosody>, which allows Watson to speak at the same pace as MLK. The other tag was <break>, which allows me to make those dramatic pauses.

Using Audacity I was able to line the generated speech up against the MLK speech. Then, by selecting the pause areas, I could quickly see the pause lengths.

audicity

I finally ended up with this:

Audacity also allows you to overlay audio, to get a feel for how it would sound if there were crowds listening.

The final script ends up like this:

<prosody rate="x-slow">I still have a dream.</prosody>
<break time="1660ms"></break>
<prosody rate="slow">It is a dream deeply rooted in the American dream.</prosody>
<break time="500ms"></break>
<prosody rate="slow">I have a dream</prosody>
<break time="1490ms"></break>
<prosody rate="x-slow">that one day</prosody>
<break time="1480ms"></break>
<prosody rate="slow">this nation <prosody rate="x-slow">will </prosody>ryeyes up</prosody>
<break time="1798ms"></break>
<prosody rate="slow">and live out the true meaning of its creed:</prosody>
<break time="362ms"></break>
<prosody rate="slow">"We hold these truths to be self-evident,</prosody>
<break time="594ms"></break>
<prosody rate="slow">that all men are created equal."</prosody>
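If you want to generate the audio yourself, here is a hedged sketch using the Python SDK of the time. The voice name and file names are just examples, and exact parameter names may differ between SDK versions.

from watson_developer_cloud import TextToSpeechV1

# ctx holds the Text to Speech service credentials (username/password).
text_to_speech = TextToSpeechV1(username=ctx.get('username'), password=ctx.get('password'))

# ssml_script is the <prosody>/<break> markup above, wrapped in a <speak> element.
# 'i_have_a_dream.ssml' is a hypothetical file holding that markup.
ssml_script = '<speak>{}</speak>'.format(open('i_have_a_dream.ssml').read())

with open('i_have_a_dream.wav', 'wb') as audio_file:
    audio_file.write(text_to_speech.synthesize(ssml_script, accept='audio/wav', voice='en-US_MichaelVoice'))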

I have zipped up all the files for download, just in case you are having issues running the audio.

IHaveADream.zip

In closing, if you plan to build a conversational system that speaks to the end user, you also need skills in talking to people, not just in writing.

Speech to Text and Conversation

I thought I would take a moment to play with Speech to Text and a utility that was released a few months ago.

The Speech to Text Utils allows you to train S2T using your existing conversational system. To give a quick demo, I got my son to ask about buying a puppy.

I set up some quick Python code to print out results:

import json
from watson_developer_cloud import SpeechToTextV1

# ctx is Service credentials copied from S2T Service. 

s2t = SpeechToTextV1(
 username=ctx.get('username'),
 password=ctx.get('password')
)

def wav(filename, **kwargs):
  with open(filename,'rb') as wav:
    response = s2t.recognize(wav, content_type='audio/wav', **kwargs)

  # Return the top transcript, or a placeholder if nothing was recognised.
  if len(response['results']) > 0:
    return response['results'][0]['alternatives'][0]['transcript']
  else:
    return '???'

So testing the audio with the following code:

wav_file = 'p4u-example1.wav'
print('Broadband: {}'.format(wav(wav_file)))
print('NarrowBand: {}'.format(wav(wav_file,model='en-US_NarrowbandModel')))

Gets these results:

Broadband: can I get a puppy 
NarrowBand: can I get a puppy

Of course the recording is crystal clear, which is why it gets such a good result. So I added some ambient noise from SoundJay to the background, so that it now sounds like it is in a subway.

Running the code above again gets these results.

Broadband: Greg it appropriate 
Narrowband: can I get a phone

Ouch!

Utils to the rescue!

So the purpose of asking about a puppy is that I have a sample conversation system about buying a dog. Using that conversation file I did the following.

1: Installed Speech to Text Utils.

2: Before you begin you need to set up the connection to your S2T service (using service credentials).

watson-speech-to-text-utils set-credentials

It will walk you through the username and password.

3: Once that was set up, I then tell it to create a customisation.

watson-speech-to-text-utils corpus-from-workspace puppies4you.json

You need to map to a particular model. For testing, I attached it to en-US_NarrowbandModel and en-US_BroadbandModel.

4: Once it was run, I get the ID numbers for the customisations.

watson-speech-to-text-utils customization-list

Once I have the IDs, I try the audio again:

wav_file='p4u-example2.wav'
print('Broadband: {}'.format(wav(wav_file,customization_id='beeebd80-2420-11e7-8f1c-176db802f8de',timestamps=True)))
print('Narrowband: {}'.format(wav(wav_file,model='en-US_NarrowbandModel',customization_id='a9f80490-241b-11e7-8f1c-176db802f8de')))

This outputs:

Broadband: can I get a puppy 
Narrowband: can I get a phone

So the broadband model now works. For narrowband, it is likely that the audio quality is too poor to work with. There are also more specialised language models for children, built by others, that can cope with this.

One swallow does not make a summer.

So this is one example, of one phrase. For real testing, you should test the whole model. In a demonstration from development, this technique was able to increase a S2T model’s accuracy from around 50% to over 80%.


Watson V3 Certification

ibm-certified-application-developer-watson-v3-certification

So I got my Watson V3 Certification a week or so ago, and the badge just arrived yesterday.

I sat the mock exam without studying and passed. So I thought I’d try the real exam, and passed that too.

Overall, if you have been working in the Watson group for 3+ years, in a job role that requires medium to expert knowledge of all (non-Health) Watson products, then you are probably going to find the exam OK to pass.

For people who haven’t, it’s not going to be easy. I strongly recommend following the study guide on the test preparation certification page if you plan to get this.

My only quibble with the exam is that the technology changes a lot.

For example, all the design patterns for coding conversation before December last year are not that relevant any more, and will likely change again soon. (Which is part reason for lack of updates on the blog, the other being laziness 🙂 )

So you need to know the current active technologies even if they are going away. Plus there will probably be a V4 exam in 6 months or so time.

I’d also like to see more focused certifications for some parts of the Watson Developer Cloud. For example, being an expert at the Discovery Service doesn’t make you an expert at Conversation, and vice versa.

Watson in the black and white room.

Let’s talk about the recent changes to how Watson determines its confidence. It seems to be a hot topic at the moment, and probably not well understood.


Before: 

Imagine that you are Watson. You are in a room with no doors or windows, and you have learned everything about the world from Wikipedia. There are two objects in front of you: a cube and a pyramid.

Now if someone asks you a question, you can use Wikipedia to try and figure out what the answer is, but you can only point to one of the two objects in the room. There is no other answer.

So they may ask “Which one is an Orange?”. You may think that a cube is similar to the Discovery Cube in Orange county. You can also see that a food pyramid has an orange in it. Neither is a direct fit, but you only have two answers.

So you respond: “I am 51% sure that it is this pyramid”

After:

Now you are in the same room, but this time there is a window that shows you the outside world.

You are asked the same question. You still come to the same conclusion, but because you can see the outside world you know that the answer is not in the room.

This time you respond: “I am confident that neither of these objects are an Orange”

But what about the lower confidence?

The first thing you notice is that the confidence is not as high as before. This in itself is not a bad thing. What matters is the relationship of the answer to the others found. For example:

conv060217-2

You can see in this example that the first answer is at 72%, while the next one is at 70%. So either it is a compound question, or you need more training to differentiate between the two intents that are close together. In the previous version you could not see this.

The main point to take from this is that the confidence hasn’t actually changed. You are just finally seeing the real confidence.

How does this impact me?

First, Watson will always ignore an intent if the confidence is below 0.2. But with how the confidences were previously determined, it was rare that you would hit this condition.

Now this is possible.

Also, if you have written conditions to determine the real confidence boundary (detailed here), you will need to re-determine the correct boundaries.

Lastly, if no intent is matched, then you get an empty intents list.
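To make that concrete, here is a hedged sketch of enforcing your own boundary on the application side. It assumes response is the parsed JSON from a message call, and the 0.6 value is purely hypothetical; pick a boundary that suits your own workspace.

# Hypothetical confidence boundary; tune it for your own workspace.
CONFIDENCE_BOUNDARY = 0.6

intents = response.get('intents', [])
if not intents:
    print('No intent matched at all (irrelevant).')
elif intents[0]['confidence'] < CONFIDENCE_BOUNDARY:
    print('Low confidence: treat as irrelevant, or ask the user to rephrase.')
else:
    print('Matched intent: {}'.format(intents[0]['intent']))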

In closing

Although the new feature is considerably better, always test before you deploy!

As for the title reference: 

Compound Questions

One problem that is tricky to solve is when a user asks two questions at once. Previously, some solutions were to look for conjunctions (“and”) or question marks, and then try to guess whether it is really two questions.

But you could end up with a question like “Has my dog been around other dogs and other people?”. This is clearly one question.

With the new Conversation feature of “absolute confidences”, it is now possible to detect this. In earlier versions of Conversation, all intent confidences would add up to 1.0.

Now each confidence has its own value. Taking the earlier example, if we map the confidences to a chart, we get:

conv060217-1

Visually we can see that the first and second intent are not related. The next sentence “Has my dog been around other dogs and is it certified?” is two questions. When we chart this we see:

conv060217-2

Very easy to see that there are two questions. So how to do it in your code?

You can use a clustering technique called K-means. This will cluster your data into K sets. In this case we have “important intents” and “unimportant intents”. Two groups means K = 2.

For this demonstration I am going to use Python, but K-means exists in a number of languages. I have a sample of the full code, and example conversation workspace. So for this I will only show code snippets.

Walkthrough

The Conversation request needs to set alternate_intents to true, so that you can get access to the top 10 intents.
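For reference, a minimal sketch of such a request using the requests library (mirroring the logging example from an earlier post) might look like this; WORKSPACE_ID, the version date and the credentials in ctx are placeholders for your own values.

import json
import requests
from requests.auth import HTTPBasicAuth

# Ask for alternate intents so that up to the top 10 intents come back.
url = ('https://gateway.watsonplatform.net/conversation/api/v1/workspaces/'
       'WORKSPACE_ID/message?version=2017-04-21')
payload = {
    'input': {'text': 'Has my dog been around other dogs and is it certified?'},
    'alternate_intents': True
}
response = requests.post(url, auth=HTTPBasicAuth(ctx.get('username'), ctx.get('password')),
                         json=payload).json()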

Once you get your response back, convert your confidence list into an array.

intent_confidences = list(o['confidence'] for o in response['intents'])

Next the main method will return True if it thinks it is a compound question. It requires numpy + scipy.

import numpy as np
from scipy.cluster.vq import kmeans, vq

def compoundQuestion(intents):
    v = np.array(intents)
    codebook, _ = kmeans(v,2)
    ci, _ = vq(v,codebook)

    # We want to make everything in the top bucket to have a value of 1.
    if ci[0] == 0: ci = 1-ci
    if sum(ci) == 2: return True
    return False

The first three lines will take the array of confidences and generate two centroids. A centroid is the mean of each cluster found. It will then group each of the confidences into one of the two centroids.

Once it runs, ci will look something like this: [ 0, 0, 1, 1, 1, 1, 1, 1, 1, 1 ]. However, the labelling can be the reverse.

The first value is the first intent. So if the first value is 0 we invert the array and then add up all the values:

[ 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 ] => 2 

If we get a value of 2, then the first two intents are related to the question that was entered. Any other value, then we only have one question, or potentially more than two important intents.

Example output from the code:

Has my dog been around other dogs and other people?
> Single intent: DOG_SOCIALISATION (0.9876400232315063)

Has my dog been around others dogs and is it certified?
> This might be a compound question. Intent 1: DOG_SOCIALISATION (0.7363447546958923). Intent 2: DOG_CERTIFICATION (0.6973928809165955).

Has my dog been around other dogs? Has it been around other people?
> Single intent: DOG_SOCIALISATION (0.992318868637085)

Do I need to get shots for the puppy and deworm it?
> This might be a compound question. Intent 1: DOG_VACCINATIONS (0.832768440246582). Intent 2: DOG_DEWORMING (0.49955931305885315).

Of course you still need to write code to take action on both intents, but this might make it a bit easier to handle compound questions.
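For illustration, gluing the helper to the response from earlier might look something like this (the linked sample code does the equivalent in full; it assumes at least one intent came back):

# Report either a single intent, or the top two intents of a compound question.
if compoundQuestion(intent_confidences):
    first, second = response['intents'][0], response['intents'][1]
    print('This might be a compound question. Intent 1: {} ({}). Intent 2: {} ({}).'.format(
        first['intent'], first['confidence'], second['intent'], second['confidence']))
else:
    top = response['intents'][0]
    print('Single intent: {} ({})'.format(top['intent'], top['confidence']))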

Here is the sample code and workspace.

Improving your Intents with Entities.

You might notice that when you update your entities, Conversation says “Watson is training on your recent changes”. What is happening is that intents and entities work together in the NLU engine.

So it is possible to build entities that can be referenced within your intents, something similar to how Dialog entities work.

For this example I am going to use two entities.

  • ENTITY_FOODSTUFF
  • ENTITY_PETS

In my training questions I create the following example.

conv-150118-1

The #FoodStore question list is exactly the same, only the entity name is changed.

Next up, create your entities. It doesn’t matter what the entity itself is called, only that it has one value that mentions the entity identifiers above. I have set @Entity to the same as the value for clarity.

conv-150118-2

conv-150118-3


“What is the point?” you might ask. Well, you will notice that both entities have a value of “fish”.

When I ask “I want to get a fish” I get the following back.

  • FoodStore confidence: 0.5947581078492985
  • Petshop confidence: 0.4052418921507014

So Watson is not sure, as both intents could be the right answer. This is what you would expect.

Now after we delete the “fish” value from both entities, I then add the same training question “I want a fish” to both intents. After Watson has trained and I ask “I want to get a fish”, you get the following back.

  • Petshop confidence: 0.9754140796608233
  • FoodStore confidence: 0.02458592033917674

Oh dear, now it appears to be more confident than it should be. So entities can help keep genuinely ambiguous questions ambiguous, where training alone is not helping.

This is not without its limitations.

Entities are fixed keywords, and the intents will treat them as such. So while it will find “fish” in our example, it won’t recognise “fishes” unless it’s explicitly stated in the entity.

Another thing to be wary of is that all entities are used in the intents. So if a question mentioned “toast”, then @ENTITY_FOODSTUFF becomes a candidate in trying to determine which intent is correct.

The last thing to be aware of is that training questions take priority over entities when it comes to determining what is correct.

If we were to add a training question “I want fishes” to the first example. Then ask the earlier question, you would find that foodstore now takes priority. If we add “I want fishes” to both intents and ask the question “I want to get a fish”, you will get the same results as if the entities never had the word “fish” in it.

This can be handy for forcing common spelling mistakes that may not otherwise be picked up, or clearly defined domain keywords a user may enter (e.g. product IDs).