Manufacturing Intent

Let me start this article with a warning: Manufacturing questions causes more problems than it solves.

Sure, the documentation and many videos say the opposite. But they tend to give examples that have a narrow scope.

Take the car demo, for example. It works because there is a common domain language that everyone who uses a car knows. Someone who has never seen a car before won’t understand what a “window wiper” is, but they may say something like “I can’t see out the window, because it is raining”.

This is why, when building your conversation system, it is important to get questions from people who will actually use the system but don’t know the content. They tend not to know how to ask a question in the system’s terms, and that is exactly the kind of question you need to capture.

But there are times when it can’t be avoided. For example, you might be creating a system that has no end users yet. In this case, manufacturing questions can help bootstrap the system.

There are some things to be aware of.

Manual creation.

This is actually very hard, even for the experienced. Here are the things you need to be aware of.

Your education and culture will shape what you write.

You can’t avoid it. Even if you are aware of this, you will fall back into the same patterns as you progress through creating questions. It’s not easy to see until you have a large sample. Sorting can give you a quick glance, while a bag of words makes it more evident.
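As a rough example of that bag-of-words check, assuming your manufactured questions are sitting in a plain list (the questions here are made up for illustration):

from collections import Counter

# Hypothetical sample of manufactured questions.
questions = [
    'How do I switch on the window wipers?',
    'How do I turn on the window wipers?',
    'Switch the window wipers off.',
]

# Count every word across the whole set; repeated phrasing stands out quickly.
words = Counter(w.strip('?.').lower() for q in questions for w in q.split())
print(words.most_common(10))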

If you know the content, you will write what you know.

Again, having knowledge of the system’s answers will have you writing domain language into the questions. You will use terms that define the system, rather than describing what it does.

If you don’t know the content, use user stories.

If you manage to get someone who could be a representative user, be careful how you ask them to write questions. If they don’t fully understand what you ask, they will use your terms as keywords, rather than their underlying meaning.

Let’s compare two user stories:

  • “Ask questions about using the window wipers.”
  • “It is raining outside while you are driving, and it is getting harder to see. How might you ask the car to help you?”

With the first example, you will find that people use “window wipers”, “wipers” and “window” frequently. Most of the questions will be about switching them on/off.

With the second example, you may end up seeing questions like these:

  • Switch on the windshield wipers.
  • Activate the windscreen wipers.
  • Is my rain sensor activated?
  • Please clear the windows.

Your final application will shape the questions as well.

If you have your question creation team working on desktop machines, they are going to create questions that won’t be the same as those from someone typing on a mobile, or talking to a robot.

The internet can be your friend.

Looking online for similar questions in forums can help you see terms that people may use. For example, all of these mean the same thing: “NCT”, “MOT”, “Smog Test”, “RWC”, “WoF”, “COF”.

But each of those is meaningless to people in a different country.

Automated Creation

A lot of what I have seen in automation tends not to fare much better. If it did, we wouldn’t need people to write questions. 🙂

One technique is to try to create new questions from existing questions. Again, I should stress, this is generally a bad idea, and this example doesn’t work well, but it might give you something to build on.

Take this example.

  • Can my child purchase a puppy?
  • Are children allowed to buy dogs?

From a manual review, we can see the intent is about whether a minor is allowed to make a purchase. Moving over to the code:

For this I am using spaCy, which can check the similarity of one word against another. For example:

import spacy

# 'en' was the model shortcut at the time of writing; newer spaCy versions
# use names like 'en_core_web_md'.
nlp = spacy.load('en')

dog = nlp(u'dog')
puppy = nlp(u'puppy')
print(dog.similarity(puppy))

Will output: 0.760806754875

The higher the number, the closer the words are to each other. By setting a threshold on the value, you can reduce the question to its important words. With a threshold of 0.7, we get:

[Screenshot: word similarity scores for the two example questions, filtered at the 0.7 threshold]
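As a rough sketch of that step, you could compare every word pair across the two example questions and keep only the pairs above the threshold:

import spacy
nlp = spacy.load('en')

q1 = nlp(u'Can my child purchase a puppy?')
q2 = nlp(u'Are children allowed to buy dogs?')

# Compare every word in one question against every word in the other,
# keeping only the pairs that score above the threshold.
for t1 in q1:
    for t2 in q2:
        score = t1.similarity(t2)
        if score > 0.7:
            print(t1.text, t2.text, score)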

Playing with larger questions, you will find that certain parts of speech (POS) are mostly noise. So you can drop the following to remove possible noise (there is a short sketch of this after the list):

  • DET = determiner
  • PART = particle
  • CCONJ = conjunction
  • ADP = adposition
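A rough sketch of that filtering, reusing the nlp object loaded earlier (note that older spaCy models may tag conjunctions as CONJ rather than CCONJ):

# Parts of speech treated as noise; CONJ is included for older spaCy models.
NOISE_POS = {'DET', 'PART', 'CCONJ', 'CONJ', 'ADP'}

def important_tokens(text):
    # Keep only the tokens whose part of speech is not in the noise list.
    return [t for t in nlp(text) if t.pos_ not in NOISE_POS]

print([t.text for t in important_tokens(u'Are children allowed to buy dogs?')])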

Now that you have reduced the question to its main terms, you can build a synonym list off of these, like so:

# Look up the lexeme for "dog" and rank the entire vocabulary by similarity to it.
# (This walks the whole vocab, so it can be slow.)
dog = nlp('dog')
word = nlp.vocab[dog.text]
sym = sorted(word.vocab,
             key=lambda w: word.similarity(w),
             reverse=True)
print('\n'.join([w.orth_ for w in sym[:10]]))

 

Which will print out the following:

  • dog
  • DOG
  • Dog
  • dogs
  • DOGS
  • Dogs
  • puppy
  • Puppy
  • PUPPY
  • pet

As you can see, there is a lot of repetition, so you can remove duplicates. Also be careful to put an upper bound on the sym object when reading from it, as it contains the entire vocabulary.

So after you generate a group of sample synonyms, you end up with something like this.

[Screenshot: table of generated synonyms for each key term]

Now it’s a simple matter of generating a random set of questions from this table. You end up with something like:

  • need children purchasing dogs
  • can kids buying puppy
  • may child purchases dog
  • will children purchase dogs
  • will kids buy pets
  • make children cheap puppies
  • need children purchase puppies

As you can see, they are pretty bad. Not because of the word salad, but because you have a very narrow scope of what can be answered. Still, it can give you enough to have your intent trigger.
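A minimal sketch of that generation step. The synonym table here is hand-built for illustration; in practice it would come from the spaCy output above.

import random

# Hypothetical synonym table: one slot per key term in the original question.
synonym_table = [
    ['can', 'may', 'will', 'need'],
    ['child', 'children', 'kids'],
    ['purchase', 'buy', 'buying'],
    ['puppy', 'dog', 'dogs', 'pets'],
]

# Pick one synonym from each slot to build a manufactured question.
for _ in range(5):
    print(' '.join(random.choice(slot) for slot in synonym_table))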

You can also mitigate this by building a tensor from a number of similar questions, using n-grams instead of single words, adding a custom domain dictionary, and increasing your dictionary terms.

At the end of the day though, they are still going to be manufactured.

Anaphora? I hardly knew her.

One of the most common requests for conversation is being able to understand the running topic of a conversation.

For example:

USER: Can I feed my goldfish peas?

WATSON: Goldfish love peas, but make sure to remove the shells!

USER: Should I boil them first?

The “them” in the second question is called an “anaphora”. It refers to the peas. So you can’t answer the question without first knowing the previous question.

On the face of it, it looks easy. But you have “goldfish”, “peas” and “shells” which could potentially be the reference, and no one wants to boil their goldfish!

So the tricky part is determining the topic. There are a number of ways to approach this.

Entities

The most obvious way is to determine what entity the person mentioned, and store that for use later. This works well if the user actually mentions an entity to work with. However, in a general conversation, the subject may not always be mentioned by the person who asks the question.

Intents

When a question is asked and the intent determined, there may not be an entity involved at all. So this is of limited help in that regard.

That said, there are certain cases where intents have been used with a context in mind. So it can be easily done by creating a suffix to the intent. For example:

    #FEEDING_FISH_e_Peas

In this case, we believe that peas is a common entity that has a relationship to the intent of feeding fish. As a coding convention, we use “_e_” to denote that the following piece of the intent name is an entity identifier.

At the application layer, you can run a regex of “_e_(.*?)$” against the intent name and take the group 1 result. If it is not blank, store it in a context variable.
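As a minimal sketch of that application-layer check (the context dict here is just a stand-in for however you store state):

import re

intent = 'FEEDING_FISH_e_Peas'
context = {}

# Anything after the "_e_" marker is treated as the entity identifier.
match = re.search(r'_e_(.*?)$', intent)
if match and match.group(1):
    context['entity'] = match.group(1)

print(context)   # {'entity': 'Peas'}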

 

Regular Expressions

As before, you can use regular expressions to capture an earlier pattern and store it for use at a later point.

One way to approach this is to have a gateway node that activates before working through the intent tree. Something like this:

[Screenshot: gateway dialog node using a regular expression to capture context]

The downside to this is that a complex regular expression adds a level of maintenance overhead.

You can make maintenance at least a little easier by setting the primary condition check to “true” and then putting the individual checks in the node itself.

[Screenshot: node with the condition set to true and the individual checks inside]

Answer Units

An answer unit is the text response you give back to the end user. Once you have responded with an answer, you have created a lot of context within that answer that the user may follow up on. For example:

[Screenshot: example answer text containing context the user might follow up on]

Even with the context markers in the answer, the end user may never pick up on them. So it is very important to craft your answer so that it drives the user toward the context you have selected.

NLU

The last option is to pass the questions through NLU. This should be able to give you the key terms and phrases to store as context, as well as create knowledge graph information.

I have the context. Now what?

When the user asks a question that does not have context, you will normally get back low-confidence intents, or an irrelevant response.

If you are using intent-based context, you can check the returned intents for a context similar to what you have stored. This also allows you to discard unrelated intents. The results from this are not always stellar, but it is cheaper, needing only the one call.

The other option you can take is to prepend the stored context to the question that was asked and send it back in. For example:

  PEAS !! Can I boil them first?

You can use the “!!” as a marker that the question is one where you are trying to determine context. It’s handy if you need to review the logs later.
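A tiny sketch of building that resend string at the application layer (the variable names are just for illustration):

# Prepend the stored context before resending the question; the "!!" marker
# makes these context retries easy to find in the logs later.
stored_context = 'PEAS'
user_input = 'Can I boil them first?'

resend = '{} !! {}'.format(stored_context, user_input)
print(resend)   # PEAS !! Can I boil them first?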

As time passes…

As the conversation presses on, what the person is talking about can move away from the original context, but the original may still remain the dominant one. One solution is to build a weighted context list.

For example:

"entity_list" : "peas, food, fish"

In this case, we maintain the last three contexts found. As a new context is found, it is added to the front of the list and the oldest one drops off. Of course, this means more API calls, which can cost money.
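A minimal sketch of maintaining that list, assuming you update it each time a new context is found:

from collections import deque

# Keep only the three most recent contexts; a new one pushes the oldest out.
recent = deque(maxlen=3)
for found in ['fish', 'food', 'peas']:
    recent.appendleft(found)

entity_list = ', '.join(recent)
print(entity_list)   # peas, food, fish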

Lowering calls on the tree.

Another option is to create a poor man’s knowledge graph. Let’s say the last two contexts were “bowl” and “peas”. Rather than creating multiple context nodes, you can build a tree which can be passed back to the application layer.

"entity" : "peas->food->care->fish"
...
"entity" : "bowl->care->fish"

You can use something like Tinkerpop to create a knowledge graph (IBM Graph in Bluemix is based on this).
[Diagram: simple knowledge graph linking peas and bowl through food and care to fish]

Now when a low confidence question is found, you can use “bowl”, “peas” to disambiguate, or use “care” as the common entity to find the answer.
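A small sketch of how the application layer might use those path strings to find the common entity (the paths are the examples from above):

paths = ['peas->food->care->fish', 'bowl->care->fish']

# Break each path string into its nodes.
chains = [p.split('->') for p in paths]

# Entities shared by all of the recent contexts ('care' and 'fish' here).
common = set(chains[0]).intersection(*chains[1:])
print(common)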

Talk… like.. a… millennial…

One more common form of anaphora that you have to deal with is how people talk on instant messaging systems. The question is often split across multiple lines.

[Screenshot: a single question split across several chat messages]

Normal conversation systems take one entry and give one response. So this just wrecks their AI heads, because not only do you need to know where the real question stops, but also where the next one starts.

One way to approach this is to capture the average time between each entry from the user. You can do this by passing the timestamps from the client to the back end, which can then build an average of how the user talks. This needs to be done at the application layer.

Sadly, no samples this time, but this should give you some insight into how context is handled in a conversation system.

Removing the confusion in intents.

While the complexity of building Conversation has been reduced for non-developers, one of the areas people can sometimes struggle with is training intents.

When trying to determine how the system performs, it is important to use tried and true methods of validation. If you go by someone just interacting with the system you can end up with what is called “perceived accuracy”. A person may ask three to four questions, and three may fail. Their perception becomes that the system is broken.

Using cross-validation gives you a better feel for how the system is working, as will a blind/test set. But knowing how the system performs and interpreting the results is where it takes practice.

Take this example test report. Each line is a question that was asked, with what the answer should have been versus what answer came back. The X denotes where an answer failed.

[Screenshot: test report of expected versus returned intents, with failures marked X]

Unless you are used to doing this analysis full time, it is very hard to see the bigger picture. For example, is DOG_HEALTH the issue, or is it LITTER_HEALTH + BREEDER_GUARANTEE?

You have to manually analyse each of these clusters and determine what changes are required to be made.

Thankfully, scikit-learn makes your life easier by letting you create a confusion matrix. With this you can see how each intent performs against the others. You end up with something like this:

[Screenshot: confusion matrix of the intents]

Now you can quickly see which intents are getting confused with others, and focus on those to improve your accuracy. In the example above, DOG_HEALTH and BREEDER_INFORMATION offer the best areas to investigate.
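If you want to roll a quick version yourself, a minimal sketch with scikit-learn could look like this (the expected and actual lists here are made up for illustration; in practice they come from your test run):

from sklearn.metrics import confusion_matrix
import pandas as pd

# Hypothetical test results: what each question should return vs. what came back.
expected = ['DOG_HEALTH', 'DOG_HEALTH', 'BREEDER_INFORMATION', 'LITTER_HEALTH']
actual   = ['DOG_HEALTH', 'BREEDER_INFORMATION', 'BREEDER_INFORMATION', 'DOG_HEALTH']

labels = sorted(set(expected) | set(actual))
cm = confusion_matrix(expected, actual, labels=labels)

# Label the rows (expected) and columns (actual) with the intent names.
print(pd.DataFrame(cm, index=labels, columns=labels))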

I’ve created a Sample Notebook which demonstrates the above, so you can modify it to test your own conversation training.

 

To see the world in a grain of sand…

Just a short blog update to inform everyone that I have moved over to a new role in IBM. I now work in IBM Dubai as a Technical Solutions Manager, assisting with the somewhat recently announced AI Lab.

For me, in a sense, it means more (relative) free time, as a lot of my time was previously devoted to travelling. But I am just getting ramped up, so apologies again if I still only maintain my one post a month. 🙂

On the plus side, I will be playing with a wider spectrum of Watson and AI related technologies, and will discuss them here when I get the chance.

In the meantime here are some blogs I recommend.

I would also recommend checking out the Top 100 people to watch list.

 

 

Watson Conversation just got turned up to 11.

I just wanted to start with why my blog doesn’t get updated that often.

  1. Life is hectic. With continual travel, and a number of other things going on, the chance to be able to sit down for peace and quiet is rare. That may change one way or another soon.
  2. I try to avoid stealing other people’s thunder on a range of issues. For example, I recommend keeping an eye on Simon Burns’ blog in relation to the new features.
  3. My role continues to evolve. At the moment it is one of “Expert Services”, where people like myself go around the world to help our customers run the show themselves. This probably warrants its own page, but it is a balance between what is added value to that service, and what everyone should know. (I’m in the latter camp, but I’ve got to eat too.)

The biggest reason for the lack of updates? Development keeps changing things! I’m not the only one that suffers from this. I read a fantastic design patterns document earlier this year by someone in GBS, which went mostly out of date a week or two after I got it.

So it has been the same situation with the latest update to conversation. It is a total game changer. To give you an example of how awesome it is, here is an image of a sample “I want to buy a dog” dialog flow.

[Screenshot: the old “I want to buy a dog” dialog flow, built from many nodes]

Now compare that to the new Slots code. (btw, you may have noticed the UI looks cooler too).

[Screenshot: the same flow rebuilt with Slots, in a single node]

The same functionality took a fraction of the time to build, and handles even more complexity of understanding than the previous version. That single node looks something like this:

[Screenshot: the Slots configuration inside the single node]

That’s all for the moment, better if you play with it yourself. I have some free time shortly, and I will be posting some outstanding stuff.

 

I love Pandas!

Not the bamboo eating kind (but they are cute too), Python Pandas!

But first… Conversation has a new feature!

Logging! 

You can now download your logs from your conversation workspace into a JSON format. So I thought I’d take this moment to introduce Pandas. Some people love the “Improve” UI, but personally I like being able to easily mold the data to what I need.

First, if you are new to Python, I strongly recommend getting a Python notebook like Jupyter set up, or using IBM Data Science Experience. It makes learning so much easier, and you build your applications like actual documentation.

I have a notebook created so you can play along.

Making a connection

As the feature is just out, the SDKs don’t have an API for it yet, so I will be using the requests library.

import json
import requests
from requests.auth import HTTPBasicAuth

# ctx is a dict holding your Conversation service credentials.
url='https://gateway.watsonplatform.net/conversation/api/v1/workspaces/WORKSPACE_ID/logs?version=2017-04-21'
basic_auth = HTTPBasicAuth(ctx.get('username'), ctx.get('password'))
response = requests.get(url=url, auth=basic_auth)
j = json.loads(response.text)

So we have the whole log now sitting in j but we want to make a dataframe. Before we do that however, let’s talk about log analysis and the fields you need. There are three areas we want to analyse in logs.

Quantitative – These are fixed metrics, like number of users, response times, common intents, etc.

Qualitative – This is analysing how the end user is speaking, and how the system interpreted and responded. Some examples would be where the answer returned may give the wrong impression to the end user, or users ask things out of expected areas.

Debugging – This is really looking for coding issues with your conversation tree.

So, on to the fields that cover these areas. These are all contained in the response object of each log entry, with the usage type shown in brackets.

  • input.text (Qualitative): This is what the user or the application typed in.
  • intents[] (Qualitative): The primary intent for the user’s question. You should capture the intent and its confidence into columns. If the value is [], the question was marked irrelevant.
  • entities[] (Quantitative): The entities found in relation to the call. With this and intents, though, it’s important to understand that the application can override these values.
  • output.text[] (Qualitative): The response shown to the user (or application).
  • output.log_messages (Debugging): Capturing this field is handy for spotting coding issues within your conversation tree. SpEL errors show up here if they happen.
  • output.nodes_visited (Debugging, Qualitative): This can be used to see how a progression through the tree happens.
  • context.conversation_id (All): Use this to group a user’s conversation together. In some solutions, however, one-pass calls are sometimes made mid-conversation, so you need to factor that in.
  • context.system.branch_exited (Debugging): This tells you if your conversation left a branch and returned to root.
  • context.system.branch_exited_reason (Debugging): If branch_exited is true, then this tells you why. “completed” means that the branch found a matching node and finished. “fallback” means that it could not find a matching node, so it jumped back to root to find a match.
  • context.??? (All): You may have context variables you want to capture. You can either do these individually, or write code to remove the conversation objects and grab what remains.
  • request_timestamp (Quantitative, Qualitative): When Conversation received the user’s input.
  • response_timestamp (Quantitative, Qualitative): When Conversation responded to the user. You can take the delta of the two timestamps to spot performance issues, but generally keep one of the timestamp fields for analysis.

 

So we create a rows array and fill it with dict objects of the columns we want to capture. For the clarity of the blog post, the sample code below captures only a subset of the fields listed above.

import pandas as pd
rows = []

# for object in Json Logs array.
for o in j['logs']:
    row = {}
 
    # Let's shorthand the response object.
    r = o['response']
 
    row['conversation_id'] = r['context']['conversation_id']
 
    # We need to check the fields exist before we read them. 
    if 'text' in r['input']: row['Input'] = r['input']['text']
    if 'text' in r['output']: row['Output'] = ' '.join(r['output']['text'])
 
    # Again we need to check it is not an Irrelevant response. 
    if len(r['intents']) > 0:
        row['Confidence'] = r['intents'][0]['confidence']
        row['Intent'] = r['intents'][0]['intent']

    rows.append(row)

# Build the dataframe. 
df = pd.DataFrame(rows,columns=['conversation_id','Input','Output','Intent','Confidence'])
df = df.fillna('')

# Display the dataframe. 
df

When this is run, all going well, you end up with something like this:

[Screenshot: the resulting dataframe of log entries]

The notebook has a better report, and is also sorted so it is actually readable.

[Screenshot: the sorted report from the notebook]

Once you have everything you need in the dataframe, you can manipulate it very quickly and easily. For example, let’s say you want to get a count of the intents found.

# Get the counts.
q_df = df.groupby('Intent').count()

# Remove all fields except conversation_id and intents.
# (These column names match the fuller dataframe in the sample notebook;
#  for the dataframe built above you would drop 'Input', 'Output' and 'Confidence' instead.)
q_df = q_df.drop(['request TS', 'response TS', 'User Input', 'Output', 'Confidence', 'Exit Reason', 'Logging'],axis=1)

# Rename the conversation_id field to "Count".
q_df.columns = ['Count']

# Sort and display. 
q_df = q_df.sort_values(['Count'], ascending=[False])
q_df

This produces:

[Screenshot: table of intent counts]

The Jupyter notebook also allows for visualisation of the data, although I haven’t put any in the sample notebook.
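If you did want a quick visual, a bar chart of the intent counts is only a couple of lines on the q_df dataframe above (a sketch, assuming matplotlib is installed):

import matplotlib.pyplot as plt

# Bar chart of how often each intent was found.
q_df.plot(kind='bar', legend=False)
plt.show()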

I have a dream…

Following on from Speech to Text, let’s jump over to Text to Speech. As with Conversation, what can make or break the system is the tone and personality you build into it.

Developers tend to think about the coding, and not the user experience so much.

To give an example, let’s take a piece of a very famous speech from MLK. Small sample so it doesn’t take all day:

I still have a dream. It is a dream deeply rooted in the American dream.

I have a dream that one day this nation will rise up and live out the true meaning of its creed: “We hold these truths to be self-evident, that all men are created equal.”

Let’s listen to Watson as it directly translates.

It sounds like how I act when I am reading a script. 🙂

Now let’s listen to MLK.

You can feel the emotion behind it. The pauses and emphasis adds more meaning to it. Thankfully Watson supports SSML, which allows you to mimic the speech.

For this example I only used two tags. The first was <prosody>, which allows Watson to match MLK’s speaking speed. The other tag was <break>, which allows me to make those dramatic pauses.

Using Audacity, I was able to line up the generated speech against the MLK speech. Then, by selecting the pause areas, I could quickly see the pause lengths.

[Screenshot: Audacity with the generated speech aligned against the MLK recording]

I finally ended up with this:

Audacity also allows you to overlay audio, to get a feel to how it would sound if there were crowds listening.

The final script ends up like this:

<prosody rate="x-slow">I still have a dream.</prosody>
<break time="1660ms"></break>
<prosody rate="slow">It is a dream deeply rooted in the American dream.</prosody>
<break time="500ms"></break>
<prosody rate="slow">I have a dream</prosody>
<break time="1490ms"></break>
<prosody rate="x-slow">that one day</prosody>
<break time="1480ms"></break>
<prosody rate="slow">this nation <prosody rate="x-slow">will </prosody>ryeyes up</prosody>
<break time="1798ms"></break>
<prosody rate="slow">and live out the true meaning of its creed:</prosody>
<break time="362ms"></break>
<prosody rate="slow">"We hold these truths to be self-evident,</prosody>
<break time="594ms"></break>
<prosody rate="slow">that all men are created equal."</prosody>

I have zipped up all the files for download, just in case you are having issues running the audio.

IHaveADream.zip

In closing, if you plan to build a conversational system that speaks to the end user, you also need skills in talking to people, not just in being able to write.

Speech to Text and Conversation

I thought I would take a moment to play with Speech to Text and a utility that was released a few months ago.

The Speech to Text Utils allows you to train S2T using your existing conversational system. To give a quick demo, I got my son to ask about buying a puppy.

I set up some quick Python code to print out results:

import json
from watson_developer_cloud import SpeechToTextV1

# ctx is Service credentials copied from S2T Service. 

s2t = SpeechToTextV1(
    username=ctx.get('username'),
    password=ctx.get('password')
)

def wav(filename, **kwargs):
    with open(filename, 'rb') as audio_file:
        response = s2t.recognize(audio_file, content_type='audio/wav', **kwargs)

    # Return the top transcript, or a placeholder if nothing was recognised.
    if len(response['results']) > 0:
        return response['results'][0]['alternatives'][0]['transcript']
    else:
        return '???'

So testing the audio with the following code:

wav_file = 'p4u-example1.wav'
print('Broadband: {}'.format(wav(wav_file)))
print('NarrowBand: {}'.format(wav(wav_file,model='en-US_NarrowbandModel')))

Gets these results:

Broadband: can I get a puppy 
NarrowBand: can I get a puppy

Of course, the recording is crystal clear, which is why it gets such a good result. So I added some ambient noise from SoundJay to the background, so that it now sounds like it is in a subway.

Running the code above again gets these results.

Broadband: Greg it appropriate 
Narrowband: can I get a phone

Ouch!

Utils to the rescue!

So the purpose of asking about a puppy is that I have a sample conversation system that is about buying a dog. Using that conversation file I did the following.

1: Installed Speech to Text Utils.

2: Before you begin you need to set up the connection to your S2T service (using service credentials).

watson-speech-to-text-utils set-credentials

It will walk you through the username and password.

3: Once that was set up, I then tell it to create a customisation.

watson-speech-to-text-utils corpus-from-workspace puppies4you.json

You need to map to a particular model. For testing, I attached it to en-US_NarrowbandModel and en-US_BroadbandModel.

4: Once it was run, I get the ID numbers for the customisations.

watson-speech-to-text-utils customization-list

Once I have the IDs, I try the audio again:

wav_file='p4u-example2.wav'
print('Broadband: {}'.format(wav(wav_file,customization_id='beeebd80-2420-11e7-8f1c-176db802f8de',timestamps=True)))
print('Narrowband: {}'.format(wav(wav_file,model='en-US_NarrowbandModel',customization_id='a9f80490-241b-11e7-8f1c-176db802f8de')))

This outputs:

Broadband: can I get a puppy 
Narrowband: can I get a phone

So the broadband model now works. With narrowband, the audio quality is likely too poor to work with. There are also more specialised language models for children, done by others, to cope with this.

One swallow does not make a summer.

So this is one example, of one phrase. For real testing, you should test the whole model. In a demonstration from development, it was able to increase an S2T model’s accuracy from around 50% to over 80%.

 

 

Watson V3 Certification

So I got my Watson V3 Certification a week or so ago, and the badge just arrived yesterday.

I sat the mock exam without studying and passed. So I thought I’d try the real exam, and passed that too.

Overall, if you have been working in the Watson group for 3+ years, in a role where you need medium to expert knowledge of all the (non-Health) Watson products, then you are probably going to find the exam OK to pass.

For people who haven’t, it’s not going to be easy. I strongly recommend following the study guide on the test preparation certification page if you plan to get this.

My only quibble with the exam is that the technology changes a lot.

For example, all the design patterns for coding Conversation from before December of last year are not that relevant any more, and will likely change again soon. (Which is part of the reason for the lack of updates on the blog, the other being laziness 🙂 )

So you need to know the currently active technologies, even if they are going away. Plus there will probably be a V4 exam in six months or so.

I’d also like to see more focused certifications for some parts of the Watson Developer Cloud. For example, being an expert at the Discovery Service doesn’t make you an expert at Conversation, and vice versa.

Watson in the black and white room.

Let’s talk about the recent changes to how Watson determines its confidence. It seems to be a hot topic at the moment, and is probably not well understood.

 

Before: 

Imagine that you are Watson. You are in a room with no doors or windows, and you have learned everything about the world from Wikipedia. There are two objects in front of you, a cube and a pyramid.

Now if someone asks you a question, you can use Wikipedia to try and figure out what the answer is, but you can only point to one of the two objects in the room. There is no other answer.

So they may ask, “Which one is an orange?”. You may think that a cube is similar to the Discovery Cube in Orange County. You can also see that a food pyramid has an orange in it. Neither is a direct fit, but you only have two answers.

So you respond: “I am 51% sure that it is this pyramid.”

After:

Now you are in the same room, but this time there is a window that shows you the outside world.

You are asked the same question. You still come to the same conclusion, but because you can see the outside world you know that the answer is not in the room.

This time you respond: “I am confident that neither of these objects is an orange.”

But what about the lower confidence?

The first thing you notice is that the confidence is not as high as before. This in itself is not a bad thing. What matters is the relationship of the top answer to the others found. For example:

[Screenshot: intent list with the top two confidences at 72% and 70%]

You can see in this example that the first answer is 72%, while the next one is 70%. So it is either a compound question, or you need more training to differentiate between the two intents that are close together. In the previous version you could not see this.

The main point to take from this is that the confidence hasn’t actually changed. You are just finally seeing the real confidence.

How does this impact me?

First, Watson will always ignore an intent if its confidence is below 0.2. But with how the confidences were previously determined, it was rare that you would hit this condition.

Now this is possible.

Also, if you have written conditions to determine the real confidence boundary (detailed here), you will need to re-determine the correct boundaries.

Lastly, if no intent is matched, then you get back an empty intents list.
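As a rough sketch of what that check might look like at the application layer (response and handle_intent here are placeholders for your own code, and the threshold is only an example):

# response comes back from the Conversation message call; handle_intent is your own code.
THRESHOLD = 0.2   # example value only; tune it against your own test set

intents = response.get('intents', [])
if not intents or intents[0]['confidence'] < THRESHOLD:
    # Empty intents list or a low-confidence match: treat it as "not understood".
    reply = "Sorry, I didn't catch that. Could you rephrase?"
else:
    reply = handle_intent(intents[0]['intent'])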

In closing

Although the new feature is considerably better, always test before you deploy!

As for the title reference: