Testing your intents

So this really only helps if you are working with a large number of intents, and you have not used entities as your primary method of determining intent.

First, let's talk about perceived accuracy and what this is trying to solve. Perceived accuracy is where someone types in a few questions they know the answer to, and then, depending on that manual test, perceives the system to be working or failing.

It puts the person training the system into a false sense of how it is performing.

If you have done the Watson Academy training for Conversation, you will hear it mention K-fold testing. For this blog post I'm going to skip the details, as I briefly covered them before.

K-fold cross validation: you split your training set into K random segments, use one segment to test and the rest to train, then work your way through all of them. This method tests everything, but it is extremely time-consuming. You also need to pick a good size for K so that you can test correctly.
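
To make that concrete, here is a rough sketch of the loop. The train_fold() and classify() helpers are hypothetical stand-ins for "retrain a temporary workspace on these questions" and "send a question and return the top intent name"; the splitting is the part that matters.

from sklearn.model_selection import KFold
import numpy as np

# (question, intent) pairs exported from your workspace -- illustrative only.
data = np.array([
    ("I want to buy a cat", "PurchaseCat"),
    ("Can I get a puppy?", "PurchaseDog"),
    ("I want a goldfish", "PurchaseFish"),
    # ... the rest of your training questions
])

scores = []
for train_idx, test_idx in KFold(n_splits=3, shuffle=True, random_state=1).split(data):
    workspace = train_fold(data[train_idx])               # hypothetical: train on K-1 folds
    correct = sum(classify(workspace, question) == intent  # hypothetical: test the held-out fold
                  for question, intent in data[test_idx])
    scores.append(correct / float(len(test_idx)))

print("Average accuracy: {:.2f}".format(sum(scores) / len(scores)))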

K-fold works well by itself if you have a large training set that has come from real-world, representative users. You will find this rarely happens, so you should use it in conjunction with a blind set.

Previously I didn’t cover how you actually do the test. So with that, here is the notebook giving a demonstration:

 

 

Pushing My Buttons

So this is a long-time pet peeve, but recently I have seen a load of these in succession. I am sure a lot of people who know me are going to read this and think "He's talking about me". The truth is there is no one person I am pointing my finger at.

Let me start with what triggered this post. Have a look at this screenshot. There are three things wrong with it; one of them is not visible, but you can guess.

poorUXdesign

Full disclosure: this is a competitor's chat bot, and it is a common pattern I have seen on that platform. But I have also seen people do this with Watson Conversation.

Did you guess the issues? 

Issue 1: Never ask the end user whether you answered them correctly. If your system is well trained and tested, then you already know whether it answered well.

For those thinking of a rebuttal: imagine you rang a customer support person and they asked you "Did I answer you correctly?" every time they gave an answer. What would you do? More than likely you would ask to speak to someone who does know what they are talking about.

If you really need to get feedback, make it subtle, or ask for a survey at the end.

Issue 2: BUTTONS. I don't know who started this button trend, but it has to die. You are not building a cognitive conversational system. You are building an application. You don't need an AI for buttons; any average developer can build you a button-based "Choose your own adventure".

Issue 3: Not visible in the image is that you are stuck until you click on yes or no. You couldn't type yes or no, or "I am not sure". I have also seen cases where the answer is poorly written and the person takes the wrong answer as right, so what happens then? And selecting yes or no does nothing to progress the conversation.

So what is the root cause in all this?  From what I have seen normally it is one thing.

Developers.

Because older chat bots required a developer to build, things have progressed along those lines for some time. In fact some chat bot companies tout the fact that their platform is developer-oriented, and in some cases only offer code-based systems.

I've also had to listen to some developers tell me how Watson Conversation sucks (because "tensor flow"), or that they could write better. I normally tell them to try.

Realistically, to make a good chat bot the developer is generally far down the food chain. Watson Conversation is targeted at your non-technical person.

Here's a little graphic to help.

beep beep robot

Now, your chances of getting all of these people are slim, but the people you do get should have some skills in these areas. Let's expand on each one.

Business Analyst

By far the most important, certainly at the start of the project.

Most chat bot projects fail because no one who knows the business has objectively looked at what you are trying to solve, and whether it is even worth the time.

By the same token, I have seen two business analysts create a conversational bot that on the face of it looked simple, but they could show that it saved over a million euros a year. All built in a day and a half. Because they knew the business and where to get the data.

Conversational Copywriter

Normally even getting a copywriter makes a huge difference, but one with actual conversational experience makes the solution shine. It's the difference between something clinical and something your end user can form an emotional attachment to.

Behavioural Scientist

Another thing I see all the time: you get an issue in the chat conversation that requires some complexity to solve. So you have your developer telling you how they can build something custom and complex to solve the issue (probably with tensor flow somewhere in all of it).

Your behavioural expert on the other hand will suggest changing the message you tell the end user. It’s really that simple, but often missed by people without experience in this area.

Subject Matter Expert (SME)

To be fair, on the projects I've seen there is normally an SME there. But there are different levels of SME. For example, your expert in the material may not be the expert who deals with the customer.

But it is dangerous to think that just because you have a manual you can reference, you are capable of building a system that can answer questions as if it were an SME.

Data Scientist

While you might not need a full-blown one, all good conversational solutions are data driven: what people ask, the behaviours they exhibit, and the needs being met. Having someone able to sift through the existing data and make sense of it helps make a good system.

Also, on almost every engagement I've been on, people will tell you what they think the end user will say or do. Often that is not the case, and the data shows it.

UI/UX

What the conversational copywriter does for an engaging conversation, the UI/UX person does for the system. If you are using existing channels like Facebook, Skype, Messenger, Slack, etc., then you probably don't need to worry as much. But it's still possible to create something that upsets the user without good UX.

It’s also a broad skill area. For example, UX for Web is very different to Mobile, IVR, and Robots.

Machine Learning

Watson Conversation abstracts the ML layer from the end user. You only need to know how to cluster questions correctly. But knowing how to do K-fold cross validation, or the importance of blind sets, helps in training the system well.

It also helps if your developers have at least a basic understanding of machine learning.

I often see non-ML developers trying to fix clusters with comments like "It used this keyword 3 times, so that's why it picked this over that", which is not how it works at all.

It also prevents your developers (if they code the bot) from creating something that is entity-heavy. Non-ML developers seem to like entities, as they can wrap their heads around them. Fixed keywords and regex all make sense to a developer, but in the long run they make the system unmaintainable (and basically defeat the purpose of using Watson Conversation).

Natural Language Processing (NLP)

I've made this the smallest. There was a time, certainly with the early versions of Watson, when you needed these skills. Not so much anymore. Still, it's good to understand the basics of NLP, certainly for entities.

Developer

In the scheme of things, there will always be a place for the developer.

You have UI development, application layer, back-end and integration, automation, testing, and so on.

Just development skills alone will not help you in building something that the end user can feel a connection to.

… and please, stop using buttons. 

I have no confidence in Entities.

I have something I need to confess. I have a personal hatred of Entities. At least in their current form.

There is a difference between deterministic and probabilistic programming that a lot of developers new to Watson find hard to adjust to. Entities bring them back to that warm place of normal development.

For example, you are tasked with creating a learning system for selling Cats, Dogs, and Fishes. Collecting questions, you get this:

  • I want to get a kitten
  • I want to buy a cat
  • Can I get a calico cat?
  • I want to get a siamese cat
  • Please may I have a kitty?
  • My wife loves kittens. I want to get her one as a present.
  • I want to buy a dog
  • Can I get a puppy?
  • I would like to purchase a puppy
  • Please may I have a dog?
  • Sell me a puppy
  • I would love to get a hound for my wife.
  • I want to buy a fish
  • Can I get a fish?
  • I want to purchase some fishes
  • I love fishies
  • I want a goldfish

The first instinct is to create a single intent of #PURCHASE_ANIMAL and then create entities for the cats, dogs and fishes, because it's easier to wrap your head around entities than it is to wonder how Watson will respond.

So you end up with something like this:

20170910entities

Wow! So easy! Let's set up our dialog. To make it easier, let's use a slot.

20170910entitiesA

In under a minute, I have created a system that can help someone pick an animal to buy. You even test it and it works perfectly.

IT’S A TRAP!

First, the biggest red flag here is that you have now turned your conversation into a deterministic system.

Still doing cross validation to test your intents? Give up, it’s pointless.

You can break it just by typing something like "I want to buy a bulldog". You are stuck in an endless loop.

The easiest solution is to tell the person what to type, or link/button it. But it doesn’t exhibit intelligence (and I hate buttons more than I hate entities 🙂 ).

The other option is to add “bulldog” to the @Animals:Dog entity. But when you go down that rabbit hole, you could realistically add the following.

  • 500+ types of breeds.
  • Common misspellings of those words.
  • Plurals of each breed.
  • Slang, variations and nicknames of those animals.

You are easily into the thousands of keywords to match, and all it takes is one person to make a typo you don’t have in the list and it still won’t work.

Using entities in a probabilistic way.

All is not lost! You can still use entities, and keep your system intelligent. First we break up the intents into the types of animals like so:

20170910entitiesB

So now if I type “I want to buy a bulldog” I get #PurchaseDog with 68% confidence. Which is great, as I didn’t even train it on that word.

So next I try “I want to buy a pet” and I get #PurchaseCat with 55% confidence.

20170910entitiesCat

Hmm, great for cat lovers, but we want Conversation to be less sure about this.

So we create the entities as before for Cat, Dog, Fish. You can use the same values.

Next before you check intents, add a node with the following condition.

20170910entitiesC

This basically ensures that irrelevant hasn't been hit, and then checks that none of the animal entities have been mentioned.

Then in your JSON response you add the following code.

{
    "context": {
        "adjust_confidence": "<? intents[0].confidence = intents[0].confidence - 0.36 ?>"
    },
    "output": {
        "text": {
            "values": [],
            "selection_policy": "sequential"
        }
    }
}

The important part is the "adjust_confidence" context variable. This will lower the first intent's confidence by 0.36 (36%).

We set the node to jump to the next node in line, so it can check the intents.

Now we get "I don't understand" for the pet question. Bulldog still works, as it doesn't fall below the 20% threshold.

Demo Details.

I used 36% for the demo, but this will vary in other projects. Also, if your confidence level is too high, you can pick a smaller value and then have another check for a lower bound. In other words, set your conversation to ignore any intent with a confidence lower than 30%, and then set your adjustment confidence to -10%.
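
If you handle that lower bound at the application layer instead, a minimal sketch (assuming a parsed response object from the message API) might look like this:

LOWER_BOUND = 0.3

# Drop anything Conversation was not confident enough about.
intents = [i for i in response['intents'] if i['confidence'] >= LOWER_BOUND]

if not intents:
    # Treat it the same as an irrelevant hit.
    reply = "Sorry, I don't understand. Can you rephrase that?"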

Advantages

Using this approach, you don't need to worry as much about training your entities, only your intents. This allows you to build a probabilistic model which isn't impacted unless it is unsure to begin with.

I have supplied a Sample Conversation which demonstrates the above.

Anaphora? I hardly knew her.

One of the common requests for Conversation is being able to understand the running topic of a conversation.

For example:

USER: Can I feed my goldfish peas?

WATSON: Goldfish love peas, but make sure to remove the shells!

USER: Should I boil them first?

The "them" in the second question is called an anaphora; it refers to the peas. So you can't answer the question without first knowing the previous one.

On the face of it, it looks easy. But you have "goldfish", "peas", and "shells", any of which could potentially be the reference, and no one wants to boil their goldfish!

So the tricky part is determining the topic. There are a number of ways to approach this.

Entities

The most obvious way is to determine what entity the person mentioned, and store that for use later. This works well if the user actually mentions an entity to work with. However, in a general conversation the subject may not always be mentioned by the person who asks the question.
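
At the application layer, a minimal sketch of this (assuming the parsed JSON response from the message API and the context object you send back) is simply:

# Remember the most recent entity the user mentioned, for later turns.
if response['entities']:
    context['last_topic'] = response['entities'][0]['value']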

Intents

When asking a question and determining the intent, there may not always be an entity involved. So this is of limited help in that regard.

That said, there are certain cases where intents are created with a context in mind. This can easily be done by adding a suffix to the intent. For example:

    #FEEDING_FISH_e_Peas

In this case we believe that peas is a common entity that has a relationship to the intent of Feeding Fish. For coding convention we use “_e_” to denote that the following piece of the intent name is an entity identifier.

At the application layer, you can do a regex on the intent name “_e_(.*?)$” for the group 1 result. If it is not blank, store it in a context variable.
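
As a quick sketch in Python (assuming the parsed response and the context object you send back):

import re

intent_name = response['intents'][0]['intent']   # e.g. "FEEDING_FISH_e_Peas"

match = re.search(r"_e_(.*?)$", intent_name)
if match and match.group(1):
    context['topic_entity'] = match.group(1)      # store "Peas" for later turns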

 

Regular Expressions

Like before, you can use regular expressions to capture an earlier pattern to store it at a later point.

One way to approach this is have a gateway node that activates before working through the intent tree. Something like this:

example1507

The downside to this is the level of complexity you have to maintain in the regular expression.

You can at least make maintenance a little easier by setting the primary condition check to "true" and then doing the individual checks in the node itself.

example1507a

Answer Units

An answer unit is the text response you give back to the end user. Once you have responded with an answer, you have created a lot of context within that answer that the user may follow up on. For example:

example1507b

Even with the context markers in the answer, the end user may never pick up on them. So it is very important to craft your answer so that it drives the user toward the context you have selected.

NLU

The last option is to pass the questions through NLU. This should be able to give you the key terms and phrases to store as context, as well as create knowledge graph information.

I have the context. Now what?

When the user asks a question that does not have context, you will normally get back low-confidence intents, or an irrelevant response.

If you are using intent-based context, you can check the returned intents for a context similar to what you have stored. This also allows you to discard unrelated intents. The results from this are not always stellar, but it is a cheaper, one-time call.

The other option you can take is to preload the question that was asked and send it back. For example:

  PEAS !! Can I boil them first?

You can use the !! as a marker that your question is trying to determine context. Handy if you need to review the logs later.
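
A minimal sketch of that retry, assuming hypothetical top_confidence, user_text and send_to_conversation() names (the last being a thin wrapper around the message API call):

# If confidence is low and we have a stored topic, re-ask with the topic prepended.
if top_confidence < 0.3 and context.get('topic_entity'):
    retry_text = "{} !! {}".format(context['topic_entity'].upper(), user_text)
    response = send_to_conversation(retry_text, context)   # hypothetical helper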

As time passes…

As the conversation presses on, what the person is talking about can move away from the original context, but that context may still remain dominant. One solution is to build a weighted context list.

For example:

"entity_list" : "peas, food, fish"

In this case we maintain the last three contexts found. As a new context is found, it is pushed onto the front of the list (LIFO) and the oldest drops off. Of course this means more API calls, which can cost money.
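
A minimal sketch of maintaining that list at the application layer (names are illustrative):

def push_topic(entity_list, new_topic, limit=3):
    """Keep the newest topic at the front and cap the list at `limit` entries."""
    topics = [t.strip() for t in entity_list.split(',')] if entity_list else []
    if new_topic in topics:
        topics.remove(new_topic)
    topics.insert(0, new_topic)
    return ', '.join(topics[:limit])

# push_topic("peas, food, fish", "bowl")  ->  "bowl, peas, food"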

Lowering calls on the tree.

Another option is to create a poor man's knowledge graph. Let's say the last two contexts were "bowl" and "peas". Rather than creating multiple context nodes, you can build a tree which can be passed back to the application layer.

"entity" : "peas->food->care->fish"
...
"entity" : "bowl->care->fish"

You can use something like Apache TinkerPop to create a knowledge graph (IBM Graph in Bluemix is based on this).
example1507c

Now when a low confidence question is found, you can use “bowl”, “peas” to disambiguate, or use “care” as the common entity to find the answer.

Talk… like.. a… millennial…

One more common form of anaphora that you have to deal with is how people talk on instant messaging systems. The question is often split across multiple lines.

example1507d

Normal conversation systems take one entry and give one response, so this just wrecks their AI heads. Not only do you need to know where the real question stops, but also where the next one starts.

One way to approach this is to capture the average timing between each entry from the user. You can do this by passing the timestamps from the client to the backend. The backend can then build an average of how quickly the user talks. This needs to be done at the application layer.
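
A minimal sketch of that averaging, assuming the client sends ISO-8601 timestamps with each message:

from datetime import datetime

def average_gap_seconds(timestamps):
    """Average gap between a user's messages, from client-supplied timestamps."""
    times = [datetime.strptime(t, "%Y-%m-%dT%H:%M:%S.%fZ") for t in timestamps]
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps) if gaps else None

# If the next message arrives well inside this average gap, treat it as a
# continuation of the previous entry rather than a new question.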

No sample workspace this time, but this should give you some insight into how context is handled in a conversation system.

Removing the confusion in intents.

While the complexity of building with Conversation has been reduced for non-developers, one of the areas people can sometimes struggle with is training intents.

When trying to determine how the system performs, it is important to use tried and true methods of validation. If you go by someone just interacting with the system you can end up with what is called “perceived accuracy”. A person may ask three to four questions, and three may fail. Their perception becomes that the system is broken.

Using cross validation gives you a better feel for how the system is working, as will a blind / test set. But knowing how the system performs and trying to interpret the results is where it takes practice.

Take this example test report. Each line is a question asked, showing what the answer should be versus what answer came back. The X denotes where an answer failed.

report_0707

Unless you are used to doing this analysis full time, it is very hard to see the bigger picture. For example, is DOG_HEALTH the issue, or is it LITTER_HEALTH + BREEDER_GUARANTEE?

You have to manually analyse each of these clusters and determine what changes are required.

Thankfully scikit-learn makes your life easier by letting you create a confusion matrix. With this you can see how each intent performs against the others, so you end up with something like this:

confusion_matrix

So now you can quickly see which intents are getting confused with others, and focus on those to improve your accuracy. In the example above, DOG_HEALTH and BREEDER_INFORMATION offer the best areas to investigate.
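
If you want to build one outside of the notebook, here is a minimal sketch using scikit-learn, with made-up expected and returned intents purely for illustration:

from sklearn.metrics import confusion_matrix
import pandas as pd

# Expected intent per test question, and what Conversation actually returned.
y_true = ["DOG_HEALTH", "LITTER_HEALTH", "BREEDER_GUARANTEE", "DOG_HEALTH"]
y_pred = ["DOG_HEALTH", "DOG_HEALTH", "BREEDER_GUARANTEE", "BREEDER_INFORMATION"]

labels = sorted(set(y_true) | set(y_pred))
cm = confusion_matrix(y_true, y_pred, labels=labels)

# Rows are the expected intents, columns what was returned.
print(pd.DataFrame(cm, index=labels, columns=labels))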

I’ve created a Sample Notebook which demonstrates the above, so you can modify to test your own conversations training.

 

Watson Conversation just got turned up to 11.

I just wanted to start with why my blog doesn’t get updated that often.

  1. Life is hectic. With continual travel, and a number of other things going on, the chance to be able to sit down for peace and quiet is rare. That may change one way or another soon.
  2. I try to avoid stealing other people's thunder on a range of issues. For example, I recommend keeping an eye on Simon Burns' blog in relation to the new features.
  3. My role continues to evolve; at the moment it is one of "Expert Services", where people like myself go around the world to help our customers run the show themselves. This probably warrants its own page, but it is a balance between what adds value to that service and what everyone should know. (I'm in the latter camp, but I've got to eat too.)

The biggest reason for the lack of updates? Development keeps changing things! I'm not the only one who suffers from this. I read a fantastic design patterns document earlier this year by someone in GBS, which went mostly out of date a week or two after I got it.

It has been the same situation with the latest update to Conversation. It is a total game changer. To give you an example of how awesome it is, here is an image of a sample "I want to buy a dog" dialog flow.

pre-slots

Now compare that to the new Slots code. (btw, you may have noticed the UI looks cooler too).

slots

The same functionality took a fraction of the time, and handles even more complexity than the previous version. That single node looks something like this:

slots2

That’s all for the moment, better if you play with it yourself. I have some free time shortly, and I will be posting some outstanding stuff.

 

I love Pandas!

Not the bamboo-eating kind (but they are cute too): Python pandas!

But first… Conversation has a new feature!

Logging! 

You can now download your logs from your conversation workspace into a JSON format. So I thought I’d take this moment to introduce Pandas. Some people love the “Improve” UI, but personally I like being able to easily mold the data to what I need.

First, if you are new to Python, I strongly recommend setting up a Python notebook like Jupyter, or using IBM Data Science Experience. It makes learning so much easier, and you build your applications like actual documentation.

I have a notebook created so you can play along.

Making a connection

As the feature is just out, the SDKs don't have the API for it yet, so I will be using the requests library.

import json, requests
from requests.auth import HTTPBasicAuth

# ctx holds the service credentials, e.g. {'username': '...', 'password': '...'}
url='https://gateway.watsonplatform.net/conversation/api/v1/workspaces/WORKSPACE_ID/logs?version=2017-04-21'
basic_auth = HTTPBasicAuth(ctx.get('username'), ctx.get('password'))
response = requests.get(url=url, auth=basic_auth)
j = json.loads(response.text)

So we now have the whole log sitting in j, but we want to make a dataframe. Before we do that, however, let's talk about log analysis and the fields you need. There are three areas we want to analyse in logs.

Quantitative – These are fixed metrics, like number of users, response times, common intents, etc.

Qualitative – This is analysing how the end user is speaking, and how the system interpreted and responded. Some examples would be where the answer returned may give the wrong impression to the end user, or users ask things out of expected areas.

Debugging – This is really looking for coding issues with your conversation tree.

So, on to the fields that cover these areas. These are all contained in the response object of each log entry.

  • input.text (Qualitative) – This is what the user or the application typed in.
  • intents[] (Qualitative) – This tells you the primary intent for the user's question. You should capture the intent and confidence into columns. If the value is [] then it was irrelevant.
  • entities[] (Quantitative) – The entities found in relation to the call. With this and intents, it's important to understand that the application can override these values.
  • output.text[] (Qualitative) – This is the response shown to the user (or application).
  • output.log_messages (Debugging) – Capturing this field is handy to look for coding issues within your conversation tree. SpEL errors show up here if they happen.
  • output.nodes_visited (Debugging, Qualitative) – This can be used to see how a progression through the tree happens.
  • context.conversation_id (All) – Use this to group a user's conversation together. In some solutions, however, one-pass calls are sometimes done mid-conversation, so if you do this you need to factor that in.
  • context.system.branch_exited (Debugging) – This tells you if your conversation left a branch and returned to root.
  • context.system.branch_exited_reason (Debugging) – If branch_exited is true, then this tells you why. "completed" means that the branch found a matching node and finished. "fallback" means that it could not find a matching node, so it jumped back to root to find a match.
  • context.??? (All) – You may have context variables you want to capture. You can either do these individually, or write code to remove the Conversation system objects and grab what remains.
  • request_timestamp (Quantitative, Qualitative) – When Conversation received the user's message.
  • response_timestamp (Quantitative, Qualitative) – When Conversation responded to the user. You can take a delta to check for performance issues, but generally keep one of the timestamp fields for analysis.

 

So we create a rows array and fill it with dict objects of the columns we want to capture. For clarity of the blog post, the sample code below only captures a subset of the fields.

import pandas as pd
rows = []

# for object in Json Logs array.
for o in j['logs']:
    row = {}
 
    # Let's shorthand the response object.
    r = o['response']
 
    row['conversation_id'] = r['context']['conversation_id']
 
    # We need to check the fields exist before we read them. 
    if 'text' in r['input']: row['Input'] = r['input']['text']
    if 'text' in r['output']:row['Output'] = ' '.join(r['output']['text'])
 
    # Again we need to check it is not an Irrelevant response. 
    if len(r['intents']) > 0:
        row['Confidence'] = r['intents'][0]['confidence']
        row['Intent'] = r['intents'][0]['intent']

    rows.append(row)

# Build the dataframe. 
df = pd.DataFrame(rows,columns=['conversation_id','Input','Output','Intent','Confidence'])
df = df.fillna('')

# Display the dataframe. 
df

When this is run, all going well you end up with something like this:

report1-1804

The notebook has a better report, and is also sorted so it is actually readable.

report2-1804

Once you have everything you need in the dataframe, you can manipulate it very quickly and easily. For example, let's say you want to get a count of the intents found.

# Get the counts.
q_df = df.groupby('Intent').count()

# Remove all columns except conversation_id (Intent is now the index). 
q_df = q_df.drop(['Input', 'Output', 'Confidence'],axis=1)

# Rename the conversation_id field to "Count".
q_df.columns = ['Count']

# Sort and display. 
q_df = q_df.sort_values(['Count'], ascending=[False])
q_df

This gives us the following:

report3-1804

The Jupyter notebook also allows for visualisation of the data, although I haven't put any in the sample notebook.
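
If you want a quick visual, a minimal sketch on the q_df from above (assuming matplotlib is installed) would be:

import matplotlib.pyplot as plt

# Render the intent counts as a bar chart.
q_df.plot(kind='bar', legend=False, title='Intent counts')
plt.show()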

Watson in the black and white room.

Let's talk about the recent changes to how Watson determines its confidence. It seems to be a hot topic at the moment, and is probably not well understood.

 

Before: 

Imagine that you are Watson. You are in a room with no doors or windows, and you have learned everything about the world from Wikipedia. There are two objects in front of you: a cube and a pyramid.

Now if someone asks you a question, you can use Wikipedia to try to figure out what the answer is, but you can only point to one of the two objects in the room. There is no other answer.

So they may ask "Which one is an orange?". You may think that a cube is similar to the Discovery Cube in Orange County. You can also see that a food pyramid has an orange in it. Neither is a direct fit, but you only have two answers.

So you respond: “I am 51% sure that it is this pyramid”

After:

Now you are in the same room, but this time there is a window that shows you the outside world.

You are asked the same question. You still come to the same conclusion, but because you can see the outside world you know that the answer is not in the room.

This time you respond: “I am confident that neither of these objects are an Orange”

But what about the lower confidence?

The first thing you notice is that the confidence is not as high as before. This in itself is not a bad thing. What matters is the relationship of the answer to the other answers found. For example:

conv060217-2

You can see in this example the first answer is 72%, while the next one is 70%. So it is either a compound question, or you need training to differentiate between the two intents that are close together. In the previous version you could not see this.

The main point to take from this: the confidence hasn't actually changed. You are just finally seeing the real confidence.

How does this impact me?

First, Watson will always ignore an intent if the confidence is below 0.2. But with how the confidences were previously determined, it was rare that you would hit this condition.

Now this is possible.

Also, if you have written conditions to determine the real confidence boundary (detailed here), you will need to re-determine the correct boundaries.

Lastly, if no intent is matched, you get an empty intents list.

In closing

Although the new feature is considerably better, always test before you deploy!

As for the title reference: 

Compound Questions

One problem that is tricky to solve is when a user asks two questions at once. Previously, some solutions were to look for conjunctions ("and") or question marks, and then try to guess whether it is really more than one question.

But you could end up with a question like “Has my dog been around other dogs and other people?”. This is clearly one question.

With the new Conversation feature of "absolute confidences", it is now possible to detect this. In earlier versions of Conversation, all intent confidences would add up to 1.0.

Now each confidence has its own value. Taking the earlier example, if we map the confidences onto a chart, we get:

conv060217-1

Visually we can see that the first and second intent are not related. The next sentence “Has my dog been around other dogs and is it certified?” is two questions. When we chart this we see:

conv060217-2

It is very easy to see that there are two questions. So how do you do it in your code?

You can use a clustering technique called K-means. This will cluster your data into K sets. In this case we have "important intents" and "unimportant intents". Two groups means K = 2.

For this demonstration I am going to use Python, but K-means exists in a number of languages. I have a sample of the full code and an example conversation workspace, so here I will only show code snippets.

Walkthrough

The Conversation request needs to set alternate_intents to true, so that you get access to the top 10 intents.
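
As a sketch, using the same requests style as the logging example earlier (WORKSPACE_ID and the credentials are placeholders):

import requests
from requests.auth import HTTPBasicAuth

url = 'https://gateway.watsonplatform.net/conversation/api/v1/workspaces/WORKSPACE_ID/message?version=2017-04-21'
payload = {'input': {'text': 'Has my dog been around other dogs and is it certified?'},
           'alternate_intents': True}
response = requests.post(url, json=payload,
                         auth=HTTPBasicAuth('username', 'password')).json()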

Once you get your response back, convert your confidence list into an array.

intent_confidences = list(o['confidence'] for o in response['intents'])

Next the main method will return True if it thinks it is a compound question. It requires numpy + scipy.

import numpy as np
from scipy.cluster.vq import kmeans, vq

def compoundQuestion(intents):
    v = np.array(intents)
    codebook, _ = kmeans(v, 2)   # find the two centroids
    ci, _ = vq(v, codebook)      # assign each confidence to a centroid

    # We want to make everything in the top bucket have a value of 1.
    if ci[0] == 0: ci = 1 - ci
    if sum(ci) == 2: return True
    return False

The first three lines will take the array of confidences and generate two centroids. A centroid is the mean of each cluster found. It will then group each of the confidences into one of the two centroids.

Once it runs ci will look something like this: [ 0, 0, 1, 1, 1, 1, 1, 1, 1, 1 ] . This however can be the reverse.

The first value is the first intent. So if the first value is 0 we invert the array and then add up all the values:

[ 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 ] => 2 

If we get a value of 2, then the first two intents are related to the question that was entered. For any other value, we have either only one question, or potentially more than two important intents.

Example output from the code:

Has my dog been around other dogs and other people?
> Single intent: DOG_SOCIALISATION (0.9876400232315063)

Has my dog been around others dogs and is it certified?
> This might be a compound question. Intent 1: DOG_SOCIALISATION (0.7363447546958923). Intent 2: DOG_CERTIFICATION (0.6973928809165955).

Has my dog been around other dogs? Has it been around other people?
> Single intent: DOG_SOCIALISATION (0.992318868637085)

Do I need to get shots for the puppy and deworm it?
> This might be a compound question. Intent 1: DOG_VACCINATIONS (0.832768440246582). Intent 2: DOG_DEWORMING (0.49955931305885315).

Of course you still need to write code to take action on both intents, but this might make it a bit easier to handle compound questions.

Here is the sample code and workspace.

Improving your Intents with Entities.

You might notice that when you update your entities, Conversation says "Watson is training on your recent changes". What is happening is that intents and entities work together in the NLU engine.

So it is possible to build entities that can be referenced within your intents, somewhat similar to how Dialog entities work.

For this example I am going to use two entities.

  • ENTITY_FOODSTUFF
  • ENTITY_PETS

In my training questions I create the following example.

conv-150118-1

The #FoodStore question list is exactly the same, only the entity name is changed.

Next up, create your entities. It doesn't matter what the entity itself is called, only that it has one value that mentions the entity identifiers above. I have @Entity set to the same as the value for clarity.

conv-150118-2

conv-150118-3

 

"What is the point?" you might ask. Well, you will notice that both entities have a value of "fish".

When I ask “I want to get a fish” I get the following back.

  • FoodStore confidence: 0.5947581078492985
  • Petshop confidence: 0.4052418921507014

So Watson is not sure, as both intents could be the right answer. This is what you would expect.

Now we delete the "fish" value from both entities, and I add the same training question "I want a fish" to both intents. After Watson has trained, asking "I want to get a fish" gives the following back.

  • Petshop confidence: 0.9754140796608233
  • FoodStore confidence: 0.02458592033917674

Oh dear, now it appears to be more confident than it should be. So entities can help keep questions appropriately ambiguous where training alone does not.

This is not without its limitations.

Entities are fixed keywords, and the intents will treat them as such. So while it will find “fish” in our example, it won’t recognise “fishes” unless it’s explicitly stated in the entity.

Another thing to be wary of is that all entities are used in the intents. So if a question mentioned “toast”, then @ENTITY_FOODSTUFF becomes a candidate in trying to determine which intent is correct.

The last thing to be aware of is that training questions take priority over entities when it comes to determining what is correct.

If we were to add a training question "I want fishes" to the first example and then ask the earlier question, you would find that FoodStore now takes priority. If we add "I want fishes" to both intents and ask "I want to get a fish", you will get the same results as if the entities never had the word "fish" in them.

This can be handy for forcing common spelling mistakes that may not otherwise be picked up, or clearly defined domain keywords a user may enter (e.g. a product ID).