Annotate it

So this was an interesting problem that was posed to me. Take the following intent below.

intentlist_2307.png

This intent will try to detect where someone is asking to select results by criteria. Next up let’s create the entities based on the intents. I will be using the original method of creating entities. You end up with this.

entities_2307.png

So let’s test this out…

test1_2307.png

Oh dear! It is seeing “it” as “IT Department”. This is not good.

Thankfully Watson Assistant just recently got Contextual entities. The new engine is able to understand the nature of what the entity really means, as long as you annotate it.

So going into the intent again, I have selected each word and marked it up like so:

annotate_2307

Now let’s test it again.

test2_2307.png

Now it understands that it is not the IT department. Let’s try again.

test3_2307.png

Woah!

It not only worked, but it created a new entity on the fly.

So once you teach it the patterns, it will capture the entities for you. This is currently on by default, but you should be able to toggle it soon.

You still have to train it on the different patterns you see. For example, with the work I have done so far, "Filter sales by marketing" will pick up both marketing and sales. You would have to add an annotation to show which is the important term in that sentence.

Finally, proper intelligence on your entities to augment your intents.

… Edit …
So someone asked: what about "IT" as a department? That works too.

samplerun

Visualising Intents

I’ve always used Pandas for getting an overview of intents, but when you are dealing at the enterprise level (> 300 intents), it can be a case of not being able to see the wood for the trees.

Recently I saw a nice mind map visualising intent structures (shout out to Rahul! 🙂). It was a manual process with a lot of work put into it.

So I looked to see if we could automate this. XMind to the rescue! There is a Python library that allows you to create mind maps through code.

First I start by setting up. You can get the ctx and workspace details from your assistant.

import xmind
from xmind.core import workbook, saver
from xmind.core.markerref import MarkerId
from xmind.core.topic import TopicElement
from watson_developer_cloud import ConversationV1
from urllib.parse import urlparse, parse_qs
import pandas as pd
import os

ctx = {
    "url": "https://gateway-fra.watsonplatform.net/assistant/api",
    "username": "USERNAME",
    "password": "PASSWORD"
}

version = '2018-07-10'
workspace = 'WORKSPACE'

xmind_file = 'intents.xmind'

The XMind library will create a file if it doesn’t exist. But if the file already exists, then it adds to it. So we need to delete it before we continue.

if os.path.exists(xmind_file): os.remove(xmind_file)

This next piece of code allows you to capture all the intents directly from the workspace. In a large scale workspace, you will generally have pages of intents, so this handles that.

wa = ConversationV1( username=ctx.get('username'), password=ctx.get('password'), version=version, url=ctx.get('url'))

j = []
x = { 'pagination': 'DUMMY' }
cursor = None
while 'pagination' in x:
    x = wa.list_intents(workspace_id=workspace, export=True,cursor=cursor)
    j.append(x['intents'])
    if 'pagination' in x and 'next_cursor' in x['pagination']:
        cursor = x['pagination']['next_cursor']
    else:
        x = {}

recs = []
for i in j: 
    for k in i: 
        record = { 
            'intent': k['intent'],
            'total': len(k['examples'])
        }
        recs.append(record)

df = pd.DataFrame(recs,columns=['intent','total'])
df = df.sort_values(by=['intent'])

This last piece of code takes the dataframe of intent names and example counts, then turns it into a mind map. Each node will display the intent name and how many examples it contains. Intents with more than 20 examples get a green star, while those with fewer than 10 get a red star.

I am also using the first word before the underscore as the category.

x = xmind.load(xmind_file)

sheet = x.getPrimarySheet()
sheet.setTitle('Intents Summary')

root = sheet.getRootTopic()
root.setTitle('Intents')

current_id = None
for index, row in df.iterrows():
    id = row['intent'].split('_')[0]
    intent = '{} ({})'.format(row['intent'].replace('{}_'.format(id),''),row['total'])

    if id != current_id:
        topic = root.addSubTopic()
        current_id = id
        topic.setTitle(id)

    item = topic.addSubTopic()
    item.setTitle(intent)

    if row['total'] > 20:
        item.addMarker(MarkerId.starGreen)
    elif row['total'] < 10:
        item.addMarker(MarkerId.starRed)

xmind.save(x, xmind_file)
print('All done!')

Using the catalog intents as an example (and intentionally modifying/removing some) you end up with something like this:

Screen Shot 2018-07-22 at 22.51.46

You can build a more complex one with the examples as well, but when you are dealing with thousands of questions, it gets a little unwieldy.
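If you do want the example questions on the map, one option is to keep them in the dataframe and hang them off each intent node. A minimal sketch, assuming you also stored an examples column (a list of example texts per intent) when building recs; this is not in the code above. Inside the loop, right after item.setTitle(intent), you could add:

# Hypothetical: assumes each record also kept the example texts, e.g.
# record['examples'] = [e['text'] for e in k['examples']] when building recs.
for example_text in row['examples'][:5]:   # cap the count, or large intents get unwieldy
    example_node = item.addSubTopic()
    example_node.setTitle(example_text)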

What is your name, revisited.

As I mentioned in my previous post, Watson Assistant has a system entity called @sys-person, which allows you to capture a person's name. One issue with this is that it is not available for every language.

In the original post I mention using entity extraction. You can still do this, but the cloud functions feature makes this so much easier.

The instructions for doing this are very well documented, so I intentionally skip over bits. Please use this as a reference.

First I created a Cloud Functions action with the following code:

import sys
from watson_developer_cloud import NaturalLanguageUnderstandingV1
from watson_developer_cloud.natural_language_understanding_v1 import Features, EntitiesOptions, KeywordsOptions

nlu = NaturalLanguageUnderstandingV1(
    version='2017-02-27',
    url='https://gateway-fra.watsonplatform.net/natural-language-understanding/api',
    username='USERNAME',
    password='PASSWORD')


def main(dict):
    rsp = nlu.analyze(text=dict['input'], features=Features(entities=EntitiesOptions()))

    username = ''
    company = ''

    for entity in rsp['entities']:
        if entity['type'] == 'Person':
            username = entity['text']
        elif entity['type'] == 'Company':
            company = entity['text']

    response = { 
        'name': username, 
        'company': company 
    }

    return response

On the parameters page I set up two parameters, "input" and "language". The language parameter allows you to handle languages where @sys-person may not exist.
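The code above doesn't actually use the language parameter yet. One way to wire it in is to pass it straight through to NLU's analyze call, which accepts a language argument (a sketch; check the SDK docs for your version):

# Hypothetical: pass the action's "language" parameter through to NLU,
# defaulting to English if it is missing.
rsp = nlu.analyze(
    text=dict['input'],
    language=dict.get('language', 'en'),
    features=Features(entities=EntitiesOptions()))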

On the endpoint page, you need to copy the API key and split it into name:password as per the instructions link. Keep a note of it.

Now in Watson Assistant, create a node that triggers and give it the following JSON code. Replace the username/password with the ones from the Cloud Function. Alternatively, use the proper credentials formatting.

{
    "context": {
        "mycreds": {
            "user": "USERNAME",
            "password": "PASSWORD"
        },
        "nlu_response": ""
    },
    "output": {
        "text": {
            "values": [],
            "selection_policy": "sequential"
        }
    },
    "actions": [
        {
            "name": "simon_test_area/nlu_lookup",
            "type": "server",
            "parameters": {
                "input": "<? input.text ?>",
                "language": "en"
            },
            "credentials": "$mycreds",
            "result_variable": "$nlu_response"
        }
    ]
}

This will execute the cloud function and return the name and company (if they exist). Have this node skip to a child node which will execute the response. For my sample I have:

Name: $nlu_response.name<br>Company: $nlu_response.company

This is what you get back.

Screen Shot 2018-07-22 at 22.25.17

Very simple and very powerful. Combine this with Watson Knowledge Studio and you can build intelligence for your domain.

Six months later…

I had planned to post updates frequently, but life, and more importantly work, got in the way. With the new role, a lot of my focus is on other aspects of AI-related technologies.

Of course, while things are different for me, the chat bot world continues on. Watson Conversation has become Watson Assistant, with a huge number of updates and changes.

My blog continues to be a source for numerous people starting with Watson Assistant. But Watson development has made changes that make most of it redundant.

So until my next update (soon, I promise) let me give a brief update to every blog entry and give Watson Development the credit they deserve.

Note: I mention Beta a lot. In your workspace UI, you are now able to request access to the Watson Assistant Beta and try out all the new features that are coming. I also didn't reference every blog entry; if it's not in the list, it's probably still the same, or not important enough to mention.

Testing your Intents

K-Fold and blind testing are still very much a part of ensuring you have trained the system well. But as those who are currently playing with the Beta will know, there are features coming that reduce this need, or even make it redundant (the jury is still out on this; I'm favoring the latter).

There are a number of Watson Assistant K-Fold and testing apps up on GitHub, if you don't want to try to decipher my example.

Pushing my Buttons

While I can see a need for buttons in some cases, I still believe they destroy the conversational aspect of a chat bot. Overuse turns your chat bot into an app, and damages training. I am also against having them in a conversational text response.

Chihuahua or Muffin, revisited.

Huge updates have been made to Watson Visual Recognition (VisRec). The main one being that it is now integrated into Watson Studio. If you haven't tried Watson Studio yet, go now! 🙂 It is a Data Science / Machine Learning / Deep Learning development platform.

Watson VisRec still continues to amaze people with how quickly and accurately it can classify custom content. We also have Watson Media, which can annotate live video. If RAW POWER is needed, there is PowerAI Vision, which allows for real-time classification on video. We are talking "Person of Interest" level classification. 🙂

I have no confidence in Entities.

I have nothing but love for Entities now! Gone are the "keyword" type entities. Now they are built using an ML NLP model. Not only that, it also helps to dramatically improve the training of Intents.

There are now Pattern Entities as well. These allow for complex regular expressions.

The design pattern of using entities to lower the confidence of an intent is still valid.

Manufacturing Intent

Creating manufactured questions is still very much something you want to avoid.

That said, you may have seen Project Debater. While this is personal opinion (and no guarantee it will ever be a feature), I can see this technology augmenting the intent and answer creation of Watson Assistant.

Anaphora? I hardly knew her.

This is still a very good and well-used pattern. I'd love it if Watson Development wrapped it into Watson Assistant, but for now it is an easy way to add intelligence to your conversation.

Removing the confusion in Intents.

While this can help still in training, the current beta has features which will likely make this redundant.

Watson Conversation just got turned up to 11.

This should be "Watson Assistant" 😉

Slots have continued to improve. You can now create more complex slot responses and handlers, allowing you to jump to nodes within the slot itself, rather than creating a conditional tree.

Digressions are also a new feature, which allows you to dictate when the user can go off script, and pull them back.

I love Pandas

While I still love using Pandas, Watson Assistant logging analytics has improved dramatically. Again there are features coming in the Beta which make this even better for training your enterprise level chat bots.

For those in Premium IBM Cloud, there is also a feature that recommends topics you have never seen before, and helps collate questions for intent training of new topics.

I have a Dream …

Since writing that blog post, Watson Speech to Text now uses deep learning to understand human speech. There is also an "Acoustic models" feature, which allows you to train on accents in relation to your domain language.

Compound Questions

This is still a good pattern for compound questions. The current beta features may make this redundant in certain designs.

Improving your Intents with Entities.

With the huge improvements to Entities, I would consider this an anti-pattern. So ignore.

Conversing in three dimensions.

This is still a valid pattern. In fact I’ve seen it used in a number of very successful implementations. Again though, playing with the beta will show this may become redundant from a coding perspective.

Data Science Experience

It’s now “Watson Studio”! There are so many new features in this. The most important being the support of Deep Learning.

Since I wrote that blog, you have the following new features (there may be more).

  • SPSS Modeler. Similar to the desktop version, but it lets you use the raw power of IBM Cloud. You can still import/export your streams between it and the desktop version.
  • Data Refinery. There is a quote that 60% of all data science work is cleaning up data. This helps with exactly that.
  • Watson Machine Learning. You give it your data and tell it what you want to use to determine the outcome. It will then run through multiple machine learning models to find the one that works best for your data, something that would normally require an ML expert even for mundane stuff. It even creates the API and a test UI for you.
  • Experiments. Allows you to run numerous models and tweak them to find the best one for your data.
  • Neural Network Modeler. Allows you to build your TensorFlow / PyTorch / Keras / Caffe models in a simple GUI. It will then write the code, which you can export to your applications. Here is a good article on it.

Much more than this as well. Try it out, it's free. 🙂

Building a Conversation interface in minutes.

Watson Assistant now has the ability to create a number of conversation interfaces through the workbench (little to no coding in some instances). For example, Facebook and Slack.

Understanding how a conversation flows

This is redundant. Actually I can argue that most of what I wrote in 2016 is redundant now.

Conversation now has Folders, which allow you to check a branch of nodes but then continue the flow of the tree, instead of falling back to root.

Nodes now have a "Skip Input" option, which means you don't have to put a Jump in to get into the branch (which is prone to breaking if you have to add more nodes).

Digressions allow you to jump around looking for the answer and return to where you left off.

… So there you have it. Hopefully everyone is up to date. 🙂 See ya soon.

Testing your intents

So this really only helps if you are doing a large number of intents, and you have not used entities as your primary method of determining intent.

First let's talk about perceived accuracy, and what this is trying to solve. Perceived accuracy is where someone types in a few questions they know the answer to, and then, depending on that manual test, they perceive the system to be working or failing.

It gives the person training the system a false sense of how it is performing.

If you have done the Watson Academy training for Conversation, you will hear it mention K-fold testing. For this blog post, I'm going to skip the details, as I briefly mentioned them before.

K-fold cross validation: You split your training set into K random segments, use one segment to test and the rest to train, and then work your way through all of them. This method will test everything, but it is extremely time-consuming. You also need to pick a good size for K so that you can test correctly.

K-Fold works well by itself if you have a large training set that has come from real-world representative users. You will find this rarely happens, so you should use it in conjunction with a blind set.

Previously I didn’t cover how you actually do the test. So with that, here is the notebook giving a demonstration:
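The notebook goes into more detail, but the core of the K-fold split is only a few lines. A minimal sketch using scikit-learn, with a hypothetical run_blind_test helper standing in for the Watson Assistant calls (this is an illustration, not the notebook itself):

from sklearn.model_selection import KFold
import pandas as pd

# df has two columns: 'text' (the example question) and 'intent' (its label).
df = pd.read_csv('training_data.csv')

kf = KFold(n_splits=5, shuffle=True, random_state=1)
for fold, (train_idx, test_idx) in enumerate(kf.split(df)):
    train, test = df.iloc[train_idx], df.iloc[test_idx]
    # Hypothetical helper: push 'train' into a temporary workspace, send each
    # question in 'test' to the message API, and compare the returned intent
    # with the expected label.
    # accuracy = run_blind_test(workspace_id, test)
    print('Fold {}: train={} test={}'.format(fold, len(train), len(test)))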

 

 

Pushing My Buttons

So this is a long-time pet peeve, but recently I have seen a load of these in succession. I am sure a lot of people who know me are going to read this and think "He's talking about me". Truth is, there is no one person I am pointing my finger at.

Let me start with what triggered this post. Have a look at this screenshot. There are three things wrong with it; one of them is not visible, but you can probably guess it.

poorUXdesign

So, disclosure: this is a competitor's chat bot, and it is a common pattern I have seen on that chat bot. But I have also seen people do this with Watson Conversation.

Did you guess the issues?

Issue 1: Never ask the end user whether you answered them correctly. If your system is well trained and tested, then you are going to know whether it answered well or not.

For those thinking of a rebuttal to this: imagine you rang a customer support person and they asked you "Did I answer you correctly?" every time they gave an answer. What would you do? More than likely you would ask to speak to someone who does know what they are talking about.

If you really need to get feedback, make it subtle, or ask for a survey at the end.

Issue 2: BUTTONS. I don't know who started this button trend, but it has to die. You are not building a cognitive conversational system; you are building an application. You don't need an AI for buttons; any average developer can build you a button-based "Choose your own adventure".

Issue 3: Not visible in the image is that you are stuck until you click on yes or no. You couldn't type yes or no, or "I am not sure". I have also seen cases where the answer is poorly written and the person would take the wrong answer as right, so what happens then? For that matter, selecting yes or no does nothing to progress the conversation.

So what is the root cause of all this? From what I have seen, it is normally one thing.

Developers.

Because older chat bots required a developer to build them, things have sort of progressed along those lines for some time. In fact, some chat bot companies tout the fact that their platform is developer-oriented, and in some cases only offer code-based systems.

I've also gotten to listen to some developers tell me how Watson Conversation sucks (because "tensor flow"), or how they could write something better. I normally tell them to try.

Realistically, to make a good chat bot, the developer is generally far down the food chain in that creation. Watson Conversation is targeted at non-technical people.

Here's a little graphic to help.

beep beep robot

Now, your chances of getting all of these are slim, but the people you do get should have some skills in these areas. Let's expand on each one.

Business Analyst

By far the most important, certainly at the start of the project.

Most failed chat bot projects are because someone who knows the business hasn’t objectively looked at what it is you are trying to solve, and if it is even worth the time.

By the same token, I have seen two business analysts create a conversational bot that on the face of it looked simple, but they could show that it saved over a million euros a year. All built in a day and a half. Because they knew the business and where to get the data.

Conversational Copywriter

Normally even getting a copywriter makes a huge difference, but one with actual conversational experience makes the solution shine. It’s the difference between something clinical, and something your end user can make an emotional attachment to.

Behavioural Scientist

Another thing I see all the time. You get an issue in the chat conversation that requires some complexity to solve. So you have your developer telling you how they can build something custom and complex to solve the issue (probably includes tensor flow somewhere in all of it).

Your behavioural expert on the other hand will suggest changing the message you tell the end user. It’s really that simple, but often missed by people without experience in this area.

Subject Matter Expert (SME)

To be fair, at least on projects I've seen, there is normally an SME there. But there are still different levels of SMEs. For example, your expert in the material may not be the expert who deals with the customer.

But it is dangerous to think that just because you have a manual you can reference, you are capable of building a system that can answer questions as if it were an SME.

Data Scientist

While you might not need a full-blown one, all good conversational solutions are data-driven: in what people ask, the behaviours exhibited, and the needs met. Having someone able to sift through the existing data and make sense of it helps make a good system.

Also, on almost every engagement I've been on, people will tell you what they think the end user will say or do. Often it is not the case, and the data shows this.

UI/UX

What the conversational copywriter does for the engaging conversation, the UI/UX does for the system. If you are using existing channels like Facebook, Skype, Messenger, Slack, etc.. then you probably don’t need to worry as much. But it’s still possible to create something that can upset the user without good UX experience.

It’s also a broad skill area. For example, UX for Web is very different to Mobile, IVR, and Robots.

Machine Learning

Watson Conversation abstracts the ML layer away from you. You only need to know how to cluster questions correctly. But knowing how to do K-fold cross validation, or the importance of blind sets, helps in training the system well.

It also helps if your developers have at least a basic understanding of machine learning.

I often see non-ML developers trying to fix clusters with comments like "It used this keyword 3 times, so that's why it picked this over that", which is not how it works at all.

It also stops your developers (if they code the bot) from creating something that is entity-heavy. Non-ML developers seem to like entities, as they can wrap their heads around them. Fixed keywords and regexes all make sense to a developer, but in the long run they make the system unmaintainable (basically defeating the purpose of using Watson Conversation).

Natural Language Processing (NLP)

I've made this the smallest. There was a time, certainly with the early versions of Watson, when you needed these skills. Not so much anymore. Still, it's good to understand the basics of NLP, certainly for entities.

Developer

In the scheme of things, there will always be a place for the developer.

You have UI development, application layer, back-end and integration, automation, testing, and so on.

Just development skills alone will not help you in building something that the end user can feel a connection to.

… and please, stop using buttons.

Chihuahua or Muffin, revisited.

I just finished reading Maria Yao‘s article Chihuahua OR Muffin? Searching For The Best Computer Vision API. It’s a fun read, but I felt it didn’t really show off the power of Watson Visual Recognition.

For the demo in the article, the general classifier was being used.

One of the main advantages of Watson Visual Recognition is that you can create your own custom classifiers. It is very simple too.

First, you need data.

Using Maria's article, I pulled the Chihuahua and Muffin pictures from ImageNet.

Like most data, it tends to need a bit of cleaning. So I deleted any images below 14KB in size. The reason for this was the majority at that size were just corrupted. I also went through and deleted any images which were adverts or “this image is no longer there” banners.

Overall that was about 500 images deleted, which still left 3,000 images to play with.
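If you want to automate that kind of clean-up, a small script along these lines works (a sketch; the folder name is illustrative and the 14KB threshold is the value mentioned above):

import os

IMAGE_DIR = 'chihuahua'          # folder of downloaded ImageNet images (hypothetical name)
MIN_SIZE = 14 * 1024             # anything smaller was usually corrupted

for name in os.listdir(IMAGE_DIR):
    path = os.path.join(IMAGE_DIR, name)
    if os.path.isfile(path) and os.path.getsize(path) < MIN_SIZE:
        os.remove(path)          # drop the suspect image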

Next I created a Visual Recognition service. For this I used the free plan, which limited me to 250 events a day, so I had to lower my training sets to 100 pictures from each set.

I took a random 100 from each. I didn’t examine the photos at all, but here are a few to give you an idea of how the images look.

example_training

As you can see, no thought put into worrying about other items in the picture.

I zipped the images up, and created a classifier like so.

classifier

Then I clicked create, and waited a little over 10 minutes for it to analyse the pictures.
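I did this through the tooling, but you can also do it from the Python SDK. A rough sketch, assuming the watson_developer_cloud library used elsewhere in this post (the credentials, zip file names and keyword arguments are illustrative, and the API changed in later SDK releases, so check the current docs):

from watson_developer_cloud import VisualRecognitionV3

# Hypothetical credentials for illustration only.
visual_recognition = VisualRecognitionV3('2016-05-20', api_key='API_KEY')

with open('chihuahua.zip', 'rb') as chihuahua, open('muffin.zip', 'rb') as muffin:
    classifier = visual_recognition.create_classifier(
        'chihuahuaVmuffin',
        chihuahua_positive_examples=chihuahua,
        muffin_positive_examples=muffin)

print(classifier)   # includes the classifier_id and training status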

Once the classifier was finished training, I was ready to test. As some may not be aware, Visual Recognition also offers a food classifier. So for my first two tests I tried my classifier, General and Food.

test1

So you can see the red bar on the classifier I made. This is mostly because I only gave it 100 examples. As you give more training examples, its confidence increases. But you can see that the difference between Muffin and Chihuahua is clear.

You can also see the food classifier got it as well.

What about the Chihuahua?

test2

As you can see, all three do quite well on classification. But what about the original pictures which look similar? I ran those through and ended up with this.

results2

As you can see, it got them all right! None of these were used in training.

As demos go, though, this is simple and fun. But with well classified images, it can be scarily accurate for proper real-world use cases.

Having said that, I did have one failure. Testing the samples on Maria’s page, it was able to understand the cookie monster muffin and the man holding the chihuahua.

But it could not get the muffin in the plastic bag next to the Chihuahua. I tried cropping out the dog, but it still failed, although with a lower confidence. I suspect this is a combination of limited training and a bad quality photo.

Screen Shot 2017-09-28 at 17.40.17

I have no confidence in Entities.

I have something I need to confess. I have a personal hatred of Entities. At least in their current form.

There is a difference between deterministic and probabilistic programming that a lot of developers new to Watson find hard to adjust to. Entities bring them back to that warm place of normal development.

For example, you are tasked with creating a learning system for selling Cats, Dogs, and Fishes. Collecting questions, you get this:

  • I want to get a kitten
  • I want to buy a cat
  • Can I get a calico cat?
  • I want to get a siamese cat
  • Please may I have a kitty?
  • My wife loves kittens. I want to get her one as a present.
  • I want to buy a dog
  • Can I get a puppy?
  • I would like to purchase a puppy
  • Please may I have a dog?
  • Sell me a puppy
  • I would love to get a hound for my wife.
  • I want to buy a fish
  • Can I get a fish?
  • I want to purchase some fishes
  • I love fishies
  • I want a goldfish

The first instinct is to create a single intent of #PURCHASE_ANIMAL and then create entities for the cats, dogs and fishes. It's easier to wrap your head around entities than it is to wonder how Watson will respond.

So you end up with something like this:

20170910entities

Wow! So easy! Let's set up our dialog. To make it easier, let's use a slot.

20170910entitiesA

In under a minute, I have created a system that can help someone pick an animal to buy. You even test it and it works perfectly.

IT’S A TRAP!

First, the biggest red flag with this is that you have now turned your conversation into a deterministic system.

Still doing cross validation to test your intents? Give up, it’s pointless.

You can break it by just typing something like "I want to buy a bulldog". You are stuck in an endless loop.

The easiest solution is to tell the person what to type, or link/button it. But it doesn't exhibit intelligence (and I hate buttons more than I hate entities 🙂).

The other option is to add “bulldog” to the @Animals:Dog entity. But when you go down that rabbit hole, you could realistically add the following.

  • 500+ types of breeds.
  • Common misspellings of those words.
  • plurals of each breed.
  • slang, variations and nicknames of those animals.

You are easily into the thousands of keywords to match, and all it takes is one person to make a typo you don’t have in the list and it still won’t work.

Using entities in a probabilistic way.

All is not lost! You can still use entities, and keep your system intelligent. First we break up the intents into the types of animals like so:

20170910entitiesB

So now if I type “I want to buy a bulldog” I get #PurchaseDog with 68% confidence. Which is great, as I didn’t even train it on that word.

So next I try “I want to buy a pet” and I get #PurchaseCat with 55% confidence.

20170910entitiesCat

Hmm, great for cat lovers, but we want the conversation to be not so sure about this.

So we create the entities as before for Cat, Dog, Fish. You can use the same values.

Next before you check intents, add a node with the following condition.

20170910entitiesC

This basically ensures that irrelevant hasn't been triggered, and then checks that none of the animal entities have been mentioned.

Then in your JSON response you add the following code.

{
    "context": {
        "adjust_confidence": "<? intents[0].confidence = intents[0].confidence - 0.36 ?>"
    },
    "output": {
        "text": {
            "values": [],
            "selection_policy": "sequential"
        }
    }
}

The important part is the "adjust_confidence" context variable. This will lower the first intent's confidence by 0.36 (36%).

We set the node to jump to the next node in line, so it can check the intents.

Now we get "I don't understand" for the pet question. Bulldog still works, as it doesn't fall below 20%.

Demo Details.

I used 36% for the demo, but this will vary in other projects. Also, if your confidence level is too high, you can pick a smaller value and then have another check for a lower bound. In other words, set your conversation to ignore any intent with a confidence lower than 30%, and then set your adjustment confidence to -10%.
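If you wanted to do the same bookkeeping at the application layer instead of inside the dialog, it would look roughly like this (a purely illustrative sketch using the thresholds above; the function and field names are my own):

# Purely illustrative: lower bound plus adjustment, applied after each message call.
LOWER_BOUND = 0.30     # ignore any intent below this confidence
ADJUSTMENT = 0.10      # penalty applied when no animal entity was mentioned

def effective_intent(response, entity_found):
    if not response['intents']:
        return None
    top = response['intents'][0]
    confidence = top['confidence'] - (0 if entity_found else ADJUSTMENT)
    return top['intent'] if confidence >= LOWER_BOUND else None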

Advantages

Using this approach, you don't need to worry as much about training your entities, only your intents. This allows you to build a probabilistic model which isn't impacted unless it is unsure to begin with.

I have supplied a Sample Conversation which demos the above.

Manufacturing Intent

Let me start this article with a warning: Manufacturing questions causes more problems than it solves.

Sure, the documentation and many videos say the reverse, but they tend to give examples that have a narrow scope.

Take the car demo for example. It works because there is a common domain language that everyone who uses a car knows. Someone who has never seen a car before won't understand what a "window wiper" is, but they may say something like "I can't see out the window, because it is raining".

This is why when building your conversation system, it is important to get questions from people who actually use the system, but don’t know the content. They tend not to know how to ask a question to get the answer.

But there are times when it can’t be avoided. For example, you might be creating a system that has no end users yet. In this case, manufacturing questions can help bootstrap the system.

There are some things to be aware of.

Manual creation.

This is actually very hard, even for the experienced. Here are the things you need to be aware of.

Your education and culture will shape what you write.

You can’t avoid it. Even if you are aware of this, you will fall back into the same patterns as you progress through creating questions. It’s not easy to see until you have a large sample. Sorting can give you a quick glance, while a bag of words makes it more evident.

If you know the content, you will write what you know.

Again, having knowledge of the system's answers will have you writing domain language in the questions. You will use terms that define the system, rather than describing what it does.

If you don’t know the content, use user stories.

If you manage to get someone who could be a representative user, be careful in how you ask them to write questions. If they don’t fully understand what you ask, they will use terms as keywords, rather than their underlying meaning.

Let’s compare two user stories:

  • “Ask questions about using the window wipers.”
  • “It is raining outside while you are driving, and it is getting harder to see. How might you ask the car to help you?”

With the first example, you will find that people will use “window wipers”, “wipers”, “window” frequently. Most of the questions will be about switching it on/off.

With the second example, you may end up seeing questions like this.

  • Switch on the windshield wipers.
  • Activate the windscreen wipers.
  • Is my rain sensor activated?
  • Please clear the windows.

Your final application will shape the questions as well.

If you have your question creation team working on a desktop machine, they are going to create questions which won’t be the same as someone typing on mobile, or talking to a robot.

The internet can be your friend.

Looking for similar questions online in forums can help you in seeing terms that people may use. For example, all these mean the same thing: “NCT”, “MOT”, “Smog Test”, “RWC”, “WoF”, “COF”.

But those are meaningless to people if they are in different countries.

Automated Creation

A lot of what I have seen in automation tends not to fare much better. If it did, we wouldn't need people to write questions. 🙂

One technique is to try to create a few questions from existing questions. Again, I should stress that this is generally a bad idea, and this example doesn't work well, but it might give you something to build on.

Take this example.

  • Can my child purchase a puppy?
  • Are children allowed to buy dogs?

From a manual review we can see the intent is about allowing minors to buy. Now over to the code.

For this I am using spaCy, which can check the similarity of one word against another. For example:

import spacy
nlp = spacy.load('en')

dog = nlp(u'dog')
puppy = nlp(u'puppy')
print( dog.similarity(puppy))

Will output: 0.760806754875

The higher the number, the closer the words are to each other. By setting a threshold on the value, you can reduce the question to the important words. Setting a threshold of 0.7, we get:

Screen Shot 2017-09-09 at 18.31.18

Playing with larger questions, you will find that certain parts of speech (POS) are mostly noise, so you can drop the following to remove it (see the sketch after this list).

  • DET = determiner
  • PART = particle
  • CCONJ = coordinating conjunction
  • ADP = adposition
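Putting the two ideas together, a rough sketch of the pairwise comparison might look like this (my reconstruction for illustration, not the original notebook; the 0.7 threshold matches the one above):

import spacy

nlp = spacy.load('en')
NOISE_POS = {'DET', 'PART', 'CCONJ', 'ADP'}

q1 = nlp(u'Can my child purchase a puppy?')
q2 = nlp(u'Are children allowed to buy dogs?')

# Keep only the content-bearing tokens of each question.
t1 = [t for t in q1 if t.pos_ not in NOISE_POS and not t.is_punct]
t2 = [t for t in q2 if t.pos_ not in NOISE_POS and not t.is_punct]

# Pair up words across the two questions that are similar enough.
for a in t1:
    for b in t2:
        if a.similarity(b) >= 0.7:
            print(a.text, '<->', b.text, round(a.similarity(b), 2))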

Now that you have reduced it to the main terms, you can build a synonym list off of these, like so:

dog = nlp('dog')
word = nlp.vocab[dog.text]
sym = sorted(word.vocab, 
             key=lambda w: word.similarity(w), 
             reverse=True
)
print('\n'.join([w.orth_ for w in sym[:10]]))

 

Which will print out the following:

  • dog
  • DOG
  • Dog
  • dogs
  • DOGS
  • Dogs
  • puppy
  • Puppy
  • PUPPY
  • pet

As you can see, there is a lot of repetition, so you can remove duplicates. Also be careful to set an upper bound when reading from the sym object.
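Deduplication can be as simple as lower-casing and keeping the first occurrence. A small sketch, continuing from the sym list above (the upper bound of 50 is an arbitrary choice):

# Keep each synonym once, ignoring case, and only look at the top of the list.
seen = set()
synonyms = []
for w in sym[:50]:              # upper bound on how far down the vocab we read
    term = w.orth_.lower()
    if term not in seen:
        seen.add(term)
        synonyms.append(term)

print(synonyms)                 # e.g. ['dog', 'dogs', 'puppy', 'pet', ...]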

So after you generate a group of sample synonyms, you end up with something like this.

Screen Shot 2017-09-09 at 18.50.07

Now it’s a simple matter of just generating a random set of questions from this table. You end up with something like:

  • need children purchasing dogs
  • can kids buying puppy
  • may child purchases dog
  • will children purchase dogs
  • will kids buy pets
  • make children cheap puppies
  • need children purchase puppies

As you can see, they are pretty bad. Not because of the word salad, but that you have a very narrow scope of what can be answered. But it can give you enough to have your intent trigger.
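For reference, the generation step itself is just random picks from each column of the synonym table above (a sketch; the lists here are illustrative, based on the examples shown):

import random

# Illustrative synonym columns, roughly matching the table above.
column_1 = ['can', 'may', 'will', 'need', 'make']
column_2 = ['child', 'children', 'kids']
column_3 = ['buy', 'purchase', 'purchasing', 'buying', 'cheap']
column_4 = ['dog', 'dogs', 'puppy', 'puppies', 'pets']

for _ in range(5):
    question = ' '.join(random.choice(c) for c in (column_1, column_2, column_3, column_4))
    print(question)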

You can also mitigate this by creating a tensor of a number of similar questions, n-grams instead of single words, a custom domain dictionary and increasing your dictionary terms.

At the end of the day though, they are still going to be manufactured.

Anaphora? I hardly knew her.

One of the common requests for conversation is being able to understand the running topic of a conversation.

For example:

USER: Can I feed my goldfish peas?

WATSON: Goldfish love peas, but make sure to remove the shells!

USER: Should I boil them first?

The "them" in the second question is an example of "anaphora": it refers back to the peas. So you can't answer the question without first knowing the previous one.

On the face of it, it looks easy. But you have "goldfish", "peas" and "shells" which could potentially be the reference, and no one wants to boil their goldfish!

So the tricky part is determining the topic. There are a number of ways to approach this.

Entities

The most obvious way is to determine what entity the person mentioned, and store that for use later. This works well if the user actually mentions an entity to work with. However, in a general conversation, the subject may not always be introduced by the person who asks the question.

Intents

When asking a question and determining the intent, it may not always be the case that an entity is involved. So this is of limited help in this regard.

That said, there are certain cases where intents have been used with a context in mind. This can easily be done by adding a suffix to the intent name. For example:

    #FEEDING_FISH_e_Peas

In this case we believe that peas is a common entity that has a relationship to the intent of Feeding Fish. For coding convention we use “_e_” to denote that the following piece of the intent name is an entity identifier.

At the application layer, you can do a regex on the intent name “_e_(.*?)$” for the group 1 result. If it is not blank, store it in a context variable.
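A minimal sketch of that application-layer check (the context variable name is just for illustration):

import re

def extract_intent_entity(intent_name, context):
    # Pull out whatever follows "_e_" in the intent name, e.g. "Peas"
    # from "FEEDING_FISH_e_Peas", and remember it as the running topic.
    match = re.search(r'_e_(.*?)$', intent_name)
    if match and match.group(1):
        context['topic_entity'] = match.group(1)
    return context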

 

Regular Expressions

Like before, you can use regular expressions to capture a pattern early on and store it for use at a later point.

One way to approach this is to have a gateway node that activates before working through the intent tree. Something like this:

example1507

The downside to this is that there is a level of complexity in maintaining a complex regular expression.

You can at least make maintenance a little easier by setting the primary condition check to "true" and then putting the individual checks in the node itself.

example1507a

Answer Units

An answer unit is the text response you give back to the end user. Once you have responded with an answer, you have created a lot of context within that answer that the user may follow up on. For example:

example1507b

Even with the context markers in the answer, the end user may never pick up on them. So it is very important to craft your answer to drive the user towards the context you have selected.

NLU

The last option is to pass the questions through NLU. This should be able to give you the key terms and phrases to store as context, as well as create knowledge graph information.

I have the context. Now what?

When the user gives a question that does not have context, you will normally get back low confidence intents, or an irrelevant response.

If you are using intent-based context, you can check the returned intents for a similar context to what you have stored. This also allows you to discard unrelated intents. The results from this are not always stellar, but it is a cheaper one-time call.

The other option you can take is to preload the question that was asked and send it back. For example:

    PEAS !! Can I boil them first?

You can use the !! as a marker that your question is trying to determine context. Handy if you need to review the logs later.

As time passes…

So as the conversation presses on, what the person is talking about can move away from the original context, but it may still remain the dominant one. One solution is to build a weighted context list.

For example:

"entity_list" : "peas, food, fish"

In this case we maintain the last three contexts found. As a new context is found, it uses LIFO to maintain the list. Of course this means more API calls, which can cost money.
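A sketch of that bookkeeping at the application layer (the three-item limit matches the example above; the function name is my own):

MAX_CONTEXTS = 3

def update_entity_list(entity_list, new_entity):
    # entity_list is the comma-separated string stored in context,
    # e.g. "peas, food, fish". The newest entity goes to the front and
    # the oldest falls off once we pass the limit.
    entities = [e.strip() for e in entity_list.split(',')] if entity_list else []
    if new_entity in entities:
        entities.remove(new_entity)
    entities.insert(0, new_entity)
    return ', '.join(entities[:MAX_CONTEXTS])

# update_entity_list('peas, food, fish', 'bowl')  ->  'bowl, peas, food'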

Lowering calls on the tree.

Another option is to create a poor man's knowledge graph. Let's say the last two contexts were "bowl" and "peas". Rather than creating multiple context nodes, you can build a tree which can be passed back to the application layer.

"entity" : "peas->food->care->fish"
...
"entity" : "bowl->care->fish"

You can use something like Tinkerpop to create a knowledge graph (IBM Graph in Bluemix is based on this).
example1507c

Now when a low confidence question is found, you can use “bowl”, “peas” to disambiguate, or use “care” as the common entity to find the answer.

Talk… like.. a… millennial…

One more common form of anaphora that you have to deal with is how people talk on instant messaging systems. The question is often split across multiple lines.

example1507d

Normal conversation systems take one entry and give one response, so this just wrecks their AI head: not only do you need to know where the real question stops, but also where the next one starts.

One way to approach this is to capture the average timing between each entry from the user. You can do this by passing timestamps from the client to the backend. The backend can then build an average of how the user talks. This needs to be done at the application layer.
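A rough sketch of what that might look like at the application layer (the buffering rule here is my own illustration, not a prescribed approach):

import time

class MessageBuffer:
    """Collects rapid-fire messages and only releases them as one question
    once the user has paused longer than their own average gap."""

    def __init__(self):
        self.parts = []
        self.gaps = []
        self.last_time = None

    def add(self, text, timestamp=None):
        timestamp = timestamp or time.time()
        if self.last_time is not None:
            self.gaps.append(timestamp - self.last_time)
        self.last_time = timestamp
        self.parts.append(text)

    def flush_if_idle(self, now=None):
        now = now or time.time()
        if not self.parts:
            return None
        average_gap = sum(self.gaps) / len(self.gaps) if self.gaps else 2.0
        if now - self.last_time > average_gap:
            question = ' '.join(self.parts)
            self.parts, self.gaps, self.last_time = [], [], None
            return question            # send this to Watson as one utterance
        return None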

Sadly no sample workspace this time, but hopefully this gives you some insight into how context can be handled in a conversation system.