2 Months in.
simonodoherty    

So I have been two months in the new role. Most of that has been educational training (Littleton, Raleigh, as well as local). It appears to be a continual learning process, but I am slowly starting to work on customer-related stuff.

So what am I doing?

I work in Lab Services for Watson Engagement Advisor (WEA). WEA is a version of Watson that you teach to become a domain expert. You do this by feeding it unstructured data and then teaching it to understand the material. Once you teach it correctly, it’s almost scary how well it can understand what you are asking.

So my role is pretty much doing that for the customers, until the customers understand how to use the system and do it themselves.

It’s been trickier trying to explain what Watson is to family, friends, and co-workers. Seeing as the internet likes lists, I thought I’d put up the most common issues I come across when explaining it.

#1 Watson is not self aware.

It’s not HAL, Skynet nor the Machine. If you are going by sci-fi interpretations, it would be closer to a Virtual Intelligence from Mass Effect.

#2 Watson is many products.

Most people see Watson as a single entity, but it is in fact many products. They are all related to the cognitive sciences, and have different functions.

For example, WEA is more useful in areas where you have limited domain experts, or where becoming a domain expert is tricky (e.g. high turnover, constantly updating material).

In places where you may have many domain experts then something like Watson Discovery Advisor (WDA) is more useful. This system is less about getting an answer, and more about being interested in the evidence that brought you to that answer. It also allows you to give an answer to your question. WDA then researches what evidence exists to support your answer.

There are many more products and I couldn’t do them justice in a single blog post. In fact it took a full week in Raleigh just to get a high level overview of the main products in Watson.

#3 Watson is not a search engine.

This seems the hardest to grasp for many. Watson is no more a search engine than a human is. It uses a search engine like a human does.

When you are dealing with a search engine, you put in your terms and then get your results. You expect a 100% return as it is a simple* keyword match. *(simple in comparison to Watson)

Watson, on the other hand, will try to understand exactly what you are asking. Once it determines this, it searches for results and analyses them for what it believes is the correct answer. Once it has its set of possible answers, it will research them to determine whether what it believes is true actually is. Only then does it return the answer(s).
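To make the contrast concrete, here is a toy sketch (names and scoring entirely my own, not Watson’s actual pipeline): a keyword search returns everything that matches at all, while a cognitive-style pipeline scores candidate answers and returns them ranked by confidence.

```python
def keyword_search(query, documents):
    """Simple keyword match: return every document that shares
    at least one term with the query, all or nothing."""
    terms = set(query.lower().split())
    return [doc for doc in documents if terms & set(doc.lower().split())]

def confidence_ranked(query, documents):
    """Toy stand-in for a cognitive pipeline: score each candidate
    answer by how much of the question it covers, then return the
    candidates it has some belief in, ranked by confidence."""
    terms = set(query.lower().split())
    scored = []
    for doc in documents:
        overlap = len(terms & set(doc.lower().split()))
        confidence = overlap / len(terms) if terms else 0.0
        scored.append((doc, confidence))
    return sorted([s for s in scored if s[1] > 0], key=lambda s: -s[1])

docs = [
    "watson answers questions",
    "the cat sat",
    "watson is not a search engine",
]
keyword_search("what is watson", docs)     # every document mentioning a term
confidence_ranked("what is watson", docs)  # candidates ranked by confidence
```

The real system obviously does far more than term overlap (it analyses language, gathers evidence, and learns), but the shape is the point: one is a flat match, the other is a ranked set of beliefs with confidence attached.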

#4 You need to stop thinking it is like a computer.

[Image: “fly over the boat with the red bow”]

Following on from point three: because it is a learning cognitive system, what you expect from a computer is not what you would expect from Watson. Take the picture as an example. Depending on the knowledge of the person (or Watson), the answer you get back can change.

The same would apply if you had a person with a photographic memory. They may be able to respond verbatim (like a search engine). However, if you teach them the material, then over time their response to the same question will change as they learn how to apply the knowledge they have previously memorized.

This concept got me for a while (and in one training session, it was good to see I wasn’t alone). We were shown where Watson failed to understand a sentence (an ambiguous idiom). I asked if it was our job to correct Watson every time we saw variations of this, or if the developers needed to fix the code. The response was pretty much what I said above: Watson would pick up on this and learn how to correctly respond once you teach it. So it wasn’t a case of having to figure out every variation of odd sentences.

#5 It’s not your parents’ IBM.

Watson is also a division within IBM. At one point in time IBM had some very strict rules; by today’s standards, that past seems a bit too strict. Watson feels that way compared to the rest of IBM. It feels more like a start-up, and even some things IBMers take for granted are changed: for example, no PBCs.

 




---------------------
http://sodoherty.com/2014/07/03/2-months-in/
Jul 03, 2014
3 hits


