On Sun, Jan 10, 2016 at 10:56 PM, LAU <[email protected]> wrote:
> There are a few conference talks available with Jerome Pesenti, Vice
> President of Watson Core Technology, in which he discusses the
> techniques inside Watson.
>
> Jerome Pesenti said at a conference (at ParisTech, a French engineering
> school, date unknown, ~09/2015) that:
> - Watson did not use deep learning in the Jeopardy! version.
> - But the system evolves continuously; they are replacing many things in
> Watson with deep learning.
> He said that they are replacing code in the Jeopardy! version with deep
> learning because it is much more efficient in natural language
> processing, among other areas. With deep learning, there will soon be a
> version of Jeopardy! for languages other than English.
>

This is from
https://developer.ibm.com/watson/wp-content/uploads/sites/19/2013/11/The-Era-of-Cognitive-Systems-An-Inside-Look-at-IBM-Watson-and-How-it-Works1.pdf

Many natural language systems have attempted to emphasize precision
within the confines of specific well-formed rules. For example,
sentiment analysis often looks for a set of specific words and their
synonyms within a social media site. These systems then, without further
assessment of the context in which those words are being used, tally the
number of times those words are co-located with some brand in the same
phrase. For example, it takes the phrase, “… stopped by the IBM Donut
Store for a coffee this morning, it was great …” and then asserts that
the collocation of the brand name and the term “great” are an indication
of a positive sentiment. However, consider if the rest of the phrase is,
“…, it was great to hear that a new Fictional Coffee Shop is opening
soon, so I am not tempted to eat donuts every morning.” Then, the system
might miss that the sentiment is not about the IBM Donut Store. We call
this concept shallow natural language processing (NLP) because, although
it might be fairly precise within its more narrow focus, it is not very
accurate.
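The shallow tallying described above can be sketched in a few lines. This
is a minimal illustration, not anything from the IBM paper: the keyword
lists, the function name, and the crude tokenization are all assumptions.

```python
# A minimal sketch of "shallow" sentiment tallying: count how often
# positive/negative keywords co-occur with a brand in the same phrase,
# with no deeper assessment of context. Keyword lists are illustrative.
POSITIVE = {"great", "love", "awesome"}
NEGATIVE = {"bad", "awful", "hate"}

def shallow_sentiment(phrase: str, brand: str) -> int:
    """Return +1, -1, or 0 based only on keyword co-location with the brand."""
    words = phrase.lower().replace(",", " ").replace(".", " ").split()
    if brand.lower() not in " ".join(words):
        return 0  # brand not mentioned: no tally
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos > neg) - (neg > pos)

phrase = ("stopped by the IBM Donut Store for a coffee this morning, "
          "it was great to hear that a new Fictional Coffee Shop is "
          "opening soon, so I am not tempted to eat donuts every morning")
# Scores +1 (positive) even though the praise is not about the store.
print(shallow_sentiment(phrase, "donut store"))
```

Because the tally sees only co-location of "great" with the brand, it
mislabels exactly the kind of phrase the excerpt describes.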

However, it is also important to realize that shallow NLP actually has
an important role in many systems. If your intent is to create a
statistically relevant assessment of sentiment trends over huge
quantities of information, the lack of accuracy for each individual
example is likely not an issue. Assuming that there are approximately as
many false-positives as there are false-negatives over a sufficiently
large sample set, they cancel each other out. And if the pool of
canceled tallies remains relatively constant across sample sets over
time, the remaining uncanceled data yields statistically relevant
trending information. Thus, the additional processing costs that are
required for the additional accuracy for any instance might be
unwarranted.
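The cancellation argument is easy to check with a toy simulation. The
sketch below is my own illustration, not from the paper; the noise rate
and sample sizes are arbitrary assumptions. Symmetric label noise
attenuates the measured trend but preserves its direction.

```python
import random

# Sketch of the cancellation argument: if a shallow classifier flips each
# item's true sentiment with the same probability in both directions, the
# aggregate trend survives even though many individual labels are wrong.
random.seed(0)

def noisy_trend(true_positive_rate: float, n: int, flip_p: float = 0.3) -> float:
    """Fraction of items labeled positive after symmetric label noise."""
    positives = 0
    for _ in range(n):
        truth = random.random() < true_positive_rate
        if random.random() < flip_p:  # mislabel this instance
            truth = not truth
        positives += truth
    return positives / n

# True positive share rises 0.50 -> 0.60; the noisy estimates still rise
# (roughly 0.50 -> 0.54), because false positives and false negatives
# roughly cancel within each sample set.
before = noisy_trend(0.50, 100_000)
after = noisy_trend(0.60, 100_000)
print(before, after, after > before)
```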

However, when the individual instances matter, the systems that are
designed to be precise without focusing on high levels of accuracy tend
to be brittle. That is, they perform well within the narrow parameters
of their intended design, but they do not perform well when those
parameters change…

IBM Watson is a deep NLP system. It achieves accuracy by attempting to
assess as much context as possible. It gets that context both within the
passage of the question and from the knowledge base (called a corpus)
that is available to it for finding responses.


We are seeing a shift in construction techniques for natural language
processing when accuracy is needed.


When preparing for the quiz show, JEOPARDY!, Watson was asked the
following question (clue) from the category Lincoln Blogs:

“Treasury Secy. Chase just submitted this to me for the third time -
guess what pal, this time I'm accepting it.”

First, notice the abbreviation, “Secy.”, which had to be taken to mean
Secretary. Further notice that Secretary is not meant here to be someone
who takes dictation and manages an appointment book. The combined terms
Treasury Secretary is significant here as a noun and a role. Therefore,
to answer this question, Watson had to find a passage that involved
submitting and accepting something between Treasury Secretary Chase and
Lincoln (the category of the clue). However, also notice that the
category does not say “President Lincoln” necessarily. The correct
answer turned out to be “What is a resignation?”.

---------------
So this says that Watson is a deep NLP system. It could mean that it
made many deep searches into the corpus of text in order to make better
contextual decisions based on fundamental statistics (like word
co-occurrence). There may not have been anything like multiple layers of
neural nets, but I doubt that the use of neural nets is a requirement
for 'Deep Learning'. So whether Watson-Jeopardy used something like deep
learning (what I called deep learning with a little d and a little l)
boils down to the question of whether the NLP rules for deriving the
contextual decisions were designed by programmers or whether any of them
were derived by machine learning. Presuming that some of the rules they
used were derived by computer programs (not explicitly designed by some
person), then it boils down to the question of whether Watson *itself*
used machine learning to derive deep NLP rules (of learning).


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now