http://www.kurzweilai.net/why-doesnt-my-phone-understand-me-yet


Raymond D. Roberts Jr.



-----Original Message-----
From: Jim Bromer <[email protected]>
To: AGI <[email protected]>
Sent: Thu, Jan 14, 2016 3:16 pm
Subject: Re: [agi] Re: If Deep Learning is It then Why Are Search Engines 
Incapable of Thinking (Outside the Box or Otherwise)?



I don't think the term deep learning applies only to pure deep learning systems. It could 
be used as part of a system which is attuned to discovering relations, and it 
could return relationships (in language, for example) which could then be 
evaluated. Supervised learning is part of machine learning and deep learning, so a system 
which returns candidate samples that can be evaluated could still be classified 
as machine learning (or could at least be said to have something in common with machine 
learning).


Older AI paradigms have much more fixed definitions than newer ones. Comparing 
Watson, which was apparently able to learn new things about language, to an old 
expert system does not sound right. My argument is that almost all contemporary 
AI paradigms involve some kind of network of relations, so to presuppose that an 
advanced NLP program cannot 'learn' about NLP does not make sense. When I get 
some time I will ask someone at IBM what the phrase "Deep NLP" denotes. Does it 
mean deep-search NLP, or something closer to deep-learning NLP? There is no 
reason to rule out the possibility that new relationships in NLP could be 
detected in an applied network (of some kind) and then be used as an abstraction 
to search for other cases that might have a similar *kind* of relationship.


I am interested in what you said, Ben, but I get the sense that Watson was used 
to detect relationships in NLP which were then evaluated (probably in different 
ways).




Jim Bromer



On Thu, Jan 14, 2016 at 10:30 AM, Ben Goertzel <[email protected]> wrote:

***
My question was why haven't there been clear advances in search engine 
technology in the 2 years since Deep Learning and Watson have made very obvious 
advances in AI?
***


The Web is very big. Internally within Google, the API calls you can make 
against the whole Web are many fewer than the ones you can make against, say, 
Wikipedia.


But business-wise, there is more $$ to be made in making crude searches against 
the whole Web slightly less crude, than in making more refined and intelligent 
searches against smaller text-bases...


Also,


-- Watson is basically an expert system, albeit a very clever one....  Expert 
system methods don't scale, not even fancy ones...


-- Deep learning in its current form works best for high-dimensional floating 
point data, not discrete data like text....  Also, current deep learning 
algorithms rely essentially on bottom-up pattern recognition, with limited 
top-down feedback.  But real language understanding can't be approximated very 
well without sophisticated top-down feedback....  I.e., image and speech 
understanding can get further without cognitive feedback than language 
understanding can...
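A minimal sketch of the continuous-vs-discrete point above (the vocabulary and dimensions are invented for illustration): a network cannot consume raw text tokens directly; each discrete word id must first be mapped through an embedding table into a continuous vector, whereas an image is a grid of floats from the start.

```python
import numpy as np

# A tiny hypothetical vocabulary: text arrives as discrete symbols, not floats.
vocab = {"the": 0, "cat": 1, "sat": 2}

# An embedding table turns each discrete token id into a continuous vector
# that a neural network can actually operate on.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 4))  # 3 words x 4 dimensions

sentence = ["the", "cat", "sat"]
ids = [vocab[w] for w in sentence]
vectors = embeddings[ids]          # now continuous data
print(vectors.shape)               # (3, 4)

# An image, by contrast, is already high-dimensional floating point data:
image = rng.random((8, 8))         # 8x8 grid of floats in [0, 1)
print(image.shape)                 # (8, 8)
```

The lookup is the only "translation" step; everything downstream of it is ordinary floating-point computation, which is where current deep learning is strongest.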




... ben


 





On Thu, Jan 14, 2016 at 11:22 PM, Jim Bromer <[email protected]> wrote:

http://www.ibm.com/blogs/think/2016/01/14/the-next-grand-challenge-computers-that-converse-like-people/



Jim Bromer



On Thu, Jan 14, 2016 at 10:20 AM, Jim Bromer <[email protected]> wrote:


I watched the Pesenti presentation on YouTube a few days ago.



Neural networks can learn, but they cannot use that learning efficiently in many 
important ways. Discrete AI can acquire more specific (discrete) 'objects' as 
it learns. So back in the 90s people started using hybrids that combined 
neural networks with discrete methods. Machine learning includes advances on 
hybrid methods.
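A toy sketch of the hybrid idea (the scorer, knowledge base, and threshold are all invented for the example): a numeric, network-style component scores candidate facts, and a discrete, symbolic layer accepts or rejects them against known relations.

```python
import numpy as np

# Hypothetical "neural" part: a simple linear scorer over feature vectors
# (standing in for a trained network).
rng = np.random.default_rng(1)
weights = rng.normal(size=3)

def neural_score(features):
    """Continuous scoring of a candidate fact."""
    return float(np.dot(weights, features))

# Discrete part: a symbolic knowledge base of (subject, relation, object) triples.
knowledge = {("cat", "is_a", "animal"), ("castle", "located_on", "ground")}

def hybrid_accept(subject, relation, obj, features, threshold=0.0):
    """Accept a candidate only if the numeric scorer likes it AND it does
    not contradict a relation already in the discrete knowledge base."""
    if neural_score(features) < threshold:
        return False
    for (s, r, o) in knowledge:
        if s == subject and r == relation and o != obj:
            return False  # conflicts with an existing discrete relation
    return True

# "Castle located on the moon" conflicts with the stored relation, so it is
# rejected regardless of its numeric score:
print(hybrid_accept("castle", "located_on", "moon", np.ones(3)))
```

The point of the combination is exactly the one made above: the continuous part generalizes from examples, while the discrete part holds specific 'objects' that can veto or refine its output.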


Most discrete methods are built around networks of relations between the data 
objects which represent 'concepts' or 'ideas' or 'knowledge' or 'know-how' or 
whatever you want to call the data objects that would be used to hold knowledge 
in a (more) discrete AI program. So a contemporary discrete AI program is also 
going to be an implementation of a network. The network may include numerical 
values, but even if it doesn't it will probably represent categories of 
association. That definition is not meant to be complete because I am only 
trying to get an idea across: modern discrete AI methods involve network 
methods that can potentially be seen as representatives of 'thought' that are 
more sophisticated than neural networks. That makes sense.


Pesenti was talking about an IBM researcher from the 70s who found that he 
could use statistical methods to *learn* about speech without a linguist. That 
would be a form of machine learning. Therefore it is fairly safe for me to 
conclude that Watson used machine learning in what Watson researchers called 
"Deep NLP".


My question was why haven't there been clear advances in search engine 
technology in the 2 years since Deep Learning and Watson made very obvious 
advances in AI?  I did an image search for "cats" on Google and it was very 
good.  I only found one dog (a small dog which had been photoshopped with 
multiple legs, somewhat like a caterpillar). I tried some other image searches 
and the results were also very good; really amazing, in fact. So there have 
been some advances in image search in the past 2 years. However, a search for 
"castles on the moon" did not distinguish castles pictured as being on the moon 
from castles with the moon in the scene. So even though I am nit-picking to 
some extent, the point is that it looks like you have to train a deep learning 
neural network with a narrow training sample in order to teach it to recognize 
something that would require a little thinking outside the box. That was also a 
problem with Watson. Its Deep NLP could be trained with all the questions from 
past Jeopardy shows (and Jeopardy-style questions that researchers could 
create), but can it be trained to handle juxtapositions of linguistic 
'concepts' that might require some thinking outside of the box? (Incidentally, 
I tried "cat in a box" and Google did very well. But when I tried "full 
stadium" it did include pictures of stadiums that were not full; I could spot 
them as I was paging quickly through the images.) So I guess there have been 
some significant advances in the past 2 years. They just do not include using 
language to refine your searches.


My idea of Concept Integration is that different concepts cannot always be 
merged, as in a neural network for example, because as more concepts are 
integrated, the requirements of a part of the conceptual integration may 
change. To restate that another way, the integration of a number of concepts 
will typically change if additional concepts are integrated with them. This is 
what would happen if you tried to refine your search using conversation.





Jim Bromer





On Tue, Jan 12, 2016 at 10:21 PM, LAU <> wrote:


OK, as you wish... it's just a word. We do not agree on its meaning, but that's 
fine. Whether you call it "deep learning", "conceptual learning", or "Hakuna 
Matata learning" is not important. Let's stop playing with words.

To get back to the topic of this thread, if I understand what you want to 
argue:
1) Deep learning as implemented in industry is not as intelligent as expected, 
taking into account the computational power available.
2) Watson seems to be less "narrow" than other implementations.
3) What is missing is "conceptual integration".
Correct me if I'm wrong.


In my humble opinion, there are no intelligent machines simply because people 
don't try to, or more likely don't figure out how to, make them more 
intelligent. Implementing "conceptual integration" is certainly an approach 
some researchers have tried, but it has led to no significant results so far. 
According to Wikipedia, the theory dates from the 1990s. Twenty years later, 
still nothing.

There's no magic behind deep learning, i.e. the neural networks used by Google 
or Facebook. Very roughly, a neural network is just a kind of "universal 
approximator", and it's not computational power that will spontaneously make it 
more intelligent. Deep learning has become very popular in recent years 
because it's easier to get a neural network to accomplish a picture or voice 
recognition task (I've made a small one myself from scratch in a few days) than 
to handcraft the code, and the result is better. But basically, a neural 
network is just another kind of programming: instead of coding a multitude of 
operations to achieve a complex task, a neural network can do it itself by 
learning from examples. The question then is how to teach a neural network 
what "conceptual integration" is.
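The "small one from scratch" mentioned above can indeed fit in a few dozen lines. Here is one possible sketch (layer sizes, learning rate, and iteration count are arbitrary choices): a two-layer network trained by backpropagation on XOR, the classic function a single linear unit cannot represent but a small "universal approximator" can learn from examples.

```python
import numpy as np

# XOR training data: the network must learn it purely from these examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of mean squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically close to [0, 1, 1, 0] after training
```

Nothing in the loop knows what XOR "means"; the weights are just nudged toward the examples, which is the "universal approximator" point above, and also why it is unclear how the same mechanism would be taught "conceptual integration".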

In the Paris tech conference video (on YouTube, but it's in French...), Jerome 
Pesenti said something else interesting. He cited an IBM researcher from the 
70s, Fred Jelinek, who said "Every time I fire a linguist the performance of 
the speech recognizer goes up". Jelinek's speech recognition team was composed 
partly of linguists and partly of engineers. By replacing a linguist, who 
treats language as a human does, with an engineer, who does mathematics and 
statistics on words, the result got better. It seems to be IBM's philosophy to 
work differently from how a human does, and it seems to give better results. 
Instead of playing Jeopardy the human way, Watson applied statistics to its 
database (which included Wikipedia).
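In the Jelinek spirit described above, here is a toy sketch of "statistics on words" with no linguistics at all (the corpus is invented): count bigrams and estimate which word is likely to follow another by maximum likelihood.

```python
from collections import Counter

# A toy corpus; no grammar, no parsing, just counting adjacent word pairs.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def p_next(word, nxt):
    """Maximum-likelihood estimate of P(nxt | word)."""
    return bigrams[(word, nxt)] / unigrams[word]

# "the" occurs 4 times and is followed by "cat" twice:
print(p_next("the", "cat"))  # 0.5
```

Crude as it is, this was the engineering bet that beat hand-written linguistic rules: scale the counting up to huge corpora and the statistics do the work.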

What I want to say is that "conceptual integration" may be one track to 
explore for building AGI. Or maybe the solution will come from elsewhere.


      LAU



On 12/01/2016 10:46, Jim Bromer wrote:



Deep Learning is deep machine learning, and machine learning is in no way
limited to neural networks. So there is no way that Deep Learning is going
to be forever defined to refer to machine learning that uses neural networks
(in certain ways). From that point of view I can say that Watson-Jeopardy
probably did use a kind of deep learning.


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/27172223-36de8e6c
Modify Your Subscription: https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



























-- 

Ben Goertzel, PhD
http://goertzel.org

"The reasonable man adapts himself to the world: the unreasonable one persists 
in trying to adapt the world to himself. Therefore all progress depends on the 
unreasonable man." -- George Bernard Shaw



















