I did not get confused; I was trying to make the point that someone
like Searle seems to rely on these strong dichotomies in his theories,
while Kurzweil, as far as I could tell, did not. I happen to agree
with Kurzweil, so perhaps my opinion is based on the common ground I
seem to have with him. On the other hand, I do agree with a lot of
what Searle was saying in the video I started to watch.

I do think that there are a lot of interesting things happening, and
developing hybrid systems might be a good way to go about it. On the
other hand, I think there are other kinds of networks - like a
conceptual network - which should be able to do some of the things
that Deep Learning is doing now. A Neural Network is a simple kind of
network (although there can be different variations), so it may be the
quickest way to get some fast results using network-related methods.
But although a Conceptual Network would be inherently more
problematic, I do think that it holds the greatest potential.
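To make the contrast concrete, here is a toy sketch of what I mean by
a Conceptual Network - an explicit graph of labeled relations that can
be queried by following links, rather than the distributed weights of
a Neural Network. All of the names and relations below are
hypothetical illustrations, not a real system:

```python
# Toy sketch of a conceptual network: concepts are nodes, and labeled
# relations between them are stored as explicit, inspectable edges.
class ConceptualNetwork:
    def __init__(self):
        # (concept, relation) -> set of related concepts
        self.edges = {}

    def relate(self, concept, relation, other):
        # add an explicit labeled edge between two concepts
        self.edges.setdefault((concept, relation), set()).add(other)

    def related(self, concept, relation):
        # answer a query by following links, not by pattern matching
        return self.edges.get((concept, relation), set())

net = ConceptualNetwork()
net.relate("cat", "is-a", "animal")
net.relate("cat", "can", "sit in a box")
net.relate("castle", "located-on", "moon")  # an out-of-the-box juxtaposition

print(net.related("cat", "is-a"))  # {'animal'}
```

The point of the sketch is that each relation stays a discrete,
inspectable object, which is also what makes integrating many
concepts harder than simply merging weights.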

Jim Bromer


On Tue, Jan 19, 2016 at 6:11 PM, Mike Archbold <[email protected]> wrote:
> Jim,
>
> Sorry, we got tangled up in different references....  I was commenting
> on a link to "kurzweilaI" about "why my phone doesn't understand me"
> posted by Dr Roberts Jr, but I think you were commenting on a talk by
> Kurzweil which I have not seen.
>
> Anyway my only point in all this is that while deep learning
> considered in isolation may not be tantamount to AGI, it can still be
> used in an overall architecture.   Everybody realizes this, I think,
> but there is nothing lost in emphasizing it and not getting caught in
> the trap of "either this technique solves AGI *OR* we are not making
> progress toward AGI."
>
> Mike
>
> On 1/18/16, Jim Bromer <[email protected]> wrote:
>> Mike Said:
>> The statement:
>>
>> “Apple’s Siri focuses on statistical regularities, but communication
>> is not about statistical regularities,” he said. “Statistical
>> regularities may get you far, but it is not how the brain does it. In
>> order for computers to communicate with us, they would need a
>> cognitive architecture that continuously captures and updates the
>> conceptual space shared with their communication partner during a
>> conversation.”
>>
>> Well, to me that sounds a bit like a false dichotomy.  We could use
>> both (what he calls) statistical regularities  working within a
>> cognitive architecture framework.
>>
>> Mike
>> -------------------------------------------------
>> Kurzweil said that computers would need x to communicate with someone
>> during conversation. He did not say that an assessment of statistical
>> regularities would need to be excluded to achieve this, so I did not
>> see his statement as a dichotomy other than to say that statistical
>> regularities were not enough.
>>
>> When I first started listening to the Searle Google Talk I thought
>> that he was not going to be so iconoclastic about his one main issue.
>> I agree with a lot of the things he said: computers are syntactic;
>> we do not understand how consciousness works. His comments did help
>> me to rethink my plans a little in a way that might lead to more
>> practical results. However, I realized that we have a fundamental
>> difference because he thinks that the human brain is not a syntactic
>> device. Furthermore, using Searle's dichotomy we can say that human
>> consciousness is (subjectively) observer relative. In spite of the
>> fact that we do not know how the mind produces consciousness, and
>> regardless of the mysteriousness of conscious experience, our
>> experience of consciousness is still relative to our observation
>> (just as our feeling that a computer is "thinking" when it does some
>> computation is relative).
>>
>> I don't want to spend the time to get more precise quotes from the
>> Searle talk but, from memory, Searle started by talking about the
>> dichotomy of epistemological knowledge and ontological knowledge.
>> Ontological knowledge is knowledge that comes from existence, and
>> epistemological knowledge is more like knowledge that has been
>> written down or derived from higher abstractions. And he says, even
>> though he realizes that computers can do amazing things, that the
>> computer is syntactic and does not have any semantic knowledge about
>> anything. I do not agree that Searle's dichotomies are absolute.
>> Syntactic knowledge does contain some semantic information, and we
>> can represent semantic knowledge using syntax. Epistemological
>> knowledge can be used to encode ontological knowledge. And the brain
>> is a syntactic device.
>>
>> So I think that Searle's dichotomies are excessive, even though I
>> agree with some of his viewpoints and feel that they can be used to
>> help produce more effective results. In contrast, I don't see a
>> dichotomy in what Kurzweil was saying, unless you are saying that the
>> observation and utilization of statistical regularities is enough to
>> produce a cognitive architecture capable of true conversation between
>> computers and people.
>> Jim Bromer
>>
>>
>> On Mon, Jan 18, 2016 at 7:00 PM, Mike Archbold <[email protected]> wrote:
>>> On 1/17/16, Jim Bromer <[email protected]> wrote:
>>>> The article by Kurzweil seemed to be insightful.
>>>>
>>>
>>>
>>> To me it sounded like another take on the combinatorial explosion
>>> issue, which is well known, coming from the angle of the observed
>>> neural structure of context.
>>>
>>> The statement:
>>>
>>> “Apple’s Siri focuses on statistical regularities, but communication
>>> is not about statistical regularities,” he said. “Statistical
>>> regularities may get you far, but it is not how the brain does it. In
>>> order for computers to communicate with us, they would need a
>>> cognitive architecture that continuously captures and updates the
>>> conceptual space shared with their communication partner during a
>>> conversation.”
>>>
>>> Well, to me that sounds a bit like a false dichotomy.  We could use
>>> both (what he calls) statistical regularities  working within a
>>> cognitive architecture framework.
>>>
>>> Mike
>>>
>>>
>>>
>>>
>>>> Jim Bromer
>>>>
>>>> On Thu, Jan 14, 2016 at 3:24 PM, Raymond D Roberts Jr. via AGI <
>>>> [email protected]> wrote:
>>>>
>>>>>
>>>>> http://www.kurzweilai.net/why-doesnt-my-phone-understand-me-yet?utm_source=KurzweilAI+Daily+Newsletter&utm_campaign=0481d44bf4-UA-946742-1&utm_medium=email&utm_term=0_6de721fb33-0481d44bf4-282058098
>>>>>
>>>>> Raymond D. Roberts Jr.
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: Jim Bromer <[email protected]>
>>>>> To: AGI <[email protected]>
>>>>> Sent: Thu, Jan 14, 2016 3:16 pm
>>>>> Subject: Re: [agi] Re: If Deep Learning is It then Why Are Search
>>>>> Engines
>>>>> Incapable of Thinking (Outside the Box or Otherwise)?
>>>>>
>>>>> I don't think that deep learning only applies to pure, standalone
>>>>> deep learning systems. It could be used as part of a system which is
>>>>> attuned to discovering relations. And it could return relationships
>>>>> (in language, for example) which could then be evaluated. Supervised
>>>>> learning is part of machine and deep learning, so a system which
>>>>> returns candidate samples that can be evaluated could still be
>>>>> classified as machine learning (or could be said to have something
>>>>> in common with machine learning).
>>>>>
>>>>> Older AI paradigms have a much more fixed definition than newer
>>>>> ones. Comparing Watson, which was apparently able to learn new
>>>>> things about language, to an old Expert System does not sound right.
>>>>> My argument is that almost all contemporary AI paradigms involve
>>>>> some kind of network of relations, so to presuppose that an advanced
>>>>> NLP program cannot 'learn' about NLP does not make sense. When I get
>>>>> some time I will ask someone at IBM what the phrase "Deep NLP"
>>>>> denotes. Does it mean deep search NLP, or something closer to deep
>>>>> learning NLP? There is no reason to rule out the possibility that
>>>>> new relationships in NLP could be detected in an applied network (of
>>>>> some kind) and then used as an abstraction to search for other cases
>>>>> that might have a similar *kind* of relationship.
>>>>>
>>>>> I am interested in what you said, Ben, but I get the sense that
>>>>> Watson was used to detect relationships in NLP which were then
>>>>> evaluated (probably in different ways).
>>>>>
>>>>> Jim Bromer
>>>>>
>>>>> On Thu, Jan 14, 2016 at 10:30 AM, Ben Goertzel <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> ***
>>>>>> My question was why haven't there been clear advances in search engine
>>>>>> technology in the 2 years since Deep Learning and Watson have made
>>>>>> very
>>>>>> obvious advances in AI?
>>>>>> ***
>>>>>>
>>>>>> The Web is very big.   Internally within Google, the API calls you can
>>>>>> make against the whole Web are many fewer than the ones you can make
>>>>>> against, say, Wikipedia.
>>>>>>
>>>>>> But business-wise, there is more $$ to be made in making crude
>>>>>> searches
>>>>>> against the whole Web slightly less crude, than in making more refined
>>>>>> and
>>>>>> intelligent searches against smaller text-bases...
>>>>>>
>>>>>> Also,
>>>>>>
>>>>>> -- Watson is basically an expert system, albeit a very clever one....
>>>>>> Expert system methods don't scale, not even fancy ones...
>>>>>>
>>>>>> -- Deep learning in its current form works best for high-dimensional
>>>>>> floating point data, not discrete data like text ....  Also, current
>>>>>> deep
>>>>>> learning algorithms rely essentially on bottom-up pattern recognition,
>>>>>> with
>>>>>> limited top-down feedback.  But real language understanding can't get
>>>>>> approximated very well without sophisticated top-down feedback....
>>>>>> I.e.,
>>>>>> image and speech understanding can get further without cognitive
>>>>>> feedback,
>>>>>> than language understanding...
>>>>>>
>>>>>>
>>>>>> ... ben
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Jan 14, 2016 at 11:22 PM, Jim Bromer <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>> http://www.ibm.com/blogs/think/2016/01/14/the-next-grand-challenge-computers-that-converse-like-people/
>>>>>>>
>>>>>>> Jim Bromer
>>>>>>>
>>>>>>> On Thu, Jan 14, 2016 at 10:20 AM, Jim Bromer <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I watched the Pesenti presentation on youtube a few days ago.
>>>>>>>>
>>>>>>>> Neural Networks can learn but they cannot use that learning
>>>>>>>> efficiently
>>>>>>>> in many important ways. Discrete AI can acquire more specific
>>>>>>>> (discrete) 'objects' as they learn. So back in the 90's people
>>>>>>>> started
>>>>>>>> using hybrids that combined neural networks with discrete methods.
>>>>>>>> Machine
>>>>>>>> learning includes advances on hybrid methods.
>>>>>>>>
>>>>>>>> Most discrete methods are built around networks of relations between
>>>>>>>> the data objects which represent 'concepts' or 'ideas' or
>>>>>>>> 'knowledge'
>>>>>>>> or
>>>>>>>> 'know how' or whatever it is that you want to call the data objects
>>>>>>>> that
>>>>>>>> would be used to hold knowledge in a (more) discrete AI program. So
>>>>>>>> a
>>>>>>>> contemporary discrete AI program is also going to be an
>>>>>>>> implementation
>>>>>>>> of a
>>>>>>>> network. The network may include numerical values but even if it
>>>>>>>> doesn't
>>>>>>>> it probably will represent categories of association. That
>>>>>>>> definition
>>>>>>>> is
>>>>>>>> not meant to be complete because I am only trying to get an idea
>>>>>>>> across:
>>>>>>>> Modern discrete AI methods involve network methods that can
>>>>>>>> potentially
>>>>>>>> be
>>>>>>>> seen as representatives of 'thought' that are more sophisticated
>>>>>>>> than
>>>>>>>> neural networks. That makes sense.
>>>>>>>>
>>>>>>>> Pesenti was talking about an IBM researcher from the 70s who found
>>>>>>>> that he could use statistical methods to *learn* about speech
>>>>>>>> without a linguist. That would be a form of machine learning.
>>>>>>>> Therefore it is fairly safe for me to conclude that Watson used
>>>>>>>> machine learning in what Watson researchers called "Deep NLP".
>>>>>>>>
>>>>>>>> My question was why haven't there been clear advances in search
>>>>>>>> engine
>>>>>>>> technology in the 2 years since Deep Learning and Watson have made
>>>>>>>> very
>>>>>>>> obvious advances in AI?  I did an image search for "cats" on google
>>>>>>>> and
>>>>>>>> it
>>>>>>>> was very good.  I only found one dog (a small dog which had been
>>>>>>>> photoshopped with multiple legs, somewhat like a caterpillar). I
>>>>>>>> tried some
>>>>>>>> other
>>>>>>>> searches on images and the results were also very good. The results
>>>>>>>> were
>>>>>>>> really amazing. So there have been some advances on image searches
>>>>>>>> in
>>>>>>>> the
>>>>>>>> past 2 years. The search for "castles on the moon" did not
>>>>>>>> distinguish
>>>>>>>> between castles pictured as being on the moon from castles with the
>>>>>>>> moon in
>>>>>>>> the scene. So even though I am nit-picking to some extent the point
>>>>>>>> is
>>>>>>>> that
>>>>>>>> it looks like you have to train a deep learning neural network with
>>>>>>>> a
>>>>>>>> narrow training sample in order to teach it to recognize something
>>>>>>>> that
>>>>>>>> would require a little thinking outside the box. That was also a
>>>>>>>> problem
>>>>>>>> with Watson. Its Deep NLP could be trained with all the questions
>>>>>>>> from
>>>>>>>> past
>>>>>>>> Jeopardy shows (and Jeopardy-style questions that researchers could
>>>>>>>> create)
>>>>>>>> but can it be trained to handle juxtapositions of linguistic
>>>>>>>> 'concepts'
>>>>>>>> that might require some thinking outside of the box? (Incidentally I
>>>>>>>> tried
>>>>>>>> "cat in a box" and Google did very well. But when I tried "full
>>>>>>>> stadium" it
>>>>>>>> did include pictures of stadiums that were not empty. I could spot
>>>>>>>> them
>>>>>>>> as
>>>>>>>> I was paging quickly through the images.) But I guess there have
>>>>>>>> been
>>>>>>>> some significant advances in the past 2 years. They just do not
>>>>>>>> include
>>>>>>>> using language to refine your searches.
>>>>>>>>
>>>>>>>> My idea of Concept Integration is that different concepts cannot
>>>>>>>> always
>>>>>>>> be merged - to take a neural network as an example - because as
>>>>>>>> more
>>>>>>>> concepts are integrated the requirements of a part of the conceptual
>>>>>>>> integration may change. To restate that in another way, the
>>>>>>>> integration
>>>>>>>> of a
>>>>>>>> number of concepts will typically change if additional concepts are
>>>>>>>> integrated with them. This is what would happen if you tried to
>>>>>>>> refine
>>>>>>>> your
>>>>>>>> search using conversation.
>>>>>>>>
>>>>>>>> Jim Bromer
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Jan 12, 2016 at 10:21 PM, LAU <> wrote:
>>>>>>>>
>>>>>>>>> OK, as you wish ... It's just a word; we do not agree on its
>>>>>>>>> meaning. But it's OK. Whether you call it "deep learning",
>>>>>>>>> "conceptual learning" ... or "Hakuna Matata learning", it's not
>>>>>>>>> important. Stop playing with words.
>>>>>>>>>
>>>>>>>>> Let's get back to the topic of this thread. If I understand what
>>>>>>>>> you want to promote:
>>>>>>>>> 1) You note that deep learning implemented in industry is not as
>>>>>>>>> intelligent as expected, taking into account the computation power
>>>>>>>>> available.
>>>>>>>>> 2) Watson seems to be less "narrow" than other implementations.
>>>>>>>>> 3) What it is missing is "conceptual integration".
>>>>>>>>> Correct me if I'm wrong.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> In my humble opinion, there are no intelligent machines simply
>>>>>>>>> because people don't try to, or more likely don't figure out how
>>>>>>>>> to, make them more intelligent.
>>>>>>>>> Implementing "conceptual integration" is certainly an approach
>>>>>>>>> that some researchers have tried, but it has led to no significant
>>>>>>>>> results so far. According to Wikipedia, the theory dates from the
>>>>>>>>> 1990s. Twenty years later, still nothing.
>>>>>>>>>
>>>>>>>>> There's no magic behind the deep learning - I mean the neural
>>>>>>>>> networks - used by Google or Facebook. Very roughly, it's just a
>>>>>>>>> kind of "universal approximator". And it's not the computation
>>>>>>>>> power that will make it spontaneously more intelligent.
>>>>>>>>> Deep learning has become very popular in recent years because it's
>>>>>>>>> easier to get a neural network to accomplish a picture or voice
>>>>>>>>> recognition task (*I've made a small one myself from scratch in a
>>>>>>>>> few days*) than to handcraft the code, and it gives a better
>>>>>>>>> result.
>>>>>>>>> But basically, a neural network is just another kind of
>>>>>>>>> programming. Instead of coding a multitude of operations to
>>>>>>>>> achieve a complex task, a neural network can do it itself by
>>>>>>>>> learning from examples.
>>>>>>>>> And the question will be: how do you teach a neural network what
>>>>>>>>> "conceptual integration" is?
>>>>>>>>>
>>>>>>>>> In the Paris tech conference video (*on youtube, but it's in
>>>>>>>>> French ...*), Jerome Pesenti said something else interesting. He
>>>>>>>>> cites an IBM researcher from the 70s, Fred Jelinek, who said
>>>>>>>>> "*Every time I fire a linguist the performance of the speech
>>>>>>>>> recognizer goes up*". Jelinek's speech recognizer team was
>>>>>>>>> composed partly of linguists and partly of engineers. By replacing
>>>>>>>>> a linguist, who treats language as a human does, with an engineer,
>>>>>>>>> who does mathematics and statistics on words, the result got
>>>>>>>>> better. It seems to be the philosophy at IBM to work differently
>>>>>>>>> than a human does, and it seems to give better results. Instead of
>>>>>>>>> playing Jeopardy in a human way, Watson applies statistics to the
>>>>>>>>> database (*which was Wikipedia*).
>>>>>>>>>
>>>>>>>>> What I want to say is that maybe "conceptual integration" is a
>>>>>>>>> track to explore for building AGI. Or maybe the solution will come
>>>>>>>>> from elsewhere.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> LAU
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 12/01/2016 10:46, Jim Bromer wrote:
>>>>>>>>>
>>>>>>>>> Deep Learning is Deep Machine Learning, and Machine Learning is
>>>>>>>>> in no way limited to Neural Networks. So there is no way that
>>>>>>>>> Deep Learning is going to be forever defined to refer to Machine
>>>>>>>>> Learning that uses Neural Networks (in certain ways). From that
>>>>>>>>> point of view I can say that Watson-Jeopardy probably did use a
>>>>>>>>> kind of deep learning.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> -------------------------------------------
>>>>>>>>> AGI
>>>>>>>>> Archives: https://www.listbox.com/member/archive/303/=now
>>>>>>>>> RSS Feed:
>>>>>>>>> https://www.listbox.com/member/archive/rss/303/27172223-36de8e6c
>>>>>>>>> Modify Your Subscription: https://www.listbox.com/member/?&;
>>>>>>>>> Powered by Listbox: http://www.listbox.com
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Ben Goertzel, PhD
>>>>>> http://goertzel.org
>>>>>>
>>>>>> "The reasonable man adapts himself to the world: the unreasonable one
>>>>>> persists in trying to adapt the world to himself. Therefore all
>>>>>> progress
>>>>>> depends on the unreasonable man." -- George Bernard Shaw
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>
>

