On Thu, Feb 4, 2016 at 2:42 PM, EdFromNH . <[email protected]> wrote:

> Ben in a piece he wrote several months ago about why deep learning systems
> often make recognition mistakes that seem totally unlike any type of
> recognition mistakes humans would make, pointed out that this might be due
> to the fact that current mechanical deep learning systems normally don't
> have the interaction across multiple sensory, behavioral, and feedback
> modalities at multiple different compositional and generalizational levels
> which would cause the representations learned by such systems to be better
> sculpted in higher experiential dimensions, like those in the brain.
>


That makes sense, but I disagree for a technical reason. Deep Learning
isn't able to integrate different -kinds- of events, even when they are
expressed within the same modality, if the events are too different. You
could apply deep learning to a frame-by-frame examination of different
videos of cats getting out of a box and, now that I think of it, that
might work, but it would not be able to map the concept of 'getting out
of the box' onto other kinds of 'getting out of' unless there were strong
visual similarities. It would be much easier to do this with some kind of
discrete references lying outside of what we are calling deep learning. I
know you acknowledged that the brain uses other techniques, but what I am
saying is that if Deep Learning is not exactly how the brain works (so to
speak), then isn't it possible that natural learning is based on a more
integrated methodology? I mean not Deep Learning and discrete methods
combined in a hybrid, but some kind of more subtle and more powerful
network learning.
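As a caricature of the 'discrete references' idea: if an event is stored as an explicit relation rather than as pixel statistics, the same 'getting out of' concept matches across visually unrelated situations. This is only a toy sketch; the representation and all names in it are invented for illustration, not a claim about any actual system.

```python
# Toy sketch: a discrete relational representation lets one
# "getting out of" concept transfer across visually unrelated domains.
# All names and structures here are invented for illustration.

def getting_out_of(agent, container):
    # The concept is a relation over discrete references,
    # independent of visual appearance.
    return ("getting_out_of", agent, container)

# Two visually dissimilar events share the same relational structure.
e1 = getting_out_of("cat", "box")
e2 = getting_out_of("driver", "car")

# A purely appearance-based matcher sees nothing in common here,
# but the discrete representation matches on the relation itself.
assert e1[0] == e2[0]  # same concept, different surface forms
```

The point of the sketch is only that the mapping is trivial once the event is discrete and relational, whereas a pixel-level learner has no handle on it without visual similarity.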

Jim Bromer

On Thu, Feb 4, 2016 at 2:42 PM, EdFromNH . <[email protected]> wrote:

> The brain clearly uses other techniques besides deep learning.  Danko's
> claim that humans can learn from one example does not show the brain does
> not use deep learning.  It shows that the human brain's equivalent of deep
> learning networks is accompanied, among multiple other things, by an
> episodic memory system -- which records sequences of state activations
> across multiple levels in such deep learning networks.  Ben in a piece he
> wrote several months ago about why deep learning systems often make
> recognition mistakes that seem totally unlike any type of recognition
> mistakes humans would make, pointed out that this might be due to the fact
> that current mechanical deep learning systems normally don't have the
> interaction across multiple sensory, behavioral, and feedback modalities at
> multiple different compositional and generalizational levels which would
> cause the representations learned by such systems to be better sculpted in
> higher experiential dimensions, like those in the brain.
>
> On Thu, Feb 4, 2016 at 1:02 PM, Jim Bromer <[email protected]> wrote:
>
>> I mean that children have to learn to be able to 'understand' the concept
>> of a car and of all the features that might be identified as parts of a car
>> and, significantly relative to your example, a child has to have learned to
>> identify cartoons of objects or people-like beings from thousands and
>> thousands of exposures to cartoons. That aspect of the acquisition of that
>> ability does have something in common with Deep Learning.  Deep Learning
>> hasn't yet shown how different concepts might be integrated so that an AI
>> program could think outside of the box. That is something that I agree
>> with. It seems like Deep Learning has stretched narrow AI a little but it
>> hasn't been able to integrate concepts that rely on a lot of different
>> kinds of examples. Deep Learning can be used, for example, to detect images
>> of 'cats'; it can be used to detect images of 'boxes'; and it can be used to
>> detect images of 'cats in boxes'. But it cannot reliably detect images of
>> 'cats getting out of a box.'
>>
>> Jim Bromer
>>
>> On Thu, Feb 4, 2016 at 11:47 AM, Jim Bromer <[email protected]> wrote:
>>
>>> Danko,
>>> Although I am not opposing everything you said in this message to Ed,
>>> let me make one comment.
>>>
>>> You said,
>>> "1) A point of disagreement: As you correctly stated, deep learning
>>> requires "massive experientially connected experiential data". But this is
>>> not the case for humans. In contrast to deep learning, for human learning a
>>> single example is often just enough. For example, a child may play with one
>>> single toy car and after having played with that car, the child can
>>> recognize other cars much better than deep learning."
>>>
>>> This insight is not based on well-thought-out psychological research. I
>>> am going to assume that you understand the implication I am trying to
>>> make, because I have to go right now.
>>> Jim
>>>
>>>
>>> Jim Bromer
>>>
>>> On Thu, Feb 4, 2016 at 2:14 AM, Danko Nikolic <
>>> [email protected]> wrote:
>>>
>>>> Dear EdFromNH,
>>>>
>>>>   Allow me to disagree with and correct you regarding your following
>>>> statement about Searle:
>>>>
>>>> On 20/01/16 23:14, EdFromNH . wrote:
>>>>
>>>> One of the major philosophical advancements in understanding cognitive
>>>> computing is that through grounding  with massive experientially connected
>>>> experiential data syntax can, in fact, compute semantics.  The advances
>>>> being made in deep learning strongly support this.  For example, deep
>>>> learning indicates the visual meaning of a concept such as "cat", with all
>>>> of its rich possible visual variations can be understood by what Searle
>>>> calls a syntactical system.  If deep learning systems for vision were
>>>> connected with deep learning systems for hearing, touch, emotions, goals,
>>>> behaviors, etc, the combined system would have even a much richer
>>>> understanding of the meaning of a word such as "cat".
>>>>
>>>> So Searle's thinking is deeply flawed.
>>>>
>>>> I would like to argue that Searle's thinking is not deeply
>>>> flawed. There are two points at which I think there is a flaw in the
>>>> above argument:
>>>>
>>>> 1) A point of disagreement: As you correctly stated, deep learning
>>>> requires "massive experientially connected experiential data". But this is
>>>> not the case for humans. In contrast to deep learning, for human learning a
>>>> single example is often just enough. For example, a child may play with one
>>>> single toy car and after having played with that car, the child can
>>>> recognize other cars much better than deep learning. Moreover, a child can
>>>> easily recognize the following drawing as a car:
>>>>
>>>>
>>>> [simple line drawing of a car -- image not preserved in the archive]
>>>>
>>>> even if the child has never seen this type of drawing before.
>>>>
>>>> The child has not been trained on thousands of examples of such
>>>> drawings. The child *understands* that this is a car because it
>>>> understands the concept of the car and the relationships between the
>>>> concept and the drawing.
>>>>
>>>> That is a huge difference to deep learning.
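Danko's one-example point can be caricatured as follows: a single stored example plus a similarity measure already supports crude recognition of new variants, whereas gradient-trained networks typically need many labelled examples. This is only a toy nearest-prototype sketch with hand-made feature vectors; it is not a claim about how children, or any real system, actually do it.

```python
# Toy one-shot recognition: store a single example's feature vector
# and classify new inputs by distance to it. The feature vectors are
# hand-made stand-ins, not learned representations.
import math

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# One single example of a "car": (wheels, doors, elongation).
car_prototype = (4.0, 2.0, 1.8)

def looks_like_car(features, threshold=1.5):
    # Anything close enough to the one stored example counts as a car.
    return distance(features, car_prototype) < threshold

# A cartoon-like car: exaggerated proportions, same structure.
print(looks_like_car((4.0, 2.0, 2.5)))  # prints True
# Something structurally different is rejected.
print(looks_like_car((2.0, 0.0, 0.3)))  # prints False
```

Of course, the hard part that this sketch hides is where the structured feature description comes from in the first place; that is exactly the part the child's concept of "car" supplies and deep learning does not.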
>>>>
>>>> (For more drawings of what only humans can do, see here:
>>>> http://ieet.org/index.php/IEET/more/nikolic20160108)
>>>>
>>>>
>>>> 2) A correction: Searle never said that the following is not true:
>>>> "... data syntax can, in fact, address the problems of semantics". To the
>>>> contrary, the whole thought experiment of the Chinese Room is about exactly
>>>> that: syntax doing the job of semantics. Also, if you watch the mentioned
>>>> talk at Google, you will see that he also gives examples of computer-based
>>>> applications in which syntax computes semantics. He keeps pointing out over
>>>> and over: computers do the job of semantics by syntax.
>>>>
>>>> What he says is something else. His point is that this is *not the way*
>>>> the biological mind/brain does it. Our minds/brains do it in a different
>>>> way.
>>>>
>>>> According to Searle, we do not yet understand how the brain does it.
>>>>
>>>> (My opinion: We finally now have a theory to begin understanding how
>>>> the brain does semantics -- which is the theory of practopoiesis:
>>>> http://www.sciencedirect.com/science/article/pii/S002251931500106X  )
>>>>
>>>> Best,
>>>>
>>>> Danko
>>>>
>>>>
>>>> *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>>>> <https://www.listbox.com/member/archive/rss/303/24379807-653794b5> |
>>>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>>>> <http://www.listbox.com>
>>>>
>>>
>>>


