Dear EdFromNH,

Allow me to disagree with and correct the following statement you made about Searle:

On 20/01/16 23:14, EdFromNH wrote:
One of the major philosophical advancements in understanding cognitive computing is that through grounding with massive experientially connected experiential data syntax can, in fact, compute semantics. The advances being made in deep learning strongly support this. For example, deep learning indicates the visual meaning of a concept such as "cat", with all of its rich possible visual variations can be understood by what Searle calls a syntactical system. If deep learning systems for vision were connected with deep learning systems for hearing, touch, emotions, goals, behaviors, etc, the combined system would have even a much richer understanding of the meaning of a word such as "cat".

So Searle's thinking is deeply flawed.
I would like to argue that Searle's thinking is not deeply flawed. There are two points at which I think there is a flaw in the above argument:

1) A point of disagreement: As you correctly stated, deep learning requires "massive experientially connected experiential data". But this is not the case for humans. In contrast to deep learning, for human learning a single example is often enough. For example, a child may play with one single toy car, and after having played with that car, the child can recognize other cars far better than a deep learning system can. Moreover, a child can easily recognize the following drawing as a car:

[drawing of a car]
even if the child has never seen this type of drawing before.

The child has not been trained on thousands of examples of such drawings. The child /understands/ that this is a car because it grasps the concept of a car and the relationship between the concept and the drawing.

That is a huge difference from deep learning.

(For more drawings of what only humans can do, see here:
http://ieet.org/index.php/IEET/more/nikolic20160108)


2) A correction: Searle never said that the following is not true: "... data syntax can, in fact, address the problems of semantics". To the contrary, the whole thought experiment of the Chinese room is about exactly that: syntax doing the job of semantics. Also, if you watch the mentioned talk at Google, you will see that he too gives examples of computer-based applications in which syntax computes semantics. He points out over and over: computers do the job of semantics through syntax.

What he says is something else. His point is that this is /not the way/ the biological mind/brain does it. Our minds/brains do it in a different way.

According to Searle, we do not yet understand how the brain does it.

(My opinion: We now finally have a theory with which to begin understanding how the brain does semantics -- the theory of practopoiesis:
http://www.sciencedirect.com/science/article/pii/S002251931500106X )

Best,

Danko





-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now