But it's a preaching-to-the-choir argument: is there anything more to the argument than the intuition that automatic symbol manipulation cannot create understanding? I think it can, though I have yet to show it.

Searle answers that exact question in his paper by saying "Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics." [Searle (1980)]

But I know of no definition of "comprehension" that a program or a Chinese Room could not satisfy -- of course, I don't know /any/ complete definition of "comprehension," and maybe when I do, it will have the feature you believe it has.

I used to get hung up on this point as well -- but then I realized that the Chinese Room (as opposed to many current AGI programs) has no provision for self-modification or intentionality. This is why a Chinese Room will never be a strong AI, but a program that does have goals/intentionality and the capability to learn and modify itself can be.

   Mark

P.S. Thanks for the great clarity of thought and expression ... it made answering much easier.



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/