The Chinese Room concept became more palatable to me when I started
putting the emphasis on "nese" and not on "room". /Chinese/ Room, not
Chinese /Room/. I don't know why this is.

I think it changes the implied meaning from a room where Chinese
happens to be spoken to a room for the speaking/production of
Chinese. Think of it as a function call: chinese(). I'm not sure the
thought experiment describes a great deal more than this. A chain or
network of Chinese rooms, with sensors at one end and motor effectors
at the other, could be entirely capable of understanding, I think.
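
To make that concrete, here's a toy sketch in Python (every name in
it is made up for illustration, not anyone's actual proposal): each
room is just a function that rewrites symbols by rote lookup, and
whatever grounding the system has lives at the ends of the chain, in
the sensor and the effector.

    # Toy sketch of the "chain of Chinese rooms" idea. Each room is a
    # pure function that shuffles symbols with no understanding; any
    # grounding lives at the ends of the chain, in the sensor that
    # turns the world into symbols and the effector that turns
    # symbols back into action.

    def sense():
        """Sensor end: (pretend) world state in, symbols out."""
        return ["ni", "hao"]

    def room_parse(symbols):
        """One room: rewrite incoming symbols by a rote rule table."""
        rules = {("ni", "hao"): ["greeting"]}
        return rules.get(tuple(symbols), ["unknown"])

    def room_respond(symbols):
        """Another room: map parsed symbols to a reply, again by rote."""
        rules = {("greeting",): ["wave"]}
        return rules.get(tuple(symbols), ["do-nothing"])

    def act(symbols):
        """Effector end: symbols out into (pretend) motor action."""
        print("action:", symbols)

    def chinese():
        """The whole pipeline: sensor -> rooms -> effector."""
        act(room_respond(room_parse(sense())))

    chinese()  # prints: action: ['wave']

No single room in there understands anything, which is about all
Searle's intuition establishes; the open question is whether a large
enough network of them, anchored to the world at both ends, could.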



On 8/4/08, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> Hi Harry,
>
> All the Chinese Room argument shows, if you accept it, is that
> approaches to AI in which symbols are *given* cannot manifest understanding
> (aka an internal sense of meaning) from the perspective of the AI. By
> given, I mean simply that the symbols are incorporated into the
> programming, or hard-coded. All purely logical or algorithmic approaches to
> AI involve symbols that are given, and so are subject to this critique.
>
> There are at least two ways around this (besides denying Searle's argument).
> One is to deny that "understanding" is necessary for an AGI, and some folks
> do take that position, although it seems untenable to me.
>
> The other is to adopt an approach to building an AI in which no symbols are
> given. Instead, symbols are acquired at runtime and, rather than referring
> to some external entity, are structured internally in terms of the AI's
> ongoing experience. I won't bother to define "ongoing experience" unless
> someone asks me to, at the risk of putting people to sleep.
>
> Terren
>
>
> --- On Mon, 8/4/08, Harry Chesley <[EMAIL PROTECTED]> wrote:
>> Terren Suydam wrote:
>> > ...
>> > Without an internal sense of meaning, symbols passed to the AI are
>> > simply arbitrary data to be manipulated. John Searle's Chinese Room
>> > (see Wikipedia) argument effectively shows why manipulation of
>> > ungrounded symbols is nothing but raw computation with no
>> > understanding of the symbols in question.
>>
>> Searle's Chinese Room argument is one of those things that makes me
>> wonder if I'm living in the same (real or virtual) reality as
>> everyone else. Everyone seems to take it very seriously, but to me,
>> it seems like a transparently meaningless argument.
>>
>> It's equivalent to saying that understanding cannot be decomposed;
>> that you don't get understanding (the external perspective) without
>> using understanding (the person or computer inside the room). I
>> don't see any reason why this should be true. How to do it is what
>> AI research is all about.
>>
>> To look at it another way, it seems to me that the Chinese Room is
>> exactly equivalent to saying "AI is impossible." Until we actually
>> get AI working, I can't really disprove that statement, but there's
>> no reason I should accept it either.
>>
>> Yet smarter people than I seem to take the Chinese Room completely
>> seriously, so maybe I'm just not seeing it.

