Terren,
I agree. Searle's responses are inadequate, and the whole thought
experiment fails to prove his point. I think it also fails to prove
your point, for the same reason.

--Abram

On 8/5/08, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> Neither of those objections applies to what I'm saying. I agree that the
> system as a whole does not have understanding, but that's not what I'm
> asserting. I'm making the specific claim that an emergent aspect of the
> system is the agent of understanding. The brain simulator defense suffers
> from the same problem: Searle does not adequately address the role of
> emergence.
>
> Terren
>
> --- On Tue, 8/5/08, Abram Demski <[EMAIL PROTECTED]> wrote:
>
>> From: Abram Demski <[EMAIL PROTECTED]>
>> Subject: Re: [agi] Groundless reasoning --> Chinese Room
>> To: [email protected]
>> Date: Tuesday, August 5, 2008, 6:07 PM
>> Terren,
>>
>> You and I could agree. But the Chinese Room, as a thought experiment,
>> is supposed to refute that.
>>
>> The reply you are giving is very similar to the "Systems Reply":
>>
>> http://plato.stanford.edu/entries/chinese-room/#4.1
>>
>> "Searle's response to the Systems Reply is simple: in principle, the
>> man can internalize the entire system, memorizing all the
>> instructions, doing all the calculations in his head. He could then
>> leave the room and wander outdoors, perhaps even conversing in
>> Chinese. But he still would have no way to attach "any meaning to the
>> formal symbols". The man would now be the entire system, yet he still
>> would not understand Chinese. For example, he would not know the
>> meaning of the Chinese word for hamburger. He still cannot get
>> semantics from syntax. (See below the section on Syntax and
>> Semantics)."
>>
>> http://plato.stanford.edu/entries/chinese-room/#4.3 could also be
>> taken to apply to your response, but I won't quote that one.
>>
>> Sincerely,
>> Abram Demski
>>
>>
>> On Tue, Aug 5, 2008 at 1:50 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>> >
>> > The Chinese Room argument counters only the assertion that the
>> > computational mechanism that manipulates symbols is capable of
>> > understanding. But in more sophisticated approaches to AGI, the
>> > computational mechanism is not the agent; it's merely a platform.
>> >
>> > Take the OpenCog design. See in particular:
>> >
>> > http://www.opencog.org/wiki/OpenCogPrime:EmergenceOverview
>> >
>> > The 'phenomenal self' emerges as a consequence of interaction with
>> > the environment and the continuous search for explanations of
>> > behavior (including its own). It is this emergent self that is the
>> > agent of understanding. The computational framework that facilitates
>> > the interaction is merely a platform. Nobody would say that the
>> > computer manipulating the symbols has understanding (in accordance
>> > with the Chinese Room), but it is possible that someday we'd agree
>> > that OpenCog the *agent* has achieved it.
>> >
>> > Terren
>> >
>> > --- On Tue, 8/5/08, Abram Demski <[EMAIL PROTECTED]> wrote:
>> >> The original argument was put forward to show that all AI is
>> >> impossible, not just symbolic AI. I can see why someone might take
>> >> it the other way, but I don't think it works; the complicated
>> >> instruction books inside the room could implement either a symbolic
>> >> AI or a nonsymbolic one. The details would need to be changed;
>> >> perhaps you want video input rather than Chinese input, and
>> >> robot control as output. And we'd need some silly trick like a
>> >> time warp to make the robot move in real time. But I think Searle
>> >> would still stick to his guns and say that the person in the room
>> >> does not comprehend the data he is manipulating (the 1s and 0s from
>> >> the video feed), therefore no understanding occurs.
>> >>
>> >> So, Terren, in my opinion you should drop the Chinese Room in
>> >> connection with your argument. It simply has too much historical
>> >> baggage.
>> >>
>> >> -Abram
>> >>
>> >>
>> >> -------------------------------------------
>> >> agi
>> >> Archives:
>> https://www.listbox.com/member/archive/303/=now
>> >> RSS Feed:
>> https://www.listbox.com/member/archive/rss/303/
>> >> Modify Your Subscription:
>> >> https://www.listbox.com/member/?&;
>> >> Powered by Listbox: http://www.listbox.com

