Terren,
I agree that the emergence response shows where the flaw is in the
Chinese Room argument. The argument fails because, although
understanding resides in neither the person, the instructions, nor the
physical room, it can still emerge from the system as a whole. That
being the case, how can the argument show that ungrounded AI cannot
work?

I am not arguing for ungrounded AI; I just don't think the Chinese
Room argument is a good argument against it.

You say yourself that understanding could occur within the original
Chinese Room experiment:

"For instance, we may be simulating an agent that has senses and can
effect actions, and has some kind of dynamic memory and cognitive
architecture that allows it to process a kind of ongoing experience.
The Novamente design is one that could presumably lead to grounded
understanding [...]"

By your own argument, then, the Chinese Room is not an example of
symbolic AI failing. Perhaps the argument could be repaired or
extended to tell against symbolic AI, but it does not do so by itself,
and in my opinion it would be clearer to construct a different
argument than to patch this one.

-Abram Demski

On Wed, Aug 6, 2008 at 1:44 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:
>
> Hi Abram,
>
> Sorry, your message did slip through the cracks. I intended to respond 
> earlier... here goes.
>
> --- On Wed, 8/6/08, Abram Demski <[EMAIL PROTECTED]> wrote:
>> I explained somewhat in my first reply to this thread.
>> Basically, as I
>> understand you, you are saying that the original chinese
>> room does not
>> have understanding, but if we modify the argument to
>> connect it up to
>> a robot with adequate senses, it could have understanding
>> (if the
>> human inside could work fast enough to show it). But, if I
>> am willing
>> to grant that such a robot has understanding (despite the
>> human
>> controller having no understanding of the data being
>> manipulated),
>> then I may very well be willing to grant that the original
>> Chinese
>> room has understanding (as I am willing to grant).
>
> This is the crux of the emergence response to the Chinese Room. The question 
> boils down to: how is it possible for the robot to have understanding but the 
> processor not to?
>
> The insight necessary to see that this is possible is that there are 
> multiple and utterly independent levels of description going on. On the local 
> level you have the processor, blindly following instructions, manipulating 
> data, and so on.  At the global level, a simulation is going on (whether it's 
> a physical or virtual robot). In other words, a simulated reality has 
> *emerged* as a consequence of executing a sophisticated program. The agent 
> being simulated and the virtual environment it is simulated in, are obeying a 
> set of rules that are totally orthogonal to the set of rules at the local 
> level.
>
> There is nothing in particular about what the processor is doing at that 
> local level of description that facilitates understanding at the global level 
> (much as studying the behavior of individual neurons alone does little to 
> explain the global 'mind' phenomenon). Understanding, rather, is a consequence of the 
> global-level description of the emergent entities and how they interact with 
> one another, and cannot be understood strictly in terms of the local level. 
> It's irreducible.
>
> For instance, we may be simulating an agent that has senses and can effect 
> actions, and has some kind of dynamic memory and cognitive architecture that 
> allows it to process a kind of ongoing experience. The Novamente design is 
> one that could presumably lead to grounded understanding (I assume, anyway, 
> based on BG's assertions that it's similar enough to OpenCog), because the 
> design enables (simulated) experience, and the ability to structure new 
> knowledge based ultimately on what it experiences (i.e. grounding). You can 
> argue with that definition/mechanism of understanding, and grounding, and so 
> on, but that's a separate argument. The key point is that it is a mistake to 
> attribute, in any way, the global phenomenon of understanding by the emergent 
> agent to the local processor that is just blindly executing instructions.
>
>> I do distrust some philosophy, but other issues I think are
>> very
>> important. For example, I am very interested in the
>> foundations of
>> mathematics.
>>
>> -Abram
>
> Skepticism of the content of philosophy is certainly justified, but 
> skepticism of the need for proficiency in it is not, if you're an AI 
> researcher.
>
> Terren
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>

