Hi Abram,

Sorry, your message did slip through the cracks. I intended to respond 
earlier... here goes.

--- On Wed, 8/6/08, Abram Demski <[EMAIL PROTECTED]> wrote:
> <[EMAIL PROTECTED]> wrote:
> I explained somewhat in my first reply to this thread.
> Basically, as I
> understand you, you are saying that the original chinese
> room does not
> have understanding, but if we modify the argument to
> connect it up to
> a robot with adequate senses, it could have understanding
> (if the
> human inside could work fast enough to show it). But, if I
> am willing
> to grant that such a robot has understanding (despite the
> human
> controller having no understanding of the data being
> manipulated),
> then I may very well be willing to grant that the original
> Chinese
> room has understanding (as I am willing to grant).

This is the crux of the emergence response to the Chinese Room. The question 
boils down to this: how can the robot have understanding while the processor 
does not?

The insight needed to see that this is possible is that there are multiple, 
utterly independent levels of description in play. At the local level you have 
the processor, blindly following instructions, manipulating data, and so on. 
At the global level, a simulation is going on (whether the robot is physical or 
virtual). In other words, a simulated reality has *emerged* as a consequence of 
executing a sophisticated program. The agent being simulated, and the virtual 
environment it is simulated in, obey a set of rules that is totally orthogonal 
to the set of rules at the local level. 

There is nothing in particular about what the processor is doing at that local 
level of description that facilitates understanding at the global level (not 
unlike studying the behavior of individual neurons to try to understand the 
global 'mind' phenomenon). Understanding, rather, is a consequence of the 
global-level description of the emergent entities and how they interact with 
one another, and it cannot be understood strictly in terms of the local level. 
It's irreducible.

For instance, we may be simulating an agent that has senses, can effect 
actions, and has some kind of dynamic memory and cognitive architecture that 
allows it to process a kind of ongoing experience. The Novamente design is one 
that could presumably lead to grounded understanding (I assume, anyway, based 
on BG's assertions that it's similar enough to OpenCog), because the design 
enables (simulated) experience and the ability to structure new knowledge based 
ultimately on what it experiences (i.e., grounding). You can argue about the 
definition/mechanism of understanding, grounding, and so on, but that's a 
separate argument. The key point is that it is a mistake to in any way 
attribute the global phenomenon of understanding by the emergent agent to the 
local processor that is just blindly executing instructions. 

> I do distrust some philosophy, but other issues I think are
> very
> important. For example, I am very interested in the
> foundations of
> mathematics.
> 
> -Abram
 
Skepticism of the content of philosophy is certainly justified, but skepticism 
of the need for proficiency in it is not, if you're an AI researcher.

Terren

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com
