Re: [agi] Educating an AI in a simulated world

2003-07-12 Thread Philip Sutton
Ben,

I think there's a prior question to settle before a Novamente learns 
how to perceive/act through an agent in a simulated world.

I think the first issue is for Novamente to discover that, as an intrinsic 
part of its nature, it can interact with the world via more than one agent 
interface.

Biological intelligences are born with one body and a predetermined set 
of sensors and actuators.  Later, humans learn that we can extend our 
powers via technologically added sensors and actuators.

But an AGI is a much more plastic beast at the outset - it can be 
hooked up to any number of sensor/actuator sets and combinations, and 
these can exist in the real world or in virtual reality.

My guess is that it might be useful for an AGI to learn from the outset 
that it needs to make conscious choices about which sensor/actuator 
set to use when trying to interact with the world 'out there'.

To reduce early learning confusion, it might be useful initially to 
give the AGI only two choices - between an agent that is a fixed-location 
box and an agent that is mobile - but with similar sensor sets, so that 
it can fairly quickly learn that there is a relationship between what it 
perceives/learns via each sensor/actuator set.  (Bilingual children often 
learn to speak quite a bit later than monolingual children - the young 
AGI shouldn't have its early learning hurdles set too high.)
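
To make the idea concrete, here's a minimal sketch of the two-body 
setup in Python (purely illustrative - every class and method name 
below is invented, none of it is Novamente's actual design):

    from abc import ABC, abstractmethod

    class AgentBody(ABC):
        """A sensor/actuator set the AGI can choose to act through."""

        @abstractmethod
        def sense(self) -> dict:
            """Return readings from this body's sensors."""

        @abstractmethod
        def act(self, command: str) -> None:
            """Execute an actuator command, if this body supports it."""

    class FixedBox(AgentBody):
        """Stationary agent: fixed location, no locomotion."""
        def sense(self) -> dict:
            return {"vision": None, "audio": None}  # same modalities as MobileAgent
        def act(self, command: str) -> None:
            if command not in ("pan_camera", "emit_sound"):
                raise ValueError("a fixed box cannot move")

    class MobileAgent(AgentBody):
        """Mobile agent: identical sensor modalities, plus movement."""
        def sense(self) -> dict:
            return {"vision": None, "audio": None}
        def act(self, command: str) -> None:
            pass  # also accepts movement commands: "forward", "turn_left", ...

Because both bodies expose the same sensor modalities, whatever the 
AGI learns through one body transfers readily to the other - which is 
exactly why the sensor sets should be kept similar at first.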

I guess what I've said above only matters if you are going to let a 
Novamente persist for a long period of time, i.e. you don't just reset 
it to the factory settings every time you run a learning session.  If 
the Novamente persists as an entity for any length of time, then its 
early learning is going to play a huge role in shaping its 
capabilities and personality.

On a different matter, I think it would be good for the AGI initially 
to learn to live in a world that is governed by the laws of physics, 
chemistry, ecology, etc.  So, although the best initial learning 
environment might be a virtual world (mainly to reduce the need for 
massive sensory processing power), I think that world should simulate 
the bounded/non-magical nature of the real world we live in.
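
One cheap way to give a simulation that bounded/non-magical flavour 
(a toy sketch with invented names and deliberately crude physics) is 
to make every action draw on a finite energy budget and to forbid 
action at a distance:

    class BoundedWorld:
        """Toy world enforcing locality and a finite energy budget."""

        MAX_STEP = 1.0  # nothing moves arbitrarily far in one tick

        def __init__(self, energy: float):
            self.energy = energy  # finite resource, as in the real world

        def move(self, old_pos: float, new_pos: float, cost: float) -> float:
            if abs(new_pos - old_pos) > self.MAX_STEP:
                raise ValueError("locality violated: no teleportation")
            if cost > self.energy:
                raise ValueError("energy budget exhausted")
            self.energy -= cost  # every action has a real cost
            return new_pos

However crude, constraints like these teach the young AGI two lessons 
a magical world never would: resources are conserved, and effects are 
local.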

Even if an AGI chooses to live in a non-bounded/magical virtual world 
most of the time in later life, it needs to know that its fundamental 
existence is tied to the real world: it's going to need non-magical 
computational power, that's going to need real physical energy, and 
its dependence on the real world has consequences for the other 
entities that live there - i.e. you and me, a few billion other 
people, and some tens of millions of other forms of life.

Cheers, Philip



RE: [agi] Educating an AI in a simulated world

2003-07-12 Thread Ben Goertzel


It's an interesting idea, to raise Novababy knowing that it can adopt
different bodies at will.  Clearly this will lead to a rather different
psychology from the one we see among humans --- making the in-advance
design of educational environments particularly tricky!

On the other hand, creating a virtual world that truly obeys the laws
of physics, chemistry and so forth is simply not plausible given the
current state of physics, chemistry etc.  Of course, we can make better
or worse approximations.  And, just as with the multiple-bodies idea, I
think a multiple-virtual-worlds plan makes sense.  The AGI will then
grow up without identifying with a specific body, and also without
identifying with a particular type of world!
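
To make the rotation concrete, here is a rough sketch (the Mind class 
and its attach() method are entirely hypothetical, not Novamente's 
real API):

    import itertools

    class Mind:
        """Stand-in for the AGI core; attach() is an invented API."""

        def attach(self, body, world):
            self.body, self.world = body, world  # bind to one embodiment

        def run_episode(self):
            pass  # the perceive/act learning loop would go here

    def curriculum(mind: Mind, bodies: list, worlds: list, sessions: int):
        # Cycle through every body/world pairing, so no single
        # combination stays current long enough to feel like "the" world.
        combos = itertools.cycle(itertools.product(bodies, worlds))
        for _ in range(sessions):
            body, world = next(combos)
            mind.attach(body, world)
            mind.run_episode()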

more later,
Ben


