Eric Baum wrote:
-- Assume there will be persistent objects in the 3D space
This is not innate. Babies don't recognize that when an object is
hidden from view it still exists.
Ben> I'm extremely familiar with the literature on object permanence;
Ben> and the truth seems to be that babies **do** have some innate
Ben> predisposition that makes it easier for them to learn object
Ben> permanence, even though they also do need to learn it.
Ben> However, in a relatively stripped-down perceptual environment,
Ben> Novamente learns object permanence
What precisely does this mean?
Actually we have not done this exact experiment yet, but we have done
similar ones and thought this one through in detail... and I'm
confident it would go as planned.
Consider Novamente as embodied in a simulated humanoid in a 3D world.
Put it in a room
full of various objects: couches, chairs, polygons, etc. (This is the
AGISim apartment that
we have used for a few learning experiments already.)
Then, we can do a couple of object-permanence-related experiments:
1) We can do the Piagetian A-not-B experiment, in which a toy is
iteratively hidden in one of two boxes, A or B, and then retrieved.
NM has to learn that the toy is going to be found in the box where it
was hidden.
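To make the A-not-B task concrete, here is a toy sketch (my own
illustration, not actual NM code) of the rule to be learned,
contrasted with the perseverative rule behind the classic infant
A-not-B error:

```python
import random

def run_trials(predict, n=100, seed=0):
    """Score a prediction rule on simulated A-not-B trials: on each
    trial the toy is hidden in box 'A' or 'B', and the rule predicts
    where it will be found, given the hiding location and the
    location of the previous successful retrieval."""
    rng = random.Random(seed)
    prev_found = 'A'
    correct = 0
    for _ in range(n):
        hidden = rng.choice(['A', 'B'])
        if predict(hidden, prev_found) == hidden:
            correct += 1
        prev_found = hidden  # the toy is always retrieved where hidden
    return correct / n

# The rule NM must learn: the toy is found where it was hidden.
object_permanence = lambda hidden, prev_found: hidden

# The classic A-not-B error: reach where the toy was found last time.
perseverative = lambda hidden, prev_found: prev_found

print(run_trials(object_permanence))  # 1.0
print(run_trials(perseverative))      # roughly 0.5
```

The point of the contrast: the hiding location, not the history of
retrievals, is the predictive cue the learner has to latch onto.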
2) We are currently playing fetch with NM. So, suppose we play fetch
with it, and reward it more the more quickly it retrieves the ball
that the teacher throws ...
Suppose Novamente is standing in front of the couch, and the teacher
is standing to the left of the couch. Suppose the teacher says
"fetch" (a command NM knows) and then rolls the ball behind the
couch, fast.
Can Novamente figure out that the ball, if it's rolled fast enough,
is going to come out from behind the couch on the right side ... so
that, to get the ball fastest, it should run to the right side of the
couch and not the left?
Can it then generalize this knowledge to other scenarios, so that
when it sees a ball roll behind some other object, it doesn't have to
learn the lesson all over again?
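The prediction this requires can be sketched with a toy friction
model (my own assumed numbers, not NM's actual inference): the ball
emerges on the right only if it is rolled fast enough to clear the
couch before friction stops it.

```python
MU_G = 2.0  # assumed rolling deceleration (friction * gravity), m/s^2

def stopping_distance(speed):
    """Distance a rolled ball travels before stopping: v^2 / (2*a)."""
    return speed ** 2 / (2 * MU_G)

def emerges_on_right(speed, couch_width):
    """True if the ball rolls far enough to clear the occluder."""
    return stopping_distance(speed) > couch_width

# Rolled fast (5 m/s) behind a 2 m couch: run to the right side.
print(emerges_on_right(5.0, 2.0))  # True
# Rolled slowly (1 m/s): the ball stops behind the couch.
print(emerges_on_right(1.0, 2.0))  # False
```

Generalizing to other occluders then just means swapping in a
different width, rather than relearning the rule from scratch.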
This kind of stuff is well within the cognitive capacity of the
current NM system, and probably of a lot of other existing AI systems
as well -- I'm not saying it's a particularly grand achievement! We
took the time to write out the particular inferences NM would need to
make to do this, although we haven't yet taken the time to write a
test script to run the experiments in the sim world (or to run them
with a human teacher).
The point is that learning that objects are permanent (in the
Piagetian sense) turns out not to be a terribly difficult
uncertain-inference problem, as we found by formulating it as such in
a Novamente context.
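As a rough illustration of what "uncertain inference" means here (my
own toy Bayesian formulation, not Novamente's actual inference
rules): having seen the ball roll behind the couch and not reappear,
the agent can weigh a prior that objects persist against how likely
the ball would be unseen in either case.

```python
def posterior_still_there(prior_persist=0.9,
                          p_unseen_if_there=0.95,
                          p_unseen_if_gone=1.0):
    """Bayes update: P(ball is behind the couch | not visible now).
    All three probabilities are assumed values for illustration."""
    num = p_unseen_if_there * prior_persist
    den = num + p_unseen_if_gone * (1 - prior_persist)
    return num / den

print(round(posterior_still_there(), 3))  # 0.895
```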
Note that the above experiments assume that "object recognition" has
already been solved. In NM's current situation, object recognition is
not very hard, because it lives in a sim world with far less
perceptual data than the real world. A human baby must learn object
recognition and object permanence all wrapped up together, which is
perhaps harder.
-- Ben
-----
This list is sponsored by AGIRI: http://www.agiri.org/email