To differentiate this from the rest of the robotics crowd, you need to
avoid building a specialised pinball-playing robot.  If the machine can
learn and form concepts based upon its experiences, it should be able
to do so with any kind of game, provided that suitable actuators are
attached.  It is very easy to fall into the trap of building something
which is just a physical expert system.

From long experience of trying to do things like that, I think there is
no getting around the fact that in order to be truly general you have
to build world models upon which reasoning systems can act, which
means getting into the tricky business of modelling sensors and
probabilistic interactions.  It is possible to take much simpler
Brooksian approaches, but in those cases what you always end up with
is a brittle expert system.  That might be OK if all you're trying to
do is model insect-like intelligence operating within some well-defined
niche, but ideally we want our robots to be smarter than cockroaches.




On 11/05/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
Since the 90s there has been a strand in AI research that claims that
robotics is necessary to the enterprise, based on the notion that
having a body is necessary to intelligence. Symbols, it is said, must
be grounded in physical experience to have meaning. Without such
grounding AI practitioners are deceiving themselves by calling their
Lisp atoms things like MOTHER-IN-LAW when they really mean no more
than G2250.

This has given rise to a plethora of silly little robots (in Minsky's
view, anyway) that scurry around the floor picking up coffee cups and
performing similar activities.

My view lies somewhere between the extremes on this issue:

a) Meaning does not lie in a physical connection. I find meaning in
the concept of price-theoretical market equilibria; I've never seen,
felt, or smelled one. Meaning lies in working computational models,
and true meaning lies in ones that can make correct predictions.

b) On the other hand, the following are true:

  1. Without some connection to external constraints, there is a
     strong temptation on the part of researchers to define away the
     hard parts of the AI problem. Even with the best will in the
     world, this happens subconsciously.

  2. The hard part is learning: the AI has to build its own world
     model. My instinct and experience to date tell me that this is
     computationally expensive, involving search and the solution of
     tough optimization problems.


"That deaf, dumb, and blind kid sure plays a mean pinball."


Thus Tommy. My robotics project discards a major component of robotics
that is apparently dear to the embodiment crowd: Tommy is stationary
and not autonomous. This not only saves a lot of construction but
allows me to run the AI on the biggest system I can afford (currently
ten processors) rather than having to shoehorn code and data into
something run off a battery.

Tommy, the pinball wizard kid, was chosen as a name for the system
because of a compelling, to me anyway, parallel between a pinball game
and William James' famous description of a baby's world as a
"blooming, buzzing confusion." The pinball player is in the same
position as a baby in that he has a firehose input stream of sensation
from the lights and bells of the game, but can do little but wave his
arms and legs (flip the flippers), which very rarely has any effect at
all.

Tommy, the robot, consists at the moment of a pair of Firewire cameras
and the ability to display messages on the screen and receive keyboard
input -- ironically almost the exact opposite of the rock opera Tommy.
Planned for the relatively near future is exactly one "muscle:" a
single flipper. Tommy's world will not be a full-fledged pinball game,
but simply a tilted table with the flipper at the bottom.


Tommy, the scientific experiment and engineering project, is almost
all about concept formation. He gets a voluminous input stream but is
required to parse it into coherent concepts (e.g. objects, positions,
velocities, etc). None of these concepts is he given originally. Tommy
1.0 will simply watch the world and try to imagine what happens next.
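(To make "watch the world and try to imagine what happens next" concrete, here is a minimal sketch of that loop -- not Tommy's actual code, just an illustrative constant-velocity predictor: estimate an object's velocity from its last two observed positions by finite differences and extrapolate one frame ahead. The function name and data format are my own invention.)

```python
# Hypothetical sketch of "watch and predict what happens next":
# estimate velocity from the last two observed positions and
# extrapolate one frame ahead. Illustrative only.

def predict_next(positions):
    """Predict the next (x, y) from a list of observed positions,
    using a constant-velocity (first-difference) model."""
    if len(positions) < 2:
        return positions[-1]  # not enough history: predict no motion
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (x1 + (x1 - x0), y1 + (y1 - y0))

# A ball drifting 2 units right and 1 unit down per frame:
observed = [(0, 0), (2, -1), (4, -2)]
print(predict_next(observed))  # -> (6, -3)
```

The real problem, of course, is that Tommy must first form the concepts "object", "position", and "velocity" before anything like this predictor can even be stated.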

The scientific rationale for this is that visual and motor skills
arrive before verbal ones both in ontogeny and phylogeny. Thus I
assume they are more basic and the substrate on which the higher
cognitive abilities are based.  Furthermore I have a good idea what
concepts need to be formed for competence in this area, and so I'll
have a decent chance of being able to tell if the system is going in
the right direction.

I claim that most current AI experiments that try to mine meaning out
of experience are making an odd mistake: looking at sources that are
too rich, such as natural language text found on the Internet. The
reason is that text is already a highly compressed form of data; it
takes a very sophisticated system to produce or interpret it. Watching
a ball roll around a blank tabletop and realizing that it always moves
in parabolas is the opposite: the input channel is very low-entropy
(in actual information compared to nominal bits), and thus there is
lots of elbow room for even poor, early, suboptimal interpretations to
get some traction.
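(A quick illustration of why the parabola regularity is so easy to find: a ball on a tilted, frictionless table undergoes constant acceleration, so its sampled positions have constant steps in x and constant *second* differences in y -- a signature a learner can detect with almost no machinery. The tilt angle, timestep, and velocities below are made-up numbers, not anything from the project.)

```python
# Simulate a ball on a tilted table under constant effective gravity
# and show that the second differences of y are constant -- the
# discrete signature of a parabola. Parameters are illustrative.

g_eff = 9.8 * 0.5   # effective gravity on a ~30-degree tilt (sin 30 = 0.5)
dt = 0.1            # sampling interval, seconds
vx, vy = 1.0, 0.0   # initial velocity: sideways drift, no downhill speed

xs, ys = [], []
x = y = 0.0
for _ in range(10):
    xs.append(x)
    ys.append(y)
    x += vx * dt
    vy += g_eff * dt   # downhill speed grows linearly...
    y -= vy * dt       # ...so y falls quadratically

# Constant second differences in y are what a parabola looks like
# to a learner that only sees sampled positions:
d2 = [round(ys[i + 2] - 2 * ys[i + 1] + ys[i], 6) for i in range(len(ys) - 2)]
print(d2)  # every entry is -g_eff * dt**2 = -0.049
```

One noisy, regular channel like this gives a weak early hypothesis ("second differences are constant") something to grab onto, in a way that a stream of English sentences never could.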

Josh



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
