Josh,

This is an interesting idea that deserves detailed discussion.

Since the 90s there has been a strand in AI research that claims that
robotics is necessary to the enterprise, based on the notion that
having a body is necessary to intelligence. Symbols, it is said, must
be grounded in physical experience to have meaning. Without such
grounding AI practitioners are deceiving themselves by calling their
Lisp atoms things like MOTHER-IN-LAW when they really mean no more
than G2250.

I think these people correctly recognized a problem in traditional AI,
though they attributed it to a wrong cause.

My opinion on this issue can be summarized as follows:

*. Meaning comes from experience, and is grounded in experience.

*. However, for AGI, this "experience" doesn't have to be "human experience".

*. Every implemented system already has a "body" --- the hardware ---
and as long as the system has input and output, it has experience that
comes from that body. Of course, since the body is not a human body,
the experience is not human experience. However, as far as this
discussion is concerned, that doesn't matter, since this kind of
experience is genuine experience that can be used to ground the
meaning of concepts.

*. The failure of traditional AI is not its use of standard computer
hardware rather than special hardware (i.e., robots), but its ignoring
of the system's experience when handling the meaning of concepts.

A more detailed discussion and a proposed solution can be found at
http://nars.wang.googlepages.com/wang.semantics.pdf

This has given rise to a plethora of silly little robots (in Minsky's
view, anyway) that scurry around the floor picking up coffee cups and
engaging in similar activities.

I also think this is not a fruitful direction for AI to move in.

My view lies somewhere between the extremes on this issue:

a) Meaning does not lie in a physical connection. I find meaning in
the concept of price-theoretical market equilibria; I've never seen,
felt, or smelled one. Meaning lies in working computational models,
and true meaning lies in ones that can make correct predictions.

As you can see from my comments and paper, I agree with your idea in
its basic spirit. However, I think the presentation above is too
vague, and far from sufficient for a semantic analysis.

b) On the other hand, the following are true:

  1. Without some connection to external constraints, there is a
     strong temptation on the part of researchers to define away the
     hard parts of the AI problem. Even with the best will in the
     world, this happens subconsciously.

Agree.

  2. The hard part is learning: the AI has to build its own world
     model. My instinct and experience to date tell me that this is
     computationally expensive, involving search and the solution of
     tough optimization problems.

Agree, though I've been avoiding the phrase "world model", because of
the intuitive picture it suggests: there is an "objective world" out
there, and the AI builds an "internal model" of it, in which concepts
represent objects and beliefs represent factual relations among
objects --- a picture you don't subscribe to, I guess.

"That deaf, dumb, and blind kid sure plays a mean pinball."

Thus Tommy. My robotics project discards a major component of robotics
that is apparently dear to the embodiment crowd: Tommy is stationary
and not autonomous. This not only saves a lot of construction but
allows me to run the AI on the biggest system I can afford (currently
ten processors) rather than having to shoehorn code and data into
something run off a battery.

A good idea. As I said above: input/output is necessary for AGI, but
no particular concrete form of it is, in principle. An AGI doesn't
have to be able to move itself around in the physical world (though it
must somehow change its environment), and doesn't have to have any
particular human sense (though it must somehow sense its environment).

Tommy, the pinball wizard kid, was chosen as a name for the system
because of a compelling, to me anyway, parallel between a pinball game
and William James' famous description of a baby's world as a
"blooming, buzzing confusion." The pinball player is in the same
position as a baby in that he has a firehose input stream of sensation
from the lights and bells of the game, but can do little but wave his
arms and legs (flip the flippers), which very rarely has any effect at
all.

Makes sense.

Tommy, the robot, consists at the moment of a pair of Firewire cameras
and the ability to display messages on the screen and receive keyboard
input -- ironically almost the exact opposite of the rock opera Tommy.
Planned for the relatively near future is exactly one "muscle:" a
single flipper. Tommy's world will not be a full-fledged pinball game,
but simply a tilted table with the flipper at the bottom.

I'd suggest adding the "muscle" as soon as possible, to get a complete
sensor-motor cycle.
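
For concreteness, here is a minimal sketch of the kind of closed
sense-think-act loop I have in mind. Everything in it (read_cameras,
decide, actuate_flipper, the ~30 Hz rate) is a placeholder of my own,
not Tommy's actual code:

# Minimal sketch of a closed sense-think-act cycle (all names and
# structure are illustrative placeholders, not Tommy's real code).

import time

def read_cameras():
    """Placeholder: grab one frame pair from the two Firewire cameras."""
    return {"left": None, "right": None}

def decide(frames, state):
    """Placeholder for the AI: update internal state, decide whether to flip."""
    state["frames_seen"] = state.get("frames_seen", 0) + 1
    return state["frames_seen"] % 100 == 0   # e.g. flip once in a while

def actuate_flipper(flip):
    """Placeholder for the single planned 'muscle'."""
    if flip:
        print("flip!")

state = {}
for _ in range(300):                 # a few seconds of operation
    frames = read_cameras()          # sense
    flip = decide(frames, state)     # think
    actuate_flipper(flip)            # act
    time.sleep(1 / 30)               # ~30 Hz cycle

The point of the sketch is only that the loop is closed: even a single
flipper lets the system observe the consequences of its own actions.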

Tommy, the scientific experiment and engineering project, is almost
all about concept formation. He gets a voluminous input stream but is
required to parse it into coherent concepts (e.g. objects, positions,
velocities). None of these concepts is he given originally. Tommy
1.0 will simply watch the world and try to imagine what happens next.

I fully agree with your focus. I guess your "concepts" are patterns or
structures formed from certain "semantic primitives" by a fixed set of
operators or connectors. I'm very interested in your choice of
primitives and operators.
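
To make my guess concrete, here is a tiny sketch of the kind of
structure I have in mind: primitive observations combined by a small
fixed set of operators into derived concepts, which are then used to
"imagine what happens next". The names and the representation are my
own assumptions, purely for illustration:

# My guess at the kind of structure involved, purely for illustration:
# primitive observations combined by a fixed set of operators into
# derived concepts, used to predict the next observation.

def form_concepts(history):
    """Combine the last two primitive observations into derived concepts."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    return {
        "position": (x1, y1),              # primitive, carried over
        "velocity": (x1 - x0, y1 - y0),    # derived by a difference operator
    }

def predict_next(concepts):
    """'Imagine what happens next' by extrapolating the derived concepts."""
    (x, y) = concepts["position"]
    (vx, vy) = concepts["velocity"]
    return (x + vx, y + vy)

# toy (x, y) observations of one tracked blob, frame by frame
history = [(0.0, 0.0), (1.0, 0.9), (2.0, 1.6)]
concepts = form_concepts(history)
print("predicted next position:", predict_next(concepts))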

The scientific rationale for this is that visual and motor skills
arrive before verbal ones both in ontogeny and phylogeny. Thus I
assume they are more basic and the substrate on which the higher
cognitive abilities are based.

Though this kind of idea is shared by many people, I think it is weak
evidence for AGI design --- we don't have to follow the evolutionary
order in AGI design, even though it can be taken as a heuristic.

Furthermore, I have a good idea of what
concepts need to be formed for competence in this area, and so I'll
have a decent chance of being able to tell if the system is going in
the right direction.

To me, this is a more justifiable reason for the project than the
previous one. ;-)

I claim that most current AI experiments that try to mine meaning out
of experience are making an odd mistake: looking at sources that are
too rich, such as natural language text found on the Internet. The
reason is that text is already a highly compressed form of data; it
takes a very sophisticated system to produce or interpret. Watching a
ball roll around a blank tabletop and realizing that it always moves
in parabolas is the opposite: the input channel is very low-entropy
(in actual information compared to nominal bits), and thus there is
lots of elbow room for even poor, early, suboptimal interpretations to
get some traction.

I don't think you have convinced me that this kind of experiment is
better than the others (such as those in NLP), but it is a good idea
and worth a try.
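
Still, the parabola case can be made concrete. A toy version of
"realizing that it always moves in parabolas" is to fit
y = a*x^2 + b*x + c to a handful of observed ball positions by least
squares; the observations below are made up for illustration:

# Toy illustration: fit a parabola to hypothetical ball positions and
# see how well it explains the data. The numbers are invented.

import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # hypothetical x positions
ys = np.array([0.1, 1.2, 4.1, 8.9, 16.2])    # hypothetical y positions

A = np.column_stack([xs**2, xs, np.ones_like(xs)])   # design matrix
coeffs, residuals, _, _ = np.linalg.lstsq(A, ys, rcond=None)
a, b, c = coeffs

print("fitted parabola: y = %.2f*x^2 + %.2f*x + %.2f" % (a, b, c))
print("residual error:", residuals[0] if residuals.size else 0.0)

A small residual is the low-entropy signal you describe: a simple
hypothesis already accounts for almost everything in the stream.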

Pei
