Yes, thank you, a meaningful and very interesting project. I discussed this
kind of system with a friend of mine half an hour ago.

On 5/11/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:


  2. The hard part is learning: the AI has to build its own world
     model. My instinct and experience to date tell me that this is
     computationally expensive, involving search and the solution of
     tough optimization problems.


This must be the central part of your project. I'm very interested in how
you approach the following problems:
- Concept and pattern representation. If you use some sort of graphical
model, what types of edges, nodes, relations? Something like Ben's SMEPH?
- Concept creation. Do you have a single method in mind or multiple methods,
maybe working simultaneously? Data mining methods, statistical methods,
genetic programming, NNs (e.g. Boltzmann machines), ...?
- Concept revision/optimization. You mention you use search techniques;
could you be a little more specific (or give references)? Will there be
something like a wake/sleep cycle, or is optimization done in real time?

Also, why did you choose a physical implementation and not a virtual one?
Simply because it's more interesting or are there other motives?

These kinds of projects are, of course, very complex and multi-faceted, but
worth it because they force you to think about essential things like model
creation, concept formation, and model optimization.

(BTW, I ordered your new book "Beyond AI" this week and am looking forward
to reading it.)

Please keep us updated on your project.

Kind regards,
Durk Kingma




"That deaf, dumb, and blind kid sure plays a mean pinball."


Thus Tommy. My robotics project discards a major component of robotics
that is apparently dear to the embodiment crowd: Tommy is stationary
and not autonomous. This not only saves a lot of construction but
allows me to run the AI on the biggest system I can afford (currently
ten processors) rather than having to shoehorn code and data into
something run off a battery.

Tommy, the pinball wizard kid, was chosen as a name for the system
because of a compelling, to me anyway, parallel between a pinball game
and William James' famous description of a baby's world as a
"blooming, buzzing confusion." The pinball player is in the same
position as a baby in that he has a firehose input stream of sensation
from the lights and bells of the game, but can do little but wave his
arms and legs (flip the flippers), which very rarely has any effect at
all.

Tommy, the robot, consists at the moment of a pair of Firewire cameras
and the ability to display messages on the screen and receive keyboard
input -- ironically almost the exact opposite of the rock opera Tommy.
Planned for the relatively near future is exactly one "muscle": a
single flipper. Tommy's world will not be a full-fledged pinball game,
but simply a tilted table with the flipper at the bottom.


Tommy, the scientific experiment and engineering project, is almost
all about concept formation. He gets a voluminous input stream but is
required to parse it into coherent concepts (e.g. objects, positions,
velocities). None of these concepts is he given originally. Tommy
1.0 will simply watch the world and try to imagine what happens next.
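
As a rough sketch of what that watch-and-predict loop might look like (purely
illustrative; the trivial per-pixel linear "world model" and the names here
are stand-ins, not the actual system):

import numpy as np

def predict_next(prev_frame, curr_frame):
    # Placeholder world model: assume each pixel keeps changing at its
    # current rate, i.e. linear extrapolation from the last two frames.
    return curr_frame + (curr_frame - prev_frame)

def surprise(predicted, actual):
    # Mean squared prediction error -- the quantity the learner would
    # try to drive down as its concepts improve.
    return float(np.mean((predicted - actual) ** 2))

def watch(frames):
    # Run the predict/compare loop over a stream of camera frames.
    errors = []
    for t in range(2, len(frames)):
        guess = predict_next(frames[t - 2], frames[t - 1])
        errors.append(surprise(guess, frames[t]))
    return errors

Feed it the camera stream as a list of numpy arrays, and the error sequence
tells you how well the current model "imagines what happens next."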

The scientific rationale for this is that visual and motor skills
arrive before verbal ones both in ontogeny and phylogeny. Thus I
assume they are more basic and the substrate on which the higher
cognitive abilities are based.  Furthermore I have a good idea what
concepts need to be formed for competence in this area, and so I'll
have a decent chance of being able to tell if the system is going in
the right direction.

I claim that most current AI experiments that try to mine meaning out
of experience are making an odd mistake: looking at sources that are
too rich, such as natural language text found on the Internet. The
reason is that text is already a highly compressed form of data;
producing or interpreting it takes a very sophisticated system. Watching a
ball roll around a blank tabletop and realizing that it always moves
in parabolas is the opposite: the input channel is very low-entropy
(in actual information compared to nominal bits), and thus there is
lots of elbow room for even poor, early, suboptimal interpretations to
get some traction.
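
To make the parabola point concrete, here is a toy illustration (my own
hand-rolled example, not part of Tommy): an ordinary least-squares quadratic
fit recovers the trajectory from a dozen noisy observations, which is exactly
the kind of traction a richer channel like text never offers so cheaply.

import numpy as np

# Simulated camera samples of the ball: y = a*x^2 + b*x + c plus sensor noise.
true_a, true_b, true_c = -0.5, 2.0, 0.1
rng = np.random.default_rng(1)
x = np.linspace(0.0, 3.0, 12)
y = true_a * x**2 + true_b * x + true_c + rng.normal(0.0, 0.02, x.size)

# "Poor, early, suboptimal interpretation": guess that some quadratic fits,
# and solve for it by least squares.
a, b, c = np.polyfit(x, y, deg=2)
print(f"recovered parabola: y = {a:.2f} x^2 + {b:.2f} x + {c:.2f}")

# Predict where the ball will be at the next sample point.
x_next = 3.25
print("predicted next y:", a * x_next**2 + b * x_next + c)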

Josh


