On 24/04/07, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote:

Mining plus matching, analogy, and interpolation/extrapolation. The key to
making it work is to form the abstractions that allow the robot/AI to
interpret the actions as "grasp broom; lower until it touches floor" instead
of "move hand to (x,y,z); close hand; lower hand 6.3 cm".


The level at which the data mining takes place is an important issue.  Some
mining might be done close to the surface - that is, close to the level of
direct sensing.  For example, when aligning with a waste basket, simple
visual line detection might be sufficient.  Other things may only make sense
at a deeper level, such as creating a 3D grid map and then discovering
relationships between the map and the robot's current state.
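The deeper level could be sketched roughly as follows - a toy occupancy grid with a query relating the map to the robot's state. All of the names and the cell-size choice here are illustrative assumptions, not from any real robotics library:

```python
# Toy sketch of "deep-level" mining: a coarse 3D occupancy grid that the
# robot can query against its own current state. Hypothetical names
# throughout; a real system would use calibrated sensing and SLAM.

from dataclasses import dataclass

@dataclass
class RobotState:
    # Position of the robot's hand (metres) - an assumed minimal state.
    x: float
    y: float
    z: float

class GridMap3D:
    """Occupancy grid with cubic cells of side `cell` metres."""
    def __init__(self, cell=0.1):
        self.cell = cell
        self.occupied = set()  # set of (i, j, k) cell indices

    def _index(self, x, y, z):
        return (int(x // self.cell), int(y // self.cell), int(z // self.cell))

    def mark(self, x, y, z):
        """Record a sensed obstacle point in the map."""
        self.occupied.add(self._index(x, y, z))

    def is_occupied_at(self, state: RobotState):
        """One 'relationship between the map and the robot's state':
        does the cell containing the hand also contain an obstacle?"""
        return self._index(state.x, state.y, state.z) in self.occupied

grid = GridMap3D()
grid.mark(0.05, 0.05, 0.05)  # sensed obstacle point
print(grid.is_occupied_at(RobotState(0.07, 0.02, 0.09)))  # True: same cell
```

The point of the two levels is visible even in this sketch: line detection works directly on pixels, while a query like `is_occupied_at` only makes sense once sensing has been accumulated into a persistent map that the robot's state can be registered against.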



A really serious step toward AGI would be to program a robot so that it
could watch a human doing a task and then do the task. Back in the day, as a
grad student in AI in the 70's, I worked on a system to do act
interpretation, i.e. "watch" (be fed a sequence of predicate expressions
describing) a series of actions by someone and try to figure out what their
goal was and the teleological structure of their activity. The inverse of
planning, as it were. The system was frankly junk and many of my notions of
the right way to build an AI are informed by its shortcomings -- but we were
working on (one of) the right problem(s).


To be capable of doing this, the robot would need a mirror system: mapping
its own actions to the typical actions of a human, then bootstrapping its
own internal planning system by superimposing it onto other animate
entities. This is quite a sophisticated and powerful skill, and it allows
the system to develop a sense of self versus others -- useful when living in
a social group.
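The first step of such a mirror system might be sketched as a nearest-prototype match: an observed human motion is labelled with the closest entry in the robot's own action vocabulary, so the same internal labels describe both self and other. The action names, feature vectors, and distance choice below are all illustrative assumptions:

```python
# Toy sketch of action mirroring: match an observed motion (here a gross
# hand-displacement vector dx, dy, dz in metres) against prototypes of
# the robot's own actions. Purely hypothetical vocabulary and features.

import math

# Robot's own action vocabulary: label -> prototype displacement vector.
OWN_ACTIONS = {
    "grasp": (0.0, 0.0, -0.05),
    "lift":  (0.0, 0.0, 0.30),
    "sweep": (0.40, 0.0, 0.0),
}

def mirror(observed):
    """Label an observed motion with the nearest own-action prototype."""
    return min(OWN_ACTIONS,
               key=lambda label: math.dist(OWN_ACTIONS[label], observed))

print(mirror((0.35, 0.05, 0.02)))  # "sweep": the closest prototype
```

A real mirror system would of course need viewpoint transformation and body-correspondence solving before any such matching, but the sketch shows the core move: interpreting another agent's behaviour through one's own action repertoire.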

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
