On 5/3/07, DEREK ZAHN <[EMAIL PROTECTED]> wrote:

Ben Goertzel writes:

>[Ben's research uses] a virtual robot in a sim world rather than a
>physical robot in the real world.

Does your software get as input a rendered (but still visual) view of the
sim world, or does it have access to higher-level information about the
simulation? If the latter, I'm curious roughly what sort of representation
you are using for the "sense" input.



We have three options we can use:
-- object vision
-- polygon vision
-- voxel vision

We don't do "screen scraping"; i.e., all three of the options above involve
the system directly identifying things (objects, polygons, or voxels) at a
certain distance and direction.
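
To make that concrete, here is a rough sketch in Python of the kind of
structured percepts the three modes might deliver. This is purely
illustrative; the class and field names are my own invention, not the
actual API:

from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical percept records (illustrative only): each vision mode hands
# the system structured entities with agent-relative distance and direction,
# rather than a rendered pixel buffer.

@dataclass
class ObjectPercept:
    # Object vision: whole named objects.
    object_id: str                   # e.g. "ball_17"
    object_type: str                 # e.g. "ball"
    distance: float                  # meters from the agent
    direction: Tuple[float, float]   # (azimuth, elevation) in radians

@dataclass
class PolygonPercept:
    # Polygon vision: visible surface polygons.
    vertices: List[Tuple[float, float, float]]  # agent-relative 3D coords
    distance: float                  # distance to the polygon centroid
    direction: Tuple[float, float]

@dataclass
class VoxelPercept:
    # Voxel vision: occupied cells of a 3D grid.
    center: Tuple[float, float, float]  # agent-relative voxel center
    size: float                      # voxel edge length in meters
    distance: float
    direction: Tuple[float, float]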

In robotics, of course, to get this kind of info one needs either stereo
vision or camera + lidar input, plus some postprocessing.
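
For the stereo case, the heart of that postprocessing is triangulation from
disparity. A minimal sketch, assuming rectified cameras and the standard
pinhole model (depth Z = f * B / disparity); the function and parameter
names are mine:

import math

def stereo_point(f_px, baseline_m, x_left, x_right, y, cx, cy):
    """Recover agent-relative distance and direction from one stereo match.

    f_px: focal length in pixels; baseline_m: camera separation in meters;
    (x_left, y) and (x_right, y): matched pixel coords in the two rectified
    images; (cx, cy): principal point. Image y grows downward.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        return None  # no valid match, or point effectively at infinity
    z = f_px * baseline_m / disparity        # depth along the optical axis
    x = (x_left - cx) * z / f_px             # rightward offset from camera
    y3 = (y - cy) * z / f_px                 # downward offset from camera
    distance = math.sqrt(x * x + y3 * y3 + z * z)
    azimuth = math.atan2(x, z)
    elevation = math.atan2(-y3, math.hypot(x, z))
    return distance, azimuth, elevation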


I know there aren't that many AGI projects, but I wonder if any of them are
actually feeding real raw sensor input to an autonomous agent.


Well, there are plenty of robotics projects out there, e.g. John Weng's SAIL
project...

http://www.cse.msu.edu/~weng/

-- Ben G
