http://www.rec.ri.cmu.edu/about/news/11_01_minds.php


>  Recognizing and predicting human activity in video footage is a
> difficult problem.  People do not all perform the same action in the same
> way.   Different actions may look very similar on video.  And videos of the
> same action can vary wildly in appearance due to lighting, perspective,
> background, the individuals involved, and more.
>
> To minimize the effects of these variations, Carnegie Mellon's Mind’s Eye
> software will generate 3D models of the human activities and match these
> models to the person’s motion in the video.  It will compare the video
> motion to actions it’s already been trained to recognize (such as walk,
> jump, and stand) and identify patterns of actions (such as pick up and
> carry).  The software examines these patterns to infer what the person in
> the video is doing.  It also makes predictions about what is likely to
> happen next and can guess at activities that might be obscured or occur
> off-camera.
>
This project's approach is to use 3D simulation to detect and classify
behavior, and then generate symbolic information about the events that were
observed. I'm encouraged to see work being done on this stage of cognition,
since I see perception as the "missing link" that's holding AGI back.
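To make the pipeline concrete: the article describes recognizing low-level actions (walk, jump, stand) and then grouping them into higher-level activity patterns (pick up and carry) expressed symbolically. Here's a minimal toy sketch of that last, symbolic stage in Python. Everything in it is illustrative: the action labels, the pattern definitions, and the function names are my own assumptions, not CMU's actual system, and the hard part (matching 3D models to video to produce the action labels) is simply assumed as input.

```python
# Toy sketch of the symbolic stage described above. The recognition stage
# (matching 3D models to video motion) is assumed to have already produced
# one action label per time step; we only group labels into activities.
# All labels and patterns here are hypothetical.

# Assumed output of the (not implemented) recognition stage:
observed = ["stand", "walk", "bend", "lift", "walk", "walk"]

# Symbolic activity patterns: an activity is a contiguous run of actions.
PATTERNS = {
    "pick_up_and_carry": ["bend", "lift", "walk"],
    "approach": ["stand", "walk"],
}

def contains_run(seq, pattern):
    """True if pattern occurs in seq as a contiguous run."""
    m = len(pattern)
    return any(seq[i:i + m] == pattern for i in range(len(seq) - m + 1))

def infer_activities(actions):
    """Return the symbolic activity labels whose patterns match the stream."""
    return [name for name, pat in PATTERNS.items()
            if contains_run(actions, pat)]

print(infer_activities(observed))
```

Prediction ("what is likely to happen next") would then be a matter of finding patterns whose prefix matches the tail of the stream, which is one reason the symbolic form is convenient.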

I wonder, will a certain naysayer feel vindicated that someone else sees
simulation as vital to intelligence (and is using it to solve precisely the
problems he says it's needed to solve), or will he be annoyed that the
ultimate form the information takes is symbolic, which is compatible with
semantic nets or any number of other existing AGI approaches?



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now