Thanks Aaron.

It’s good to know that people are thinking about these things – but I doubt 
that they will achieve much. Motion understanding is so dependent on having a 
body with which to simulate the motion, on the complexity of your world 
model/knowledge, and on how many different kinds of bodies and body motions 
you are familiar with. I was then going to say: “better to start with a robot 
near the level of a worm than with a humanoid robot/mind” – but even a worm 
probably has a relatively complex world model.

Symbols – or some kind of crude tags to begin with – are useful, but it’s the 
images and image processing that are primary and fundamental for real-world 
intelligence – and we are still technologically extremely ignorant about all 
of that.

If you think about it, it’s quite mad to believe that you can be intelligent 
about the world simply by playing around with logical symbols, when you can’t 
even understand the simplest of real-world scenes.

From: Aaron Hosford 
Sent: Tuesday, October 30, 2012 2:30 PM
To: AGI 
Subject: [agi] Simulation for Perception, Symbols for Understanding

http://www.rec.ri.cmu.edu/about/news/11_01_minds.php

  Recognizing and predicting human activity in video footage is a difficult 
problem.  People do not all perform the same action in the same way.   
Different actions may look very similar on video.  And videos of the same 
action can vary wildly in appearance due to lighting, perspective, background, 
the individuals involved, and more.  

  To minimize the effects of these variations, Carnegie Mellon's Mind’s Eye 
software will generate 3D models of the human activities and match these models 
to the person’s motion in the video.  It will compare the video motion to 
actions it’s already been trained to recognize (such as walk, jump, and stand) 
and identify patterns of actions (such as pick up and carry).  The software 
examines these patterns to infer what the person in the video is doing.  It 
also makes predictions about what is likely to happen next and can guess at 
activities that might be obscured or occur off-camera.  

This project's approach is to use 3D simulation to detect and classify 
behavior, and then generate symbolic information about the events that were 
observed. I'm encouraged to see someone doing work on this stage of cognition, 
as I see perception as the "missing link" that's stopping AGI from developing.
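The match-then-symbolize idea described above – compare observed motion 
against trained action models, then emit a symbolic label for downstream 
reasoning – can be illustrated with a toy sketch. This is not the Mind's Eye 
code; all names, the 1-D "pose features", and the nearest-template matching 
are hypothetical stand-ins for the full 3D-model comparison the article 
describes:

```python
# Toy illustration: classify an observed motion sequence by nearest
# trained action template, then emit a symbolic event description.

# Hypothetical templates: each action maps to a short sequence of 1-D
# "pose features" (stand-ins for joint angles / 3D model parameters).
TEMPLATES = {
    "walk":  [0.0, 0.5, 1.0, 0.5, 0.0],
    "jump":  [0.0, 1.0, 2.0, 1.0, 0.0],
    "stand": [0.0, 0.0, 0.0, 0.0, 0.0],
}

def distance(a, b):
    """Mean squared distance between two equal-length feature sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def classify(observed):
    """Match the observed motion to the nearest trained action and
    return a symbolic record suitable for a semantic net or logic system."""
    label = min(TEMPLATES, key=lambda k: distance(TEMPLATES[k], observed))
    return {"event": label}

print(classify([0.1, 0.6, 1.1, 0.4, 0.0]))  # nearest to the "walk" template
```

The point of the sketch is the final step: whatever the perceptual matching 
machinery looks like, its output is a discrete symbol that existing symbolic 
AGI approaches can consume.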

I wonder, will a certain naysayer feel vindicated that someone else sees 
simulation as vital to intelligence (and is using it to solve precisely the 
problems he says it's needed to solve), or will he be annoyed that the ultimate 
form the information takes is symbolic, which is compatible with semantic nets 
or any number of other existing AGI approaches?


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-2484a968
Powered by Listbox: http://www.listbox.com