Video analysis covers several functions, including recognition of objects:
seeing an object and immediately matching it with an object in memory.  See
a tree and identify it as a tree, then perhaps as a pine, then even as a
particular species depending on how much visual data is available; go right
down the taxonomic rank.  If an object is unknown, collect its
characteristics and study its behaviors.  Thinking about how the human eye
coupled with the brain's visual processing works is fantastic: recognition
is instant, and as soon as it occurs there is an immediate link to
knowledge about the object, so predictions and actions can proceed right
away.  Computers will eventually do this better, but the human eye/brain
coordination is mind-boggling, especially given the variety of objects
recognizable.  In certain narrow domains, though, computers are already
very efficient, missile tracking systems for example.
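As a rough sketch of that coarse-to-fine recognition, imagine walking down a
taxonomy and refining the label only while the classifier stays confident.
This is a toy illustration in Python; the class names, scores, and the 0.8
threshold are all made up for the example:

```python
# Sketch of coarse-to-fine recognition: descend a taxonomy only while
# some child label scores above a confidence threshold.  The taxonomy,
# scores, and threshold below are hypothetical.

TAXONOMY = {
    "tree": {"pine": {"scots pine": {}, "ponderosa pine": {}}, "oak": {}},
}

def classify(scores, taxonomy, threshold=0.8, label=None):
    """Walk the taxonomy, refining the label while confidence holds."""
    for child, subtree in taxonomy.items():
        if scores.get(child, 0.0) >= threshold:
            return classify(scores, subtree, threshold, child)
    return label  # stop at the last confident level

# Enough visual data to reach "pine", but not the species:
scores = {"tree": 0.97, "pine": 0.91, "scots pine": 0.45}
label = classify(scores, TAXONOMY)  # confident down to "pine"
```

With richer input the species scores would rise and the walk would simply
continue one level deeper; with poorer input it stops at "tree".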

But the agglomeration of video files on the internet presents a goldmine of
information.  Millions of video cameras run simultaneously around the
world recording data, and much of it is tappable, stored on freely
accessible distributed servers or beamed out in streams.  It is raw,
unprocessed data, perfect for software to start analyzing.  There is no
feedback loop when merely watching video, but within the video there is
cause and effect that can be studied.  The AGI needn't be involved in the
events for it to assemble pseudo-experiential knowledge.  Within the AGI's
mind, a simulated physics system in memory can model representations of
the objects in videos and use them to derive information, predict
dynamics, and flesh out what is off-screen.  The same system could be used
in controlling a robot as well, if the robot's physics engineering is
included in the model.  Once objects are identified, size/depth spatial
functions can be created for positioning and movement calculations; but
yes, grounding in the real physical world is definitely needed to train
and tune the system.
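To make the "predict dynamics and flesh out the off-screen" idea concrete,
here is a minimal sketch: estimate a velocity from two observed positions of
an object in a video, then extrapolate under gravity to predict where it
goes after it leaves the frame.  The units, frame interval, and constant-
gravity assumption are illustrative, not a real tracking system:

```python
# Minimal sketch: fit a velocity from two observed positions and
# extrapolate under constant gravity to predict future (possibly
# off-screen) positions.  Units and constants are illustrative.

def predict(p0, p1, dt, steps, g=(0.0, -9.8)):
    """Extrapolate positions from two observations under gravity g."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    x, y = p1
    path = []
    for _ in range(steps):
        vx += g[0] * dt          # update velocity by gravity
        vy += g[1] * dt
        x += vx * dt             # step position forward
        y += vy * dt
        path.append((x, y))
    return path

# A ball seen at (0 m, 10 m) then (1 m, 10.5 m) one frame (0.1 s) later:
future = predict((0.0, 10.0), (1.0, 10.5), dt=0.1, steps=5)
```

A real simulated-physics memory would track many objects and richer
dynamics, but the principle is the same: observed frames seed a model that
keeps running past what the camera shows.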

John


From: a [mailto:[EMAIL PROTECTED]
> 
> I doubt "video analysis" will be AGI. What kinds of video should we
> "analyze"? Is "analysis" going to turn out to be AGI? The
> implementation, I think, must be holistic.  What does "video analysis"
> mean? Is it just extracting the direction of motion or orientation? The
> machine must learn and adapt to video. We cannot just detect the
> direction of motion or changes in orientation; the machine has to
> respond to the changes in the video according to the knowledge it has
> learned.
> 
> Even with binocular video training, I don't think the machine will
> "understand" how far away an object is, its orientation, etc., without
> putting the robot in an interactive simulation environment.
> 
> It should train and interact to learn by itself instead of relying on
> hacks. The depth and motion in the video should be learned and trained
> so it will "understand" what depth and motion are. The only way for the
> robot to train depth and motion by itself is to put it in an
> interactive simulation environment. When the robot moves in the
> simulation, some surrounding objects will move, some will change in
> size, orientation, etc. It will therefore understand why some objects
> change: for example, whether the object itself was moving or the robot
> was moving; how size relates to closeness; etc. I think that's what
> humans do. People whose vision has been restored after being blind from
> birth have trouble seeing depth.
> 
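The "how size relates to closeness" regularity the robot would discover is
just the pinhole projection: apparent size scales inversely with distance.
A minimal sketch of that relation (the focal length and sizes here are
illustrative assumptions, not measured values):

```python
# Pinhole-camera sketch of the size/closeness relation: apparent
# (image) size is inversely proportional to distance, so depth can be
# recovered from apparent size if the true size is known.  Focal
# length and object sizes are illustrative.

def apparent_size(true_size, distance, focal_length=1.0):
    """Pinhole projection: image size = f * true size / distance."""
    return focal_length * true_size / distance

def distance_from_size(true_size, image_size, focal_length=1.0):
    """Invert the projection to recover depth from apparent size."""
    return focal_length * true_size / image_size

# A 2 m object whose apparent size halves has doubled its distance:
s_near = apparent_size(2.0, 5.0)   # at 5 m
s_far = apparent_size(2.0, 10.0)   # at 10 m, half the apparent size
```

An agent moving in a simulation sees exactly this mapping between its own
motion and the changing image sizes, which is the regularity the quoted
message argues it must learn interactively.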

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&id_secret=28573703-5c51ca
