>>>>>>>> Matt Mahoney wrote:

Repeat the trial many times.  Out of the thousands of perceptual
features present when the child hears "ball", the relevant features
will reinforce and the others will cancel out.

The concept of "ball" that a child learns is far too complex to
manually code into a structured knowledge base.  An orange is round but
not a ball.  An American football is not round.  Knowing that a ball is
a sphere does not help an AI viewing a small video of a tennis or
badminton match know that the single yellow pixel moving across the
image is a ball but the white pixel is not.

<<<<<<<<<<<<

I agree. But it is hard to believe that the relevant features simply
reinforce out of the thousands merely from seeing a ball several times.
Trials with artificial neural networks have learned some patterns but
failed to capture the concept of complex objects.
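The reinforce-and-cancel idea can at least be made concrete with a toy simulation (a minimal sketch; the feature counts and firing probabilities are purely illustrative assumptions, not a model of the brain): features that always co-occur with hearing "ball" accumulate evidence, while features that fire at chance level average out.

```python
import random

random.seed(0)

N_FEATURES = 1000           # total perceptual features per trial (toy number)
RELEVANT = set(range(10))   # features actually tied to "ball" (assumed)
TRIALS = 500

# Count how often each feature co-occurs with hearing "ball".
counts = [0] * N_FEATURES
for _ in range(TRIALS):
    for f in range(N_FEATURES):
        if f in RELEVANT:
            counts[f] += 1              # relevant features fire every time
        elif random.random() < 0.5:
            counts[f] += 1              # irrelevant features fire at chance

# After many trials the relevant features stand far above chance level.
rates = [c / TRIALS for c in counts]
top = sorted(range(N_FEATURES), key=lambda f: rates[f], reverse=True)[:10]
print(sorted(top))  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Of course, this only works because the toy relevant features are perfectly correlated with the word; the hard part, as noted above, is that real perceptual features are nothing like this clean.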

>>>>>>>> Matt Mahoney wrote:
The retina uses low level feature detection of spots, edges, and
movement to compress 137 million pixels down to 1 million optic nerve
fibers.  By the time it gets through the more complex feature detectors
of the visual cortex and into long term memory, it has been compressed
down to 2 bits per second.
<<<<<<<<<<<<<<

If we knew how this compression works, we would have solved one of the
main problems of AGI.
The process is partially learned, and here we must find out what is
learned and what is hard-wired from the first day.
I could imagine that a baby's brain receives fewer bits per second and
learns basic patterns during the first weeks. Then the eyes and the
retina change, and using the learned patterns the brain can later handle
the huge amount of bits coming from the eyes.
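A quick sanity check of the compression ratios implied by the figures quoted above (the per-receptor bit rate is my own illustrative assumption, not a claim from the original post):

```python
# Figures quoted in the thread: ~137 million photoreceptors,
# ~1 million optic nerve fibers, ~2 bits/s into long-term memory.
photoreceptors = 137_000_000
optic_nerve_fibers = 1_000_000
memory_bits_per_s = 2

retinal_ratio = photoreceptors / optic_nerve_fibers
print(f"retina: {retinal_ratio:.0f}:1")  # → retina: 137:1

# Assuming (purely for illustration) each receptor delivered 1 bit/s,
# the end-to-end reduction to 2 bits/s would be roughly 7e7 : 1.
overall_ratio = photoreceptors * 1 / memory_bits_per_s
print(f"overall: roughly {overall_ratio:.0f}:1")
```

Even on these crude assumptions, the gap between the retinal stage (about two orders of magnitude) and the end-to-end stage (seven or more) shows where the interesting, presumably learned, compression must happen.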

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/