--- "Dr. Matthias Heger" <[EMAIL PROTECTED]> wrote:
> For example, we had to learn to extract syllables from the sound wave
> of spoken language. Learning grammar rules happens at higher levels.
> Learning semantics is higher still, and so on.

Actually, that's only true of artificial languages.  Children learn
words with semantic content like "ball" and "milk" before they learn
function words like "the" and "of", despite the function words' higher
frequency.  Techniques for parsing artificial languages fail for
natural languages because the correct parse depends on the meanings of
the words, as in the following examples:

- I ate pizza with pepperoni.
- I ate pizza with a fork.
- I ate pizza with a friend.
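To make the ambiguity concrete, here is a toy sketch (my own
illustration, using NLTK's chart parser with a grammar I made up for
just these three sentences).  A purely syntactic grammar licenses both
attachments of the "with"-phrase, so syntax alone cannot choose:

    import nltk

    # Toy grammar: the PP can attach to the verb phrase (instrument,
    # companion) or to the noun phrase (topping).  Both rules are needed
    # to cover all three sentences, so every sentence gets two parses.
    grammar = nltk.CFG.fromstring("""
        S  -> NP VP
        VP -> V NP | VP PP
        NP -> NP PP | Det N | 'I' | 'pizza' | 'pepperoni'
        PP -> P NP
        Det -> 'a'
        N  -> 'fork' | 'friend'
        V  -> 'ate'
        P  -> 'with'
    """)
    parser = nltk.ChartParser(grammar)

    for s in ("I ate pizza with pepperoni",
              "I ate pizza with a fork",
              "I ate pizza with a friend"):
        trees = list(parser.parse(s.split()))
        print(s, "->", len(trees), "parses")  # 2 each; syntax can't decide

Only knowledge of what forks, friends, and pepperoni are picks the
right tree in each case.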

> But it is a matter of fact that we use an O-O-like model at the
> top levels of our world model.
> You can also see this in language grammar: subjects, objects,
> predicates, and adjectives have their counterparts in the O-O paradigm.

This is the false path of AI that so many have followed.  It seems so
obvious that high-level knowledge has a compact representation like
Loves(John, Mary) that is easily represented on a 1960s-era computer. 
We can just fill in the low-level knowledge later.  This is completely
backwards from the way people learn.  The most spectacular failure is
Cyc's 20+ year effort to manually encode common-sense knowledge and
its subsequent failure to attach a natural language interface.
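For concreteness, the style being criticized looks something like this
(a hypothetical sketch of the general approach, not Cyc's actual
machinery): knowledge as a set of ungrounded symbolic tuples.

    # Hypothetical sketch of the 1960s-style compact representation.
    # The facts are trivial to store and query, but the symbols are not
    # grounded in perception, which is where the approach breaks down.
    facts = {("Loves", "John", "Mary")}

    def holds(pred, *args):
        return (pred, *args) in facts

    print(holds("Loves", "John", "Mary"))  # True, yet "Loves" means
                                           # nothing to the system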

The obvious hierarchical structure of knowledge has a neural
representation, as layers from simple to arbitrarily complex.  In
language, the structure is phonemes -> words -> semantics -> grammar. 
For vision (I am oversimplifying), it is pixels -> lines -> shapes ->
objects.  Learning proceeds from simple to complex, training one layer
at a time.  Children learn phoneme recognition and basic visual
perception at a young age or not at all.
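A rough sketch of what "training one layer at a time" can mean in
practice (my own illustration, a greedy stack of autoencoders in plain
numpy; the argument above does not depend on this particular
algorithm):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_layer(data, n_hidden, lr=0.1, epochs=100, seed=0):
        """Train one autoencoder layer; return its encoder weights."""
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=0.1, size=(data.shape[1], n_hidden))  # encoder
        V = rng.normal(scale=0.1, size=(n_hidden, data.shape[1]))  # decoder
        for _ in range(epochs):
            h = sigmoid(data @ W)            # features for this level
            r = sigmoid(h @ V)               # reconstruction of the input
            d_r = (r - data) * r * (1 - r)   # backprop: output layer
            d_h = (d_r @ V.T) * h * (1 - h)  # backprop: hidden layer
            V -= lr * h.T @ d_r / len(data)
            W -= lr * data.T @ d_h / len(data)
        return W

    # Stack the layers: each is trained on the frozen output of the
    # last, so representations go from simple to complex, one level at
    # a time.
    x = np.random.default_rng(1).random((200, 64))  # stand-in for pixels
    acts, stack = x, []
    for n_hidden in (32, 16, 8):             # lines -> shapes -> objects
        W = train_layer(acts, n_hidden)
        stack.append(W)
        acts = sigmoid(acts @ W)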

We should not expect language modeling to be easier than vision; the
brain devotes similar volumes to each.  Long-term memory tests by
Landauer [1] show similar learning rates for words and pictures, about
2 bits per second each.  We shy away from a neural approach because a
straightforward brain-sized neural network simulation would require
10^15 bits of memory and 10^16 operations per second.  Yet long-term
memory holds only about 10^9 bits [1], so surely we can do better.  So
far we have not, nor do we have any explanation for why it takes a
million synapses to store one bit of information.
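The mismatch is just the ratio of the two figures above (a
back-of-envelope check using the numbers as given, and assuming the
10^15 bits correspond to roughly one bit per synapse):

    sim_bits = 10**15   # memory for a brain-sized simulation (~1 bit/synapse)
    ltm_bits = 10**9    # Landauer's estimate of long-term memory content [1]
    print(sim_bits // ltm_bits)   # 1000000: a million synapses per stored bit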

[1] http://www.merkle.com/humanMemory.html



-- Matt Mahoney, [EMAIL PROTECTED]
