Matt Mahoney [mailto:[EMAIL PROTECTED]] wrote:

>>>>>>>>>>>>>>>
Object oriented programming is good for organizing software but I don't
think for organizing human knowledge.  It is a very rough
approximation.  We have used O-O for designing ontologies and expert
systems (IS-A links, etc), but this approach does not scale well and
does not allow for incremental learning from examples.  It totally does
not work for language modeling, which is the first problem that AI must
solve.
<<<<<<<<<<<<<<<<<

I agree that the O-O paradigm is not adequate to model all the learning
algorithms and models we use. My own example of recognizing voices should
show that I doubt we use O-O models in our brain for everything in our
environment.

I think our brain learns a somewhat hierarchical model of the world. And the
algorithms for the low levels (e.g. voices, sounds) are probably completely
different from the algorithms for the higher levels of our models. It is
evident that a child has learning capabilities that are far beyond those of
an adult.
The reason is not only that the child's brain is nearly empty.
The physiological architecture is also different to some degree. So we can
expect that learning the basic low levels of a world model requires
algorithms which we only had as children.
And the result of that learning is, to some degree, used as a bias in later
learning algorithms when we are adults.

For example, we had to learn to extract syllables from the sound wave of
spoken language. Learning the grammar rules happens at higher levels.
Learning semantics is higher still, and so on.

But it is a matter of fact that we use an O-O like model in the top levels
of our world model.
You can also see this from language grammar: subjects, objects, predicates,
and adjectives have their counterparts in the O-O paradigm.

A photo of a certain scene is physically an array of colored pixels. But you
can ask a human what he sees, and a possible answer could be:
Well, there is a house. A man walks to the door. He wears a blue shirt. A
woman looks through the window ...

Obviously, the answer shows a lot about how people model the world at their
top (= conscious) level.
And obviously the model consists of interacting objects with attributes and
behavior.
So knowledge representation at higher levels is indeed O-O like.
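To make the analogy concrete, here is a toy sketch in Python (my own
construction, purely illustrative — not a claim about how the brain
implements anything). The class and attribute names are mine; the point is
only that the scene description above maps naturally onto objects with
attributes and behavior, and that grammatical roles line up with O-O
concepts: subject -> object instance, adjective -> attribute, predicate ->
method, grammatical object -> method argument.

```python
# Toy illustration of the photo-scene description as interacting
# objects with attributes and behavior.  All names are hypothetical.

class House:
    def __init__(self):
        self.door = "door"        # parts of the house other objects interact with
        self.window = "window"

class Person:
    def __init__(self, shirt_color):
        self.shirt_color = shirt_color          # adjective -> attribute ("blue shirt")

    def walk_to(self, target):                  # predicate -> method ("walks to")
        return f"a person in a {self.shirt_color} shirt walks to the {target}"

house = House()
man = Person(shirt_color="blue")                # subject -> object instance ("a man")
print(man.walk_to(house.door))                  # grammatical object -> argument ("the door")
```

Running this prints "a person in a blue shirt walks to the door" — the same
kind of sentence a human produces when asked to describe the scene.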

I think your answer and mine show that we do not use a single algorithm to
extract all the regularities from our perceptions.

And more importantly: there is physiological and psychological evidence that
the algorithms we use change to some degree during the first decade of our
life.



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/