>>>>>>>>>> Matt Mahoney [mailto:[EMAIL PROTECTED]] wrote:

> eat(Food f)
> eat(Food f, List<SideDish> l)
> eat(Food f, List<Tool> l)
> eat(Food f, List<People> l)
> ...

This type of knowledge representation has been tried and it leads to a
morass of rules and no intuition on how children learn grammar.  We do
not know how many grammar rules there are, but it probably exceeds the
number of words in our vocabulary, given how long it takes to learn.

<<<<<<<<<<<<<<<

As I said, my intention is not to find a fixed set of O-O like rules to create
AGI.
The fact that early approaches failed to build AGI from similar sets of rules
does not prove that AGI cannot consist of such rules.

For example, there were also approaches to create AI with biologically
inspired neural networks. They had some minor success, but there was no real
breakthrough there either.

So this proves nothing except that the problem of AGI is not easy to solve.

The brain is still a black box with regard to many phenomena.

We can analyze our own conscious thoughts and our communication, which is
nothing other than sending ideas and thoughts from one brain to another via
natural language.

I am convinced that the structure and contents of our language are not
independent of the internal representation of knowledge.

And from language we must conclude that there are O-O like models in the
brain, because the semantics of language is O-O.

There might be millions of classes and relationships.
And surely, every day or night, the brain refactors parts of its model.
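
To make the O-O reading of language concrete, take a sentence like "the child
eats an apple with a spoon". It decomposes into classes, instances and a
method call. The following Java sketch is purely illustrative (all class names
are mine, not a proposed AGI design):

// Illustrative only: the noun categories become classes, the subject becomes
// an instance, the verb becomes a method, and its arguments fill the roles.
class Entity { String name; Entity(String name) { this.name = name; } }

class Food extends Entity { Food(String name) { super(name); } }
class Tool extends Entity { Tool(String name) { super(name); } }

class Person extends Entity {
    Person(String name) { super(name); }

    // "eats ... with ..." as a method with thematic-role parameters
    void eat(Food what, Tool with) {
        System.out.println(name + " eats " + what.name + " with a " + with.name);
    }
}

class SentenceDemo {
    public static void main(String[] args) {
        Person child = new Person("child");              // subject -> instance
        child.eat(new Food("apple"), new Tool("spoon")); // verb -> method call
    }
}
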

The roadmap to AGI will probably be top-down, not bottom-up.
The bottom-up approach is the one used by biological evolution.

Creating AGI by software engineering means that we must first know where we
want to go and then how to get there.

Human language and conscious thought suggest that AGI must be able to
represent the world in an O-O like way at the top level.
So this ability answers the question of where we want to go.

Again, this does not mean that we must find all the classes and objects by
hand. Rather, we must find an algorithm that generates O-O like models of its
environment from its perceptions, together with some bias, where the need for
that bias can be argued on performance grounds.
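
As a toy illustration of what such an algorithm could look like (my own
sketch under simplifying assumptions, not a worked-out proposal): group
perceived feature sets into candidate classes by similarity, with the
similarity threshold playing the role of the bias mentioned above.

import java.util.*;

class ClassInducer {
    // Each perception is a set of observed features, e.g. {"has_wings", "flies"}.
    private final List<Set<String>> classes = new ArrayList<>();
    private final double threshold; // the bias: how similar a perception must be

    ClassInducer(double threshold) { this.threshold = threshold; }

    void perceive(Set<String> features) {
        for (Set<String> cls : classes) {
            if (jaccard(cls, features) >= threshold) {
                cls.retainAll(features); // keep the shared features as the class definition
                return;
            }
        }
        classes.add(new HashSet<>(features)); // no match: hypothesize a new class
    }

    private static double jaccard(Set<String> a, Set<String> b) {
        Set<String> inter = new HashSet<>(a); inter.retainAll(b);
        Set<String> union = new HashSet<>(a); union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }
}

A real system would of course also need relations, methods and class
hierarchies; the point here is only that class formation can be driven by
perception plus a bias.
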

We can expect that the top-level architecture of AGI will be the easiest part
of an AGI project, because the contents of our own consciousness give us some
hints (though not all) about how our own world representation works at the top
level. And this is O-O in my opinion. There is also the phenomenon of
associations between patterns (classes). But that is just a matter of
retrieving information and directing attention to the relevant parts of the
O-O model, and it does not contradict the O-O paradigm.
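
To show that associations fit on top of an O-O model rather than replacing
it, here is another illustrative Java sketch (my own, with invented names):
classes are linked by weighted associations, and "attention" simply retrieves
the most strongly associated classes for a given cue.

import java.util.*;

class AssociationNet {
    // Weighted, symmetric links between class names in the O-O model.
    private final Map<String, Map<String, Double>> links = new HashMap<>();

    void associate(String a, String b, double strength) {
        links.computeIfAbsent(a, k -> new HashMap<>()).merge(b, strength, Double::sum);
        links.computeIfAbsent(b, k -> new HashMap<>()).merge(a, strength, Double::sum);
    }

    // Return the classes most strongly associated with the cue, strongest first.
    List<String> attend(String cue, int limit) {
        return links.getOrDefault(cue, Map.of()).entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(limit)
                .map(Map.Entry::getKey)
                .toList();
    }
}

For example, after net.associate("Food", "Eating", 1.0) and
net.associate("Food", "Kitchen", 0.5), a call to net.attend("Food", 2) would
return Eating before Kitchen. The retrieval mechanism is separate from, and
layered on, the O-O structure itself.
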

When we go to lower levels, it is clear that difficulties arise.
The reason is that we have no way to consciously introspect the low levels of
our brains. Science gives us hints mainly for the lowest levels (chemistry,
physics, ...).

So the medium layers of AGI will be the most difficult ones.
By the way, this is often the case in normal software too.
The medium layers will contain the base functionality and the framework for
the top level.




