Isn't one of the key concepts behind hierarchical memory (as described in
Jeff Hawkins's work, the Serre paper I have cited, Rodney Brooks's
subsumption, etc.) exactly that it builds hierarchically upon the
regularities and modularities of whatever world it is learning in, acting
in, and representing?
Ed Porter

-----Original Message-----
From: Robert Wensman [mailto:[EMAIL PROTECTED]
Sent: Saturday, October 20, 2007 7:39 AM
To: agi@v2.listbox.com
Subject: Re: [agi] evolution-like systems



I am not exactly sure how GA (genetic algorithms) and GP (genetic
programming) are defined. The concepts of genes and evolution are very
much interconnected, so how we define genetic algorithms and genetic
programming depends on how we define evolutionary learning, which is
partly the topic of this thread.

But to set that question aside and still answer yours: GA and GP need to
be pretty advanced to quell the combinatorial explosion you speak of if
they are to be used in AGI. Starting a naive evolutionary process without
trying to speed things up would be pointless. It could take zillions of
years to get anywhere, essentially repeating biological evolution in a
computer.

I believe Ben Goertzel and Novamente have some interesting points on this
topic. His basic point is that many of the necessary AGI algorithms are
inherently exponential and prone to combinatorial explosion, so the key
issue is to interconnect a number of different systems so that they
compensate for each other's inherent drawbacks. In their case, they
combine evolutionary learning with probabilistic reasoning.

I have thought about another way to achieve the same thing, although my
ideas are far from fully worked out. My idea is that evolutionary learning
of a world model needs to exploit the modularity of reality somehow, in
order to factorize the adaptive process.

If an adaptive process can be factorized, the time needed to perform it
drops drastically. This is a universal phenomenon, true regardless of the
adaptive/learning/evolutionary algorithm. For example, the simplest
adaptive process just creates a model at random and tests whether it is
correct. If a model is described using 32 bits, the expected time for
adaptation is on the order of 2^32. But if the model can be divided into
two independent 16-bit parts, the order of adaptation is only
2^16 + 2^16 = 2^17.
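As a minimal sketch of the arithmetic above (the function name, bit
widths, and random seed are illustrative, not from the original post):
random search over two independent 16-bit halves finds each half in
roughly 2^16 guesses, so the combined cost is on the order of 2^17
rather than the 2^32 expected for an unfactorized 32-bit search.

```python
import random

def random_search(target, n_bits):
    """Guess uniformly at random until the target value is hit.

    Returns the number of guesses used; the expectation is ~2**n_bits.
    """
    trials = 1
    while random.getrandbits(n_bits) != target:
        trials += 1
    return trials

random.seed(0)  # deterministic run, for illustration only

# A 32-bit "model" split into two independent 16-bit halves.
target_left = random.getrandbits(16)
target_right = random.getrandbits(16)

# Factorized search: each half is adapted on its own.
left = random_search(target_left, 16)
right = random_search(target_right, 16)
total = left + right

print("factorized guesses:", total)        # on the order of 2**17
print("unfactorized expectation:", 2**32)  # ~4.3 billion guesses
```

The speedup depends entirely on the halves really being independent; if
the fitness of one half depends on the other, the search degenerates back
toward the full 2^32 case.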

Fortunately, in our world objects are somewhat independent of each other.
I can rest reasonably assured that the inner state and mechanics of my
toaster do not interfere with the inner state and mechanics of my
microwave oven. This means I could hypothetically apply evolutionary
learning to my toaster and my microwave oven separately, and factorize
the learning process in that way.

In addition, in our world there seem to be classes of objects of similar
design and function. If I understand the basics of one tree, for example,
I can apply that model to many more trees in a forest. These regularities
are also something that could be exploited to speed up adaptation, though
perhaps in a different way.

So basically yes, making evolutionary learning work fast enough is what
AGI is all about. But I do not feel that these methods of speeding things
up make it any less an evolution, at least not in my opinion. The reason
I like the concept of evolutionary learning is that it implies some form
of open-endedness, similar to how we think the thoughts of an intellect
can go in any direction. The words learning and adaptation have been
overused in narrow AI, in oversimplified contexts.

I would like to direct a question to Ben Goertzel, if he happens to read
this. I am a fan of Novamente and its ideas for quelling combinatorial
explosions. But I wonder whether he has ever thought along the lines
presented here: factorizing adaptation by exploiting the modularity of
reality.

/Robert W

2007/10/20, William Pearson <[EMAIL PROTECTED]>:

On 20/10/2007, Robert Wensman <[EMAIL PROTECTED]> wrote:
> It seems your question is stated on the meta-discussion level, since
> you ask for a reason why there are two different beliefs.
>
> I can only answer for myself, but to me some form of evolutionary
> learning is essential to AGI. Actually, I define intelligence to be "an
> eco-system of ideas that compete for survival". The fitness of such
> ideas is determined through three aspects:

The trouble with the word evolution is that it brings to mind Darwinian
evolution, which is rightly dismissed as slow and random. Computational
selectionist systems can be Lamarckian, and the programs can learn by
themselves as well as being selected, so the speed limits of Darwinian
evolution do not apply. The central dogma of molecular biology also does
not apply.

However, this does mean that you have to use systems more advanced than
GA or GP to counter the criticism that evolutionary systems are
inadequate for intelligence.

Will Pearson

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?



