Pei,
> However, "to be able to generate radically new compounds" is not
> necessarily what matters. Given the same set of initial terms coming from
> the experience of the system, and allowing the same time to generate new
> compounds, the incremental approach will produce compounds closer to the
> experience of the system, though it may miss good ones that are too far
> away, while the evolutionary approach may produce the same good ones, but
> also many compounds which are completely useless.
My contention is that the incremental approach will take unacceptably long
to generate the compounds needed to solve nontrivially complex practical
problems, whereas the evolutionary approach can generate those compounds in
many cases.
To take a simple problem as an analogy: finding an optimal (or near-optimal)
physically-laid-out circuit implementing a given logical design is a problem
solvable by evolutionary algorithms (as John Koza has shown), but no one has
been able to solve it using pure incremental inference approaches, even
though the whole problem is very easily specified in logical terms. Formal
logic has been used to check the correctness of circuits but not to generate
new ones.
[I add that Koza's evolutionary algorithms are a lot cruder than the ones in
Novamente, which are based on Bayesian optimization and interface closely
with Novamente's probabilistic reasoning system.]
Another relevant fact is that, in the automated theorem-proving literature,
so far the best theorem-provers succeed at proving nontrivial theorems only
when used interactively by humans. The human directs the activity of the
theorem-prover at crucial junctures -- e.g., telling it which compound
terms to form....
> This is actually what I believe is the difference between "intelligence"
> and "evolution" --- though both are adaptive mechanisms, the former makes
> changes according to past experience, is incremental, and will be bounded
> by experience; the latter makes random changes (which will be selected by
> future experience), and is radical and experience-independent. Evolution
> produces novel structures, by paying the price of long times and dead
> individuals (those with unfortunate changes).
Whereas I think that evolution is a dynamic that plays a role in human
intelligence as well as in ecosystems, and can profitably play a role in AI
systems as well.
> To me, the important thing here is not one or two great ideas, but the
> average quality of the compounds.
That's just not true -- for instance, what if you and I each generate 10
designs for an AGI system, and yours are all of quality 5, whereas 9 of mine
are of quality 0 and one of mine is of quality 10? Clearly, in that case,
I've done a better job of thinking about AGI than you! Assuming there is a
selection algorithm in play that can pick which ideas to use adaptively.....
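A tiny sketch makes the selection point concrete (the scores here are just
the hypothetical ones from the thought experiment above, not real data):

```python
# Hypothetical design-quality scores from the thought experiment above.
yours = [5] * 10        # ten uniformly decent designs
mine = [0] * 9 + [10]   # nine useless designs and one excellent one

# By average quality, the uniform set wins...
assert sum(yours) / len(yours) == 5.0
assert sum(mine) / len(mine) == 1.0

# ...but a selection step that keeps only the best candidate reverses this.
assert max(mine) > max(yours)   # 10 beats 5
```

The average rewards consistency; a selection algorithm only cares about the
best candidate it can find in the pool.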
>Given the same resources, I
> cannot see why
> evolution gives a better result in this aspect. Furthermore, I don't know
> evidence indicating that our mind generate compounds randomly. There are
> much more pieces of evidence indicating intelligence as a
> experience-driven
> mechanism.
Well, evolutionary learning is a variety of experience-based learning.
Look at how BOA (the evolutionary algorithm we use in Novamente) works. You
want to generate a computer program (represented in Novamente by a
combinator tree) to achieve some goal. So you look at prior programs that
have tried to achieve the goal and succeeded modestly or failed, and you
create probabilistic models of the characteristics of the modestly
successful ones. Then you use these models to generate new programs. Yes,
there's a probabilistic aspect -- but no more so than in the judgment that,
since about 50% of the people you've seen in the past were male, probably
about 50% of the people you'll see in the future will be male. It's
not a matter of "random generation," it's a matter of the assumption that
probabilistic patterns in the behavior of programs already observed will
likely also apply to new programs. Random generation may be used to create
an initial population of programs, but that's no different than if, when
searching for a path down from a mountain, I begin by wandering around the
peak to see what I can see -- gathering initial experience which I can then
use to guide the more structured and useful part of my search process.
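As an illustration of this style of learning, here is a minimal
estimation-of-distribution sketch -- a univariate model over bitstrings,
much cruder than BOA's Bayesian networks over combinator trees, with all
names, parameters, and the toy fitness function invented for this example:

```python
import random

def eda_step(population, fitness, sample_size):
    """One estimation-of-distribution step over fixed-length bitstrings:
    model the more successful half, then sample new candidates from it."""
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: len(ranked) // 2]
    n = len(elite[0])
    # Univariate model: per-position probability of a 1 among the elite.
    probs = [sum(ind[i] for ind in elite) / len(elite) for i in range(n)]
    # Generate new candidates by sampling from the learned model.
    return [[1 if random.random() < p else 0 for p in probs]
            for _ in range(sample_size)]

# Toy goal: maximize the number of 1-bits (OneMax).
random.seed(0)
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for _ in range(15):
    pop = eda_step(pop, fitness=sum, sample_size=20)
best = max(pop, key=sum)
```

Note where randomness enters: only in the initial population and in sampling
from the learned model; each generation's model is fit to the observed
behavior of the previous generation's candidates.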
> A technical issue: Ben seems to see the compound generation in NARS as a
> kind of hill-climbing, which will be trapped at local maxima. That is not
> the case, because to get ((P and Q) or R), (P and Q) does not need to be
> evaluated as a "good" compound by the system. It only needs to exist
> before ((P and Q) or R) can be generated.
I think this aspect of NARS is like a kind of stochastic hill-climbing.
Because I think NARS is not going to keep (P AND Q) in its active memory
very long if it's not judged as "good" by the system. Is it? How does
NARS, in the current conception, decide what to keep around and what not to?
What am I misunderstanding here?
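To make the question concrete, here is a toy sketch of the retention
dynamic being asked about -- the scoring rule, threshold, and compound
syntax are all invented for illustration, not a description of NARS's
actual memory mechanism:

```python
import random

def build_compounds(terms, score, threshold, steps):
    """Grow compounds only from parts the memory has kept: a new compound
    can be formed only if its components survived the retention test."""
    memory = set(terms)
    for _ in range(steps):
        a, b = random.sample(sorted(memory), 2)
        op = random.choice(["and", "or"])
        compound = f"({a} {op} {b})"
        if score(compound) >= threshold:  # kept only if judged "good" now
            memory.add(compound)
    return memory

random.seed(1)
# If every compound passes the test, nested ones like ((P and Q) or R) can arise.
kept = build_compounds(["P", "Q", "R"], score=lambda c: 1, threshold=0, steps=10)
```

Under this retention rule, if (P and Q) scores below the threshold it is
never kept, so ((P and Q) or R) can never be formed later -- which is the
hill-climbing-style trap in question.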
-- Ben