For those not familiar with the topic, the issue Ben raised is this: an intelligent reasoning system needs a way to generate new concepts by itself (besides obtaining new concepts from the outside). Since new concepts cannot come from nowhere, we usually take them to be compounds composed of existing concepts (they will later gradually acquire their own meaning, but that is a separate issue).

Concretely speaking, if the system currently has terms (names of concepts) P, Q, and R (each of which can itself be a compound), the "incremental" approach in NARS will, in certain situations, get (P and Q) first, then combine it with R to get ((P and Q) or R). In Novamente, on the other hand, ((P and Q) or R) can be generated directly by evolutionary computing, without necessarily getting (P and Q) first.
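To make the contrast concrete, here is a rough sketch in Python (my own illustration only, not the actual NARS or Novamente algorithms) of the two ways such a compound could appear:

import random

def incremental_step(terms):
    # Local, stepwise: combine two existing terms with one connective.
    a, b = random.sample(terms, 2)
    op = random.choice(["and", "or"])
    return (op, a, b)                      # e.g. ("and", "P", "Q")

def evolutionary_step(terms, max_depth=3):
    # One jump: grow a whole compound at once (crossover/mutation-style).
    if max_depth == 0 or random.random() < 0.4:
        return random.choice(terms)
    op = random.choice(["and", "or"])
    return (op,
            evolutionary_step(terms, max_depth - 1),
            evolutionary_step(terms, max_depth - 1))

terms = ["P", "Q", "R"]
# Incremental: ((P and Q) or R) only after (P and Q) already exists.
pq = incremental_step(terms)               # may yield ("and", "P", "Q")
terms.append(pq)
pq_or_r = incremental_step(terms)          # may yield ("or", ("and","P","Q"), "R")
# Evolutionary: the same compound can appear directly, in a single jump.
candidate = evolutionary_step(["P", "Q", "R"])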

Ben's argument is that, under a given time/space restriction, the incremental approach may miss certain good compounds that the evolutionary approach can obtain, and I actually agree: by using random crossover and mutation, the evolutionary approach makes more radical changes than the incremental approach does.

However, "to be able to generate radically new compound" is not necessarily what matters. Given the same set of initial terms coming from the experience of the system, and allowing the same time to generate new compounds, the incremental approach will produce compounds closer to the experience of the system, though may miss good ones too far away, while the evolutionary approach may produce same good ones, but also many compounds which are completely useless.

This is actually what I believe to be the difference between "intelligence" and "evolution": though both are adaptive mechanisms, the former makes changes according to past experience, is incremental, and is bounded by that experience; the latter makes random changes (which will be selected by future experience), and is radical and experience-independent. Evolution produces novel structures, but pays the price in long time spans and dead individuals (those with unfortunate changes).

To me, the important thing here is not one or two great ideas, but the average quality of the compounds. Given the same resources, I cannot see why evolution would give a better result in this respect. Furthermore, I know of no evidence indicating that our minds generate compounds randomly, while there is much more evidence indicating that intelligence is an experience-driven mechanism.

A technical issue: Ben seems to see the compound generation in NARS as hill-climbing, which would get trapped at local maxima. That is not the case: to get ((P and Q) or R), (P and Q) does not need to be evaluated as a "good" compound by the system. It only needs to exist before ((P and Q) or R) can be generated.
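A tiny sketch of this point (hypothetical names, not NARS internals): the intermediate compound only has to be present in memory, not to pass any quality threshold, before it can be combined further.

memory = {("and", "P", "Q"): 0.1,   # low "usefulness" score, but it exists
          "R": 0.9}

def can_combine(term, memory):
    return term in memory            # existence is checked, not quality

if can_combine(("and", "P", "Q"), memory):
    new_term = ("or", ("and", "P", "Q"), "R")   # generated despite the low score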

Pei

----- Original Message ----- From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, September 28, 2004 9:47 AM
Subject: RE: [agi] Computational Feasibility and NARS




James,

> * Similarly, pure logical reasoning systems like NARS are capable of general
> intelligence only when supplied with infeasibly much computing power


I don't think this follows.

I would make this assertion about classical logical reasoning systems,
but non-axiomatic systems like NARS are inherently scalable.  If the
amount of computing power required for NARS is "infeasible", it is
because it was poorly engineered in implementation, not because it is
mandated by the algorithmic nature of such systems.

Why axiomatic reasoning systems are intractable while non-axiomatic
reasoning systems are most certainly tractable is a subtlety in the
mathematics that seems to be lost on many people every time discussions
of it happen -- I've had this come up many times.  Non-axiomatic models
allow efficient representation and information coding modes that have
no analog in axiomatic models.  This seems to get glossed over,
probably because discussions of these things never leave the very high
conceptual level.

James, Pei and I worked closely together for several years, so it's definitely not true, in our case, that our discussions never left the very high conceptual level ;)

My point was one about dynamics rather than representation.

My point really hinges on the definition of "pure logical reasoning
systems." Novamente is, in a sense, a logic-based system (built on probabilistic
term logic), but it contains significant components that are fundamentally based
on mathematics other than formal logic. So I don't consider Novamente a
"pure logical reasoning system," even though formal logic does play a
significant role in it.


The point I was raising about NARS, as it's currently formulated, is that it
contains no inference control mechanisms capable of constructing large compound
logical terms all in one go. This is because its control mechanisms are pretty
closely tied to the rules of logic that drive its reasoning, and the rules of
logic as defined in NARS tend to be local in character, i.e. each rule combines
a couple of terms into a new term. Using this kind of control mechanism,
complex compound terms will be built up only incrementally rather than all at
once. I believe that this kind of incremental-agglomeration approach to
building complex terms isn't adequate in general, though it's surely useful
for many purposes.
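As a rough illustration of the incremental-agglomeration point (my own sketch, not NARS code): a compound with d nested binary connectives requires d local combination steps, and each intermediate term must be formed and retained before the next step can use it.

def incremental_steps_needed(term):
    # Atomic terms need no combination step; each connective adds one step.
    if isinstance(term, str):
        return 0
    op, left, right = term
    return 1 + incremental_steps_needed(left) + incremental_steps_needed(right)

target = ("or", ("and", "P", "Q"), "R")
print(incremental_steps_needed(target))   # 2: first (P and Q), then the full term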


Now, it's definitely possible to insert non-incremental
compound-term-creation methods into the NARS framework --- but then, my
hypothesis is that the conceptual and mathematical basis of these methods
will need to be something other than formal logic.  Formal logic doesn't
contain the concepts needed to derive such methods.

My suspicion is that, if Pei ever builds NARS into a complete implemented
system and tries to do complex higher-order reasoning with it, he'll run
into this problem, and will then add some non-incremental control mechanisms
into his system. (I also think that, at this point, he'll replace his
induction and abduction truth value formulas, but that's another story ;-)


[A side note: For those who aren't familiar with the long-running debates
between Pei Wang and myself, you should know that Pei and I have a lot of
respect for one another's AI approaches even though we don't agree on
everything. If I argue with Pei's ideas it's because, unlike most ideas in
the AI field, I actually consider them worth arguing with...]


> * However, I think that evolutionary programming algorithms and logical
> reasoning systems may both be incorporated as components of Artificial
> General Intelligence systems that can achieve decent levels of AGI with
> feasibly much computing power


Again, I don't see how this follows.

Hacking together multiple representations is inherently non-scalable in
computer science, as it forces exponential complexities and really
doesn't allow for any universal "friendly exponent" approximations.
How this can be considered "computationally feasible" while something
like NARS is not does not appear to square with any scalable software
design theory that I'm familiar with.

I'm not proposing combining multiple representations; rather, I'm proposing
that multiple dynamic mechanisms need to exist, acting on the same
representation. Formal logic is a guide for constructing dynamic
mechanisms that incrementally construct new knowledge from existing
knowledge; but there is also a need for dynamic mechanisms that can make big
leaps beyond current knowledge, creating large compound terms speculatively
yet in a not entirely random way.
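A minimal sketch of the design being argued for here (all names hypothetical, not Novamente's actual API): one shared term store, with several dynamic mechanisms updating it.

import random

class TermStore:
    """One shared knowledge representation: term -> weight."""
    def __init__(self):
        self.terms = {}

    def add(self, term, weight):
        self.terms[term] = weight

class IncrementalLogicMechanism:
    """Local steps: combine a couple of existing terms at a time."""
    def step(self, store):
        known = list(store.terms)
        if len(known) >= 2:
            a, b = known[0], known[1]
            store.add(("and", a, b), min(store.terms[a], store.terms[b]))

class EvolutionaryMechanism:
    """Big speculative leaps: propose a large compound term in one go."""
    def step(self, store):
        atoms = [t for t in store.terms if isinstance(t, str)]
        if len(atoms) >= 3:
            a, b, c = random.sample(atoms, 3)
            store.add(("or", ("and", a, b), c), 0.5)   # speculative initial weight

store = TermStore()
for name in ["P", "Q", "R"]:
    store.add(name, 0.9)
for mechanism in [IncrementalLogicMechanism(), EvolutionaryMechanism()]:
    mechanism.step(store)              # different dynamics, same representation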


Novamente has a single knowledge representation but multiple dynamic
mechanisms updating it; some of these are logic-based and some
evolutionary-programming based.  But our logic and our evolutionary
programming are both probabilistically based so everything speaks the same
language; only one knowledge representation is needed.

This isn't really about AGI at all, as these objections would apply to
any kind of ordinary scalable systems software engineering.  I've often
thought that half the problem with AGI wasn't the theory per se but the
lack of good knowledge of what theoretically correct design of the
abstract concepts should look like.  Too much ivory tower, not enough
field engineer.  :-)

In general, I tend to agree with this point. To the extent that AGI requires scale, building an AGI with current technology requires a mastery of the art of building scalable software systems, which is not at all trivial, and is not something that cognitive scientists or AI theorists often know a lot about.

However, if your objections are aimed at my own work, they kinda miss the
mark. The Novamente project is not ivory-tower, it's commercial, and
Novamente has been engineered for scalability by a team with experience
architecting real-world scalable software systems, both narrow-AI-based and
non-AI-related. Some folks have complained that some of the core code is
too efficiency-oriented in its reliance on low-level C design techniques;
but no one who has seen the code and detailed design has ever argued that it
wasn't well-engineered for scalability. The only complaints I've heard have
regarded the steep learning curve in getting used to the codebase, which is
related to the efficiency-orientation of the code. (Don't get me wrong, we
use objects and nice design patterns, but where there's been a compromise
between performance and ease-of-comprehension-for-the-novice, we've often
chosen performance, though we've hidden the performance tricks behind nice
interfaces wherever possible.)


Creating an architecture that supports multiple AI algorithms operating on a
common representation, in an efficient, scalable and maintainable way, has
not been an easy challenge, but I believe we've met it. Part of the key is
that we don't have THAT many AI algorithms --- all our different MindAgent
objects acting on our dynamic knowledge store are based on probability
theory: probabilistic term logic, Bayesian Optimization Algorithm for global
learning, and stochastic local search for some special cases. So we haven't
quite solved the problem of making an architecture that supports an
arbitrary number of generic AI algorithms operating on a common
representation. We started out that way, and then wound up doing a lot of
tuning of system components based on the specific AI algorithms in the
Novamente design.


-- Ben


