As Ben said, we agree on many issues --- actually, we have similar opinions whenever talking about any third AI work, that is, besides NARS and Novamente. ;-)

Even NARS and Novamente look similar when compared with other AI works. The major differences between the two, to my eye, are the following.

(1) Ben and I have different working definitions of intelligence, which determine our different research goals. Mine is "to adapt with insufficient knowledge and resources", and his is "to achieve complex goals in complex environments". Consequently, he puts more stress on external capacity, while I put more stress on internal integrity.

(2) The above difference also leads us to different research methodologies. To maximize performance, Ben has been trying to integrate different techniques, including reasoning, evolutionary computing, and neural networks. By contrast, I have been trying to minimize the design, using a single technique to achieve as much as possible. The technique I found is a new reasoning system, and until it reaches the boundary of its capacity, I have no plan to add another technique.

(3) Though both NARS and Novamente do reasoning, and their approaches are similar when compared with other reasoning systems, Ben and I have been arguing for years about the right way to calculate degree of belief. Roughly speaking, Ben thinks the right way to go is to base the calculation on probability theory, while I believe probability theory is inappropriate for this case, and would rather explore a way to go beyond the theory.
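[For readers unfamiliar with this dispute, here is a minimal sketch of the NARS side of it. In NARS a degree of belief is a (frequency, confidence) pair derived from evidence counts, rather than a single probability. The function names and the choice of evidential horizon K below are illustrative, not taken from any particular NARS release.]

```python
K = 1.0  # "evidential horizon": a personality parameter of the system

def truth_value(w_plus, w):
    """Map (positive evidence, total evidence) to (frequency, confidence)."""
    frequency = w_plus / w if w > 0 else 0.5
    confidence = w / (w + K)  # never reaches 1 under finite evidence
    return frequency, confidence

def revise(ev1, ev2):
    """Pool two independent bodies of evidence (the NARS revision rule)."""
    return (ev1[0] + ev2[0], ev1[1] + ev2[1])

# Two sources: 3-of-4 positive and 1-of-2 positive pool to 4-of-6
f, c = truth_value(*revise((3, 4), (1, 2)))
```

[Note that confidence here measures amount of evidence, not a second-order probability; that is one place where the two camps part ways.]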

In summary, I feel that I'm more radical in theory --- I think Ben is still too conservative by staying closer to traditional theories like probability theory, set theory, model-theoretic semantics, and so on. On the other hand, Ben is more radical in engineering --- he thinks I'm too conservative by staying with reasoning only.

I won't restart a debate on the above (1) and (3) --- I don't think we have much new to say on those issues. I'll clarify my position on (2) in a separate email, to avoid mixing issues in a single thread.

Pei


----- Original Message ----- From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, September 28, 2004 9:47 AM
Subject: RE: [agi] Computational Feasibility and NARS




James,

>> * Similarly, pure logical reasoning systems like NARS are capable of
>> general intelligence only when supplied with infeasibly much computing
>> power

> I don't think this follows.

> I would make this assertion about classical logical reasoning systems,
> but non-axiomatic systems like NARS are inherently scalable.  If the
> amount of computing power required for NARS is "infeasible", it is
> because it was poorly engineered in implementation, not because it is
> mandated by the algorithmic nature of such systems.

> Why axiomatic reasoning systems are intractable while non-axiomatic
> reasoning systems are most certainly tractable is a subtlety in the
> mathematics that seems to be lost on many people every time discussions
> of it happen -- I've had this come up many times.  Non-axiomatic models
> allow efficient representation and information coding modes that have
> no analog in axiomatic models.  This seems to get glossed over,
> probably because discussions of these things never leave the very high
> conceptual level.

James, Pei and I worked closely together for several years, so it's definitely not true, in our case, that our discussions never left the very high conceptual level ;)

My point was one about dynamics rather than representation.

My point really hinges on the definition of "pure logical reasoning
systems." Novamente is, in a sense, a logic-based system (built on
probabilistic term logic), but it contains significant components that are
fundamentally based on mathematics other than formal logic. So I don't
consider Novamente a "pure logical reasoning system," even though formal
logic does play a significant role in it.


The point I was raising about NARS, as it's currently formulated, is that it
contains no inference control mechanisms that are capable of, all in one go,
constructing large compound logical terms. This is because its control
mechanisms are pretty closely tied to the rules of logic that drive its
reasoning, and the rules of logic as defined in NARS tend to be local in
character -- i.e., each rule combines a couple of terms into a new term.
Using this kind of control mechanism, complex compound terms will be built
up only incrementally rather than all at once. I believe that this kind of
incremental-agglomeration approach to building complex terms isn't adequate
in general, though it's surely useful for many purposes.
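[The local, pairwise character Ben describes can be sketched as a toy illustration -- this is not NARS code; `combine` simply stands in for whatever two-premise rule fires on a given cycle:]

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    name: str

def combine(a, b):
    """Stand-in for a local two-premise rule: two terms in, one compound out."""
    return Term(f"({a.name} & {b.name})")

# Building the compound (((a & b) & c) & d) takes three separate local
# steps; a non-incremental mechanism would propose the whole structure
# in one move.
a, b, c, d = (Term(x) for x in "abcd")
compound = combine(combine(combine(a, b), c), d)
```

[Each call adds exactly one layer, so a compound of n components needs n-1 control decisions, which is the incremental-agglomeration pattern at issue.]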


Now, it's definitely possible to insert non-incremental
compound-term-creation methods into the NARS framework --- but then, my
hypothesis is that the conceptual and mathematical basis of these methods
will need to be something other than formal logic.  Formal logic doesn't
contain the concepts needed to derive such methods.

My suspicion is that, if Pei ever builds NARS into a complete implemented
system and tries to do complex higher-order reasoning with it, he'll run
into this problem, and will then add some non-incremental control mechanisms
into his system. (I also think that, at that point, he'll replace his
induction and abduction truth value formulas, but that's another story ;-)


[A side note: For those who aren't familiar with the long-running debates
between Pei Wang and myself, you should know that Pei and I have a lot of
respect for one another's AI approaches even though we don't agree on
everything. If I argue with Pei's ideas it's because, unlike most ideas in
the AI field, I actually consider them worth arguing with...]


>> * However, I think that evolutionary programming algorithms and logical
>> reasoning systems may both be incorporated as components of Artificial
>> General Intelligence systems that can achieve decent levels of AGI with
>> feasibly much computing power


> Again, I don't see how this follows.

> Hacking together multiple representations is inherently non-scalable in
> computer science, as it forces exponential complexities and really
> doesn't allow for any universal "friendly exponent" approximations.
> How this can be considered "computationally feasible" while something
> like NARS is not does not appear to square with any scalable software
> design theory that I'm familiar with.

I'm not proposing combining multiple representations; rather, I'm proposing
that multiple dynamic mechanisms need to exist, acting on the same
representation. Formal logic is a guide for constructing dynamic
mechanisms that incrementally construct new knowledge from existing
knowledge; but there is also a need for dynamic mechanisms that can make big
leaps beyond current knowledge, creating large compound terms speculatively
yet in a not entirely random way.


Novamente has a single knowledge representation but multiple dynamic
mechanisms updating it; some of these are logic-based and some
evolutionary-programming based.  But our logic and our evolutionary
programming are both probabilistically based so everything speaks the same
language; only one knowledge representation is needed.
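[Schematically, the "one representation, several dynamics" idea looks like the sketch below. All names here are hypothetical illustrations, not Novamente's actual API: one shared store of weighted links, updated both by a local logic-like agent and by a speculative agent that proposes new links in one leap.]

```python
import random

store = {("bird", "animal"): 0.9}  # shared representation: links with strengths

def inference_agent(store):
    """Local mechanism: derive one new link from two existing ones."""
    for (a, b), s1 in list(store.items()):
        for (b2, c), s2 in list(store.items()):
            if b == b2 and (a, c) not in store:
                store[(a, c)] = s1 * s2  # a crude chaining rule

def speculative_agent(store, rng):
    """Global mechanism: propose a brand-new link in a single step."""
    nodes = {x for pair in store for x in pair}
    a, c = rng.sample(sorted(nodes), 2)
    store.setdefault((a, c), 0.1)  # enters with low initial strength

rng = random.Random(0)
store[("animal", "mover")] = 0.8
inference_agent(store)           # adds ("bird", "mover") incrementally
speculative_agent(store, rng)    # proposes some link in one leap
```

[The design point is that both agents read and write the same structure, so their conclusions are mutually visible without any translation layer between representations.]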

> This isn't really about AGI at all, as these objections would apply to
> any kind of ordinary scalable systems software engineering.  I've often
> thought that half the problem with AGI wasn't the theory per se but the
> lack of good knowledge of what theoretically correct design of the
> abstract concepts should look like.  Too much ivory tower, not enough
> field engineer.  :-)

In general, I tend to agree with this point. To the extent that AGI requires scale, building an AGI with current technology requires a mastery of the art of building scalable software systems, which is not at all trivial, and it is not something that cognitive scientists or AI theorists often know a lot about.

However, if your objections are aimed at my own work, they kinda miss the
mark. The Novamente project is not ivory-tower, it's commercial, and
Novamente has been engineered for scalability by a team with experience
architecting real-world scalable software systems, both narrow-AI-based and
non-AI-related. Some folks have complained that some of the core code is
too efficiency-oriented in its reliance on low-level C design techniques;
but no one who has seen the code and detailed design has ever argued that it
wasn't well-engineered for scalability. The only complaints I've heard have
regarded the steep learning curve in getting used to the codebase, which is
related to the efficiency-orientation of the code. (Don't get me wrong, we
use objects and nice design patterns, but where there's been a compromise
between performance and ease-of-comprehension-for-the-novice, we've often
chosen performance, though we've hidden the performance tricks behind nice
interfaces wherever possible.)


Creating an architecture that supports multiple AI algorithms operating on a
common representation, in an efficient, scalable and maintainable way, has
not been an easy challenge, but I believe we've met it. Part of the key is
that we don't have THAT many AI algorithms --- all our different MindAgent
objects acting on our dynamic knowledge store are based on probability
theory: probabilistic term logic, Bayesian Optimization Algorithm for global
learning, and stochastic local search for some special cases. So we haven't
quite solved the problem of making an architecture that supports an
arbitrary number of generic AI algorithms operating on a common
representation. We started out that way, and then wound up doing a lot of
tuning of system components based on the specific AI algorithms in the
Novamente design.


-- Ben



-------
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]



