James Rogers wrote:
> On 11/2/02 9:29 AM, "Ben Goertzel" <[EMAIL PROTECTED]> wrote:
> > I think there *is* a "general problem of intelligence", and it's an
> > unsolvable problem unless one has infinite computational resources.
>
>
> I don't follow this.  What does infinite computational resources
> have to do
> with it?

OK, let me try to explain...

First of all, to mathematically formalize the AGI problem, one needs to
formally define "intelligence."

There are many ways to do this.  But, for many purposes, any definition of
intelligence that has the general form "Intelligence is the maximization of
a certain quantity, by a system interacting with a dynamic environment" can
be handled in roughly the same way.  It doesn't always matter exactly what
the quantity being maximized is (whether it's "complexity of goals
achieved," for instance, or something else).  My own definition of
intelligence as
"the ability to achieve complex goals in complex environments" -- which I've
also formalized mathematically -- fits in here.
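
To make this concrete: in Hutter's reinforcement-learning setup (my
gloss of his framework; see his papers for the exact formulation), an
agent following policy pi in an environment mu receives a reward r_t at
each cycle, and the quantity being maximized is the expected cumulative
reward

    V(pi, mu) = E[ r_1 + r_2 + ... + r_m ]

Intelligence then amounts to achieving high V(pi, mu) across a broad
class of environments mu.  Any definition of the general form above can
be cast in roughly this shape.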

Let's use the term "behavior-based maximization criterion" to characterize
the class of definitions of intelligence indicated in the previous
paragraph.

So, suppose one has some particular behavior-based maximization criterion
in mind.  Then Marcus Hutter's work on the AIXI system gives a software
program that will be able to achieve intelligence according to the given
criterion.

Now, there's a catch: this program may require infinite memory and an
infinitely fast processor to do what it does.  But Hutter also gives a
variant of AIXI which avoids this catch, by restricting attention to
programs of bounded length L.  Loosely speaking, the AIXItl variant will
provably be as intelligent, according to the given maximization criterion,
as any other computer program of length <= L, within a constant
multiplicative factor and a constant additive factor.
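
A slightly more precise rendering of the result (my paraphrase from
memory; see Hutter's papers for the exact statement): for any time
bound t and length bound l, AIXItl satisfies

    V(AIXItl) >= V(p)  for every program p with |p| <= l
                       and per-cycle runtime <= t

while AIXItl's own computation time per cycle is of order t * 2^l.
That 2^l blowup is exactly the "constant" overhead discussed next.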

Hutter's work draws on a long tradition of research into statistical
learning theory and algorithmic information theory, most notably
Solomonoff's early work on induction and Levin's work on computational
measure theory.   At the present time, though, this work is more exciting
theoretically than pragmatically.  The "constant factor" in his theorem may
be very large, so that in practice, AIXItl is not really going to be a good
way to create an AGI software program.  In essence, what AIXItl is doing is
searching the space of all programs of length <= L, evaluating each one, and
finally choosing the best one and running it.  The "constant factors"
involved deal with the overhead of trying every other possible program
before hitting on the best one!
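
To get a feel for how hopeless that brute-force search is, here is a
back-of-the-envelope calculation in Python (illustrative only):

    # Number of distinct binary programs of length at most L bits:
    # sum of 2^k for k = 1..L, which equals 2^(L+1) - 2.
    for L in (10, 50, 100):
        count = 2 ** (L + 1) - 2
        print(f"L = {L:3d} bits: about 10^{len(str(count)) - 1} programs")

Even at L = 100 bits, far too short to express anything interesting,
the search space already has on the order of 10^30 members.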

A simple AI system behaving somewhat similarly to AIXItl could be built by
creating a program with three parts:

- The data store
- The main program
- The metaprogram

The operation of the metaprogram would be, loosely, as follows (a toy
code sketch in Python appears right after this list):

- At time t, place within the data store a record containing the complete
internal state of the system and the complete sensory input of the system.

- Search the space of all programs P of size |P| < L to find the one that,
based on the data in the data store, has the highest expected value for
the given maximization criterion.

- Install P as the main program.
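
For concreteness, here is a toy sketch of that three-part structure in
Python.  Every name in it is a stand-in I've invented for illustration;
it shows the shape of the loop, not a workable AGI:

    # Toy sketch of the three-part system described above.  Real
    # "programs" would be arbitrary code of length < L, and
    # expected_value() would estimate the actual maximization
    # criterion from the stored history.

    class DataStore:
        """Part 1: (internal state, sensory input) per time step."""
        def __init__(self):
            self.records = []

        def log(self, internal_state, sensory_input):
            self.records.append((internal_state, sensory_input))

    def enumerate_programs():
        """Stand-in for 'all programs P with |P| < L'.  Here, a tiny
        hand-picked pool; the real space has ~2**L members."""
        yield lambda x: x        # identity
        yield lambda x: x + 1    # increment
        yield lambda x: 2 * x    # double

    def expected_value(program, store):
        """Hypothetical scorer: estimate the criterion for a candidate
        program from the logged history (a dummy heuristic here)."""
        history = store.records[-10:] or [(0, None)]
        return sum(program(state) for state, _ in history)

    def metaprogram_step(store, internal_state, sensory_input):
        """Part 3: log the current state, search the candidate pool,
        and return the best candidate as the new main program."""
        store.log(internal_state, sensory_input)
        return max(enumerate_programs(),
                   key=lambda p: expected_value(p, store))

    # Part 2: the main program is whatever was last installed.
    store = DataStore()
    main_program = metaprogram_step(store, internal_state=1,
                                    sensory_input=0)
    print(main_program(5))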

Conceptually, the main value of this approach for AGI is that it solidly
establishes the following contention:

**If you accept any definition of intelligence of the general form
"maximization of a certain function of system behavior," then the problem
of creating AGI is basically a problem of dealing with the issues of space
and time efficiency.**

As with any mathematics-based conclusion, the conclusion only follows if one
accepts the definitions.  If someone's conception of intelligence
fundamentally can't be cast into the form of a behavior-based maximization
criterion, then these ideas aren't relevant for AGI as that person conceives
it.  However, I believe that the behavior-based maximization criterion
approach to defining intelligence is a good one, and hence that Hutter's
work is highly significant.

The limitations of these results should be understood, along with their
power.  For instance, consider Penrose's contention that non-Turing quantum
gravity computing (as allowed by an as-yet unknown noncomputable theory of
quantum gravity) is necessary for true general intelligence.  This idea is
not refuted by Hutter's results, because it's possible that

- AGI is in principle possible on ordinary Turing hardware, but

- AGI is only pragmatically possible, given the space and time constraints
imposed on computers by the physical universe, on quantum-gravity-powered
computer hardware.

I very strongly doubt this is the case, and Penrose has not given any
convincing evidence for such a proposition; but my point is merely that,
in spite of recent advances in AGI theory such as Hutter's work, we have
no way of ruling such a possibility out mathematically.

My own AGI work -- like yours, and that of other serious AGI
designers/builders on this list -- deals with ways of achieving reasonable
degrees of intelligence given reasonable amounts of space and time
resources.

> > With finite computational resources there are always going to be some
> > complex goals that one can achieve better than others....
>
>
> This is the "jack of all trades, ace of none" problem.  But a properly
> designed generally intelligent system should be adaptive enough
> that it can
> become highly tuned to solving specific classes of problems if
> that is what
> it is faced with on a regular basis.  Just like people.

Yes.  But given ANY finite system S, the class of all problems that S can
solve is a zero percentage of the total class of all possible problems that
are mathematically formulable.

And given any finite system S whose information capacity is significantly
less than that of the universe, the class of all problems that S can solve
is a VERY SMALL percentage of the total class of all possible problems
within the universe...
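
(The standard counting argument behind this is short: there are fewer
than 2^(L+1) programs of length <= L bits, but there are 2^n binary
strings of length n.  So the fraction of length-n objects that any
fixed system of size L can generate or compress is at most

    2^(L+1) / 2^n  ->  0  as n -> infinity

which is where the "zero percentage" comes from in the limit.)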

The adaptiveness you're talking about can increase the size of the class of
problems S can solve, but it can't overcome the fundamental limitations
implied by algorithmic information theory, i.e. that "you can't solve a
twenty-pound problem with a ten-pound AGI program."  [I'm paraphrasing
Chaitin's rendition of Godel's Theorem: "You can't prove a twenty-pound
theorem with a ten-pound formal system."]
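
For the record, Chaitin's precise result (as I recall it) runs: for any
consistent formal system F whose theorems can be computably enumerated,
there is a constant c_F, roughly the length in bits of a program that
enumerates F's theorems, such that F cannot prove "K(s) > c_F" for any
particular string s, where K denotes Kolmogorov complexity.  The
ten-pound system literally cannot certify twenty pounds of algorithmic
information.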

> > Hutter and Schmidhuber's mathematical approach to general intelligence
> > basically verifies this idea, in a more formal & theoretical way...
>
>
> I continue to like and follow their work, with some reservations.  My
> reservations mostly are of the nature that some implicit assumptions are
> made without qualification that I can state for a fact should be
> substantially different from the essential implied qualification.
>  Its great
> stuff, just subtly misleading in certain respects in that it causes you to
> ignore things that should be looked at more critically.
>
> Unfortunately, I'm not at liberty to talk about this (or a number of other
> things for that matter) in detail or I would.  I apologize if I
> can't fully
> explain some of the assertions I might make.  Arrgh.  :-/

Well, I think their work is of limited practical value for the reasons I
mention above, but you're obviously hinting at something else.  And since
you won't tell us, it's not a very interesting topic of conversation, huh ;)

ben
