Shane wrote:
> Ben often says that AIXI isn't really that big a deal because it's
> trivial to build a super powerful AI given infinite computational
> resources.  However to the best of my knowledge, Hutter was the
> first to actually do this and properly analyse the results: AIXI
> is the first precisely defined and provably super powerful fully
> general theoretical AI to be proposed and carefully analysed.
> (Solomonoff only dealt with the more limited sequence prediction
> problem)  In which case it seems, at least to me, that Hutter has
> done something quite significant in terms of the theory of AI.

Well, I feel that this work of Hutter's is conceptually trivial but
mathematically significant.

He has gotten the complicated formalism to cooperate, and rigorously proved
a theorem embodying something that is very intuitively obvious.

I suspect that in a couple decades, the formalism will be more refined, and
theorems similar to Hutter's will be provable in about 10 lines.  That is, I
don't think the theorems are very deep.  They're quite shallow.  They're
hard to prove only because the formalism involved is at an early phase, and
therefore overcomplicated.

This kind of situation has often been seen in the history of mathematics.
For example, consider differential geometry in the late 1800s.  Back then,
a lot of intuitively simple results about three-dimensional surfaces were
proved in extremely complicated and lengthy ways, using coordinate geometry.
Many of the results were intuitively obvious, but were just *bitches* to
prove in terms of the known calculational tools at the time.  An example is
the result that "great circles" are the only geodesics on the sphere.  This
is intuitively obvious, but the 19th-century proof was rather a pain,
because it was hard to get coordinate geometry to cooperate.  You just had
to do a lot of calculus.  Now, in the context of 20th-century differential
geometry, the proofs of many of these theorems are a few lines long.  The
simplicity of the proofs finally matches the intuitive obviousness of the
theorems, because the field of mathematics is mature.
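To illustrate (my own sketch, not part of the original exchange): in the modern language of differential geometry, the great-circle result really is only a few lines.  For a unit-speed geodesic $\gamma$ on the unit sphere $S^2 \subset \mathbb{R}^3$, the defining condition is that the acceleration is normal to the surface, and the normal at $\gamma(t)$ is $\gamma(t)$ itself:

```latex
\begin{aligned}
|\gamma|^2 = 1 &\;\Rightarrow\; \gamma \cdot \dot\gamma = 0
  \;\Rightarrow\; \gamma \cdot \ddot\gamma = -|\dot\gamma|^2 = -1, \\
\ddot\gamma \parallel \gamma &\;\Rightarrow\; \ddot\gamma = (\gamma \cdot \ddot\gamma)\,\gamma = -\gamma, \\
&\;\Rightarrow\; \gamma(t) = \cos(t)\,p + \sin(t)\,v,
  \qquad p \perp v,\; |p| = |v| = 1,
\end{aligned}
```

and the solution curve lies in the plane spanned by $p$ and $v$ through the origin, i.e. it is a great circle.  Compare that with pages of 19th-century coordinate calculus for the same fact.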

Not all theorems become very simple when the domain of mathematics in which
they live becomes mature.  Some theorems are fundamentally deep, and are
complex to prove even when all the concepts they involve have been fully and
elegantly worked out.  But Hutter's theorems, I feel, are not of this
nature.

So, yeah, intuitively it's trivial to build a super powerful AI given
infinite computational resources.  And I think the proof of this will be
trivial when the surrounding math concepts are formulated more nicely.  Now
the proof is complicated because the surrounding math concepts are immature
and expressed in overcomplicated ways.

And work like Hutter's is going to be part of the process of arriving at the
"right" formulations of the surrounding mathematical concepts regarding
intelligence, goals, etc.

Anyway, perhaps these comments have made my attitude clearer.

As an AGI designer, Hutter's work tells me NOTHING of any use.  In that
sense I feel it's "not that big a deal."  On the other hand, it's excellent
math/science, and it's part of a large process that may eventually lead to a
deep useful general theory of AGI.  In that sense it's certainly worthwhile.

> Is Hutter's work of practical use?  Well that's an open question
> and only time will tell.

I very seriously doubt it will be of direct practical use to anyone, but of
course I don't *know* that.

> Finally about AIXI/AIXItl needing infinite resources.  AIXI contains
> an uncomputable function so that's the end of that.  AIXItl however
> only requires a finite amount of resources to solve any given problem.
> However in general AIXItl requires unbounded resource when we consider
> its ability to face problems of unbounded complexity.  Clearly this
> will be true of any system that is able to effectively deal with
> arbitrary problems -- it will require arbitrary resources.  This is
> not some special property of AIXItl, all systems in this class must
> be like this, in particular if Novamente is able to solve arbitrary
> problems then it must too have access to arbitrary resources.

Yep, that's true.  The problem is that, in practice, AIXItl is going to
require way too many resources.  In practice, the difference between one
finite number and a vastly larger finite number is quite a meaningful
difference!!
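To make that concrete (a back-of-the-envelope sketch of my own, not from the exchange above): AIXItl's per-cycle computation time is bounded by something on the order of t * 2^l, where l bounds the length of the candidate programs and t bounds each program's runtime.  The factor 2^l is "finite," but even for a modest l it dwarfs any physical resource:

```python
# Hypothetical illustration: the "finite" constant in AIXItl's
# per-cycle time bound (order t * 2^l) for a modest program-length
# bound l, compared against a rough standard estimate of the number
# of atoms in the observable universe (~10^80).

l = 500                       # program-length bound in bits (chosen for illustration)
candidate_programs = 2 ** l   # number of programs AIXItl enumerates each cycle
atoms_in_universe = 10 ** 80  # rough standard estimate

# The enumeration factor alone exceeds any count of physical objects
# by an enormous margin, before t even enters the picture.
print(candidate_programs > atoms_in_universe)
print(len(str(candidate_programs)))  # number of decimal digits in 2**500
```

So "finite" here is doing a lot of work: the bound holds mathematically, but the constant makes the procedure physically irrelevant.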

> You can however argue
> that the way in which AIXItl requires resources as the complexity of
> a problem increases is highly non-optimal compared to Novamente.

Yeah.  To me that is quite obvious.

> Perhaps this is true, however the purpose of AIXItl is as a theoretical
> tool to study this class of AI's and their properties.  If we could
> prove that AIXItl or a variant or even a completely different model
> was optimal in this latter sense also then that would be very cool
> indeed and another step towards a truly practical super AI.

That would be significantly harder, and it might not be possible without a
real clarification of the foundational concepts, of the sort that would make
Hutter's existing theorems have 10-line proofs ;-)

But even so, if you proved some model was optimal in this latter sense "up
to a very large multiplicative or additive constant", that still wouldn't be
terribly interesting in practice.

The main thing standing between this sort of theory and practical
applicability is the presence of these extremely large constants, which
obscure (not too effectively!) absurdly impractical searches over huge
spaces.

-- Ben
