Hi all,

Some of the discussion seems to take the tone of Novamente vs.
AIXI/AIXI-tl in practical terms. This simply doesn't make much
sense, as AIXI is just a theoretical model. It would be meaningful
to ask:

a) Is AIXI a good theoretical model of what it is to be an AGI? That
is, does it properly define the elusive concept of "general
intelligence", and does the AIXI system demonstrate that, at some
theoretical level, such a thing is both logically consistent and
amazingly powerful? Is it good enough to act as the very definition
of what it is to be a super powerful AGI? If the answer is "no",
then why not? If the answer is "yes", then in AIXI we have a precise
mathematical definition of what super AGI is, in which case:

b) How good a practical system is Novamente when viewed from the
perspective of the AIXI theoretical model of AGI?


There have been some comments made about AIXI having a fixed goal.
As Eliezer pointed out, there isn't some fixed goal built into
AIXI; it's just part of the system's analysis: *if* you had
some kind of fixed goal in mind and just threw the AIXI system
at it, how would it cope? ... and the answer is that it would
quickly work out what the goal was and then kick butt.

However, even within this scenario the concept of a "fixed goal" is
something we need to be careful about. The only real goal
of the AIXI system is to get as much reward as possible from its
environment; a "goal" is just our description of what that means.
If the AI gets reward for winning at chess, it will quickly get
very good at chess. If it then starts getting punished for winning,
it will quickly switch to losing at chess. Has the goal of
the system changed? Perhaps not. Perhaps the goal always was:
win at chess up to point x in time and then switch to losing.
So we could say that the goal was always fixed; it's just that up
to point x in time the AI thought the goal was to always win, and it
wasn't until after point x that it realised that the real
goal was actually slightly more complex. In which case, does it make
any sense to talk about AIXI as being limited by having fixed goals?
I think not.
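To make the point concrete, here's a toy sketch (my own illustration, not anything from Hutter's formalism) of a single fixed, time-dependent reward function with a hypothetical switch-over time X. A pure reward maximiser "switches goals" at step X without anything about its objective having changed:

```python
X = 100  # hypothetical switch-over time, purely illustrative

def reward(step, agent_won):
    """One fixed, time-dependent reward function: winning pays
    before step X, losing pays after."""
    if step < X:
        return 1 if agent_won else 0
    else:
        return 0 if agent_won else 1

def best_action(step):
    """A reward maximiser simply picks whichever outcome pays more now."""
    return "win" if reward(step, True) >= reward(step, False) else "lose"

print(best_action(10))   # win
print(best_action(500))  # lose
```

From the outside it looks like the agent's goal flipped at step X; from the inside, one fixed function was maximised throughout.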


Ben often says that AIXI isn't really that big a deal because it's
trivial to build a super powerful AI given infinite computational
resources. However, to the best of my knowledge, Hutter was the
first to actually do this and properly analyse the results: AIXI
is the first precisely defined and provably super powerful fully
general theoretical AI to be proposed and carefully analysed.
(Solomonoff only dealt with the more limited sequence prediction
problem.) In which case it seems, at least to me, that Hutter has
done something quite significant in terms of the theory of AI.
Is Hutter's work of practical use? That's an open question,
and only time will tell.
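For readers unfamiliar with the sequence prediction problem mentioned above, here is a brute-force caricature of the Solomonoff idea (an assumption-laden toy of mine, not Solomonoff's or Hutter's actual construction): hypotheses are repeating binary patterns, each given prior weight 2^-length, and the next bit is predicted from the posterior mixture of all hypotheses consistent with the data so far:

```python
def consistent(pattern, data):
    """Does repeating `pattern` reproduce the observed `data`?"""
    return all(data[i] == pattern[i % len(pattern)] for i in range(len(data)))

def predict_next(data, max_len=8):
    """Posterior-weighted prediction of the next bit, mixing over all
    repeating patterns up to max_len bits, weighted by 2^-length."""
    w0 = w1 = 0.0
    for n in range(1, max_len + 1):
        for p in range(2 ** n):
            pattern = [(p >> i) & 1 for i in range(n)]
            if consistent(pattern, data):
                weight = 2.0 ** -n  # shorter patterns get more prior mass
                if pattern[len(data) % n]:
                    w1 += weight
                else:
                    w0 += weight
    return 1 if w1 > w0 else 0

print(predict_next([0, 1, 0, 1, 0, 1]))  # 0 -- the short pattern "01" dominates
```

The real theory replaces "repeating patterns" with all computable hypotheses, which is exactly where the uncomputability comes from.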


Finally, about AIXI/AIXItl needing infinite resources: AIXI contains
an uncomputable function, so that's the end of that. AIXItl, however,
only requires a finite amount of resources to solve any given problem.
In general, though, AIXItl requires unbounded resources when we consider
its ability to face problems of unbounded complexity. Clearly this
will be true of any system that is able to effectively deal with
arbitrary problems -- it will require arbitrary resources. This is
not some special property of AIXItl; all systems in this class must
be like this. In particular, if Novamente is able to solve arbitrary
problems then it too must have access to arbitrary resources.
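As a rough illustration of why the resource requirement is unbounded (a toy accounting sketch of mine, not Hutter's actual bound, which is roughly of order t * 2^l): an AIXItl-style enumeration of all programs up to length l bits must consider on the order of 2^l candidates, so as the complexity bound l on the problems we face grows, so does the work:

```python
def programs_to_check(l):
    """Number of binary programs of length at most l bits
    that a brute-force enumeration would have to consider."""
    return sum(2 ** n for n in range(1, l + 1))  # equals 2^(l+1) - 2

# Work grows exponentially with the allowed program length.
for l in (8, 16, 24):
    print(l, programs_to_check(l))
```

Bounding l caps the resources; letting problem complexity grow without bound makes the resources grow without bound too, for this or any other fully general scheme.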

The flip side is that if we restrict the complexity of the problems
that AIXItl faces to be finite (in an appropriate way), then it only
ever requires finite resources, just like Novamente. The point of all
this is: the finiteness, or lack of it, of the resources AIXItl
requires is nothing especially bad, in the sense that no system can do
better in these terms. Thus talking about AIXItl being irrelevant
because it requires infinite resources doesn't make any sense, as
Novamente could not do any better in terms of requiring unbounded
resources when faced with arbitrary problems. You can, however, argue
that the way in which AIXItl's resource requirements grow as the
complexity of a problem increases is highly non-optimal compared to
Novamente's. Perhaps this is true; however, the purpose of AIXItl is
as a theoretical tool to study this class of AIs and their properties.
If we could prove that AIXItl, a variant, or even a completely
different model was optimal in this latter sense as well, then that
would be very cool indeed and another step towards a truly practical
super AI.

Ok... email is getting too long now....

bye

:)

Shane

-------
To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]
