Hi Cliff,

I'm not good at math -- I can't follow the AIXI materials and I don't
know what Solomonoff induction is. So it's unclear to me how a
certain goal is mathematically defined in this uncertain, fuzzy
universe.
In AIXI you don't really define a "goal" as such.  Rather, you have
an agent (the AI) that interacts with a world and, as part of that
interaction, the agent gets occasional reward signals.  The agent's
job is to maximise the amount of reward it gets.
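
Just to make that loop concrete, here's a rough Python sketch (the
names and the crude tallying agent are my own invention, not Hutter's
formalism) of an agent that interacts with a world and learns to
collect reward:

    # Rough sketch of the agent/environment loop described above.  The
    # names (ToyEnvironment, GreedyAgent) are made up for illustration;
    # a real AIXI agent would replace the crude tallying below with
    # Solomonoff-style prediction over all computable environments.
    import random

    class ToyEnvironment:
        """Hidden rule: action 1 earns reward; the observation is just noise."""
        def step(self, action):
            observation = random.randint(0, 1)
            reward = 1 if action == 1 else 0
            return observation, reward

    class GreedyAgent:
        """Tries both actions for a while, then repeats the best earner."""
        def __init__(self, explore_steps=10):
            self.totals = [0, 0]
            self.explore_steps = explore_steps
            self.t = 0
            self.last_action = 0

        def act(self, observation, reward):
            self.totals[self.last_action] += reward   # credit previous action
            if self.t < self.explore_steps:
                action = self.t % 2                   # crude exploration
            else:
                action = self.totals.index(max(self.totals))
            self.t += 1
            self.last_action = action
            return action

    env, agent = ToyEnvironment(), GreedyAgent()
    obs, reward, total = 0, 0, 0
    for _ in range(100):
        action = agent.act(obs, reward)
        obs, reward = env.step(action)
        total += reward
    print("total reward:", total)   # close to 95 out of 100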

So, if the environment contains me and I show the AI chess positions,
interpret its outputs as the moves that the AI wants to make, and
then give the AI a reward whenever it wins... then you could say
that the "goal" of the system is to win at chess.

Equally, we could mathematically define the relationship between
the input data, the output data and the reward signal for the AI.
This would be a mathematically defined environment, and again we
could interpret part of it as being the "goal".
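
For example, here's a toy environment written down explicitly (an
invented example, not one from Marcus's papers); the "goal" is
whatever behaviour the reward rule happens to favour, in this case
computing the parity of the input bits:

    # A toy "mathematically defined" environment: the input/output/reward
    # relation is written down explicitly, and the implied "goal" is
    # simply to output the parity of the input bits.
    import random

    def reward_rule(observation, action):
        """The explicit, computable relation between input, output and reward."""
        return 1 if action == sum(observation) % 2 else 0

    observation = [random.randint(0, 1) for _ in range(4)]
    action = sum(observation) % 2            # an agent that has learned the rule
    print(reward_rule(observation, action))  # always 1 once the rule is learned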

Clearly the relationship between the input data, the output data and
the reward signal has to be in some sense computable for such a
system to work (I say "in some sense" because the environment doesn't
have to be deterministic; it just has to have computationally
compressible regularities).  That might seem restrictive, but if it
weren't the case then AI on a computer would simply be impossible,
as there would be no computationally expressible solution anyway.
It's also pretty clear that the world that we live in does have a
lot of computationally expressible regularities.
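
A quick way to see what I mean by compressible regularities (just a
toy demonstration with an off-the-shelf compressor, nothing
AIXI-specific):

    # Rough illustration of "computationally compressible regularities":
    # data generated by a simple rule compresses well, random noise doesn't.
    import os, zlib

    regular = b"01" * 5000          # produced by a tiny rule
    noise = os.urandom(10000)       # no rule to exploit

    print(len(zlib.compress(regular)))  # tiny compared to 10000
    print(len(zlib.compress(noise)))    # roughly 10000, maybe a bit more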


What I'm assuming, at this point, is that AIXI and Solomonoff
induction depend on operating in a "somehow predictable" universe -- a
universe with some degree of regularity, so that its data is to some
extent "compressible".  Is that more or less correct?
Yes, if the universe is not "somehow predictable" in the sense of
being "compressible" then the AI will be screwed.  It doesn't have
to be perfectly predictable; it just can't be random noise.


And in that case, "goals" can be defined by feedback given to the
system, because the desired behaviour patterns it induces from the
feedback *predictably* lead to the desired outcomes, more or less?
Yeah.


I'd appreciate it if someone could tell me whether I'm right or wrong
on this, or point me to some plain-English resources on these issues,
should they exist.  Thanks.
The work is very new and there aren't, as far as I know, any other
texts on the subject, just Marcus Hutter's various papers.
I am planning on writing a very simple introduction to Solomonoff
Induction and AIXI before too long that leaves out a lot of the
maths and concentrates on the key concepts.  Aside from being a good
warm-up before I start working with Marcus soon, I think it could
be useful, as I feel that the real significance of his work is being
missed by a lot of people out there due to all the math involved.

Marcus has mentioned that he might write a book about the subject
at some point, but seemed to feel that the area needed more time to
mature before then, as there is still a lot of work to be done and
important questions to explore... some of which I am going to be
working on :)


I should add, the example you gave is what raised my questions: it
seems to me an essentially untrainable case because it presents a
*non-repeatable* scenario.
In what sense is it untrainable?  The system learns to win at chess.
It then starts getting punished for winning and switches to losing.
I don't see what the problem is.


If I were to give an AGI a 1,000-page book, and on the first 672
pages was written the word "Not", it may predict that the 673rd page
will also contain the word "Not".  But I could choose to make that
page blank, and in that scenario, as in the above, I don't see how
any algorithm, no matter how clever, could make that prediction
(unless it included my realtime brain scans, etc.)
Yep, even an AIXI super AGI isn't going to be psychic.  The thing is
that you can never be 100% certain based on finite evidence.  This is
a central problem with induction.  Perhaps in ten seconds gravity will
suddenly reverse and start to repel rather than attract.  Perhaps
gravity as we know it is just a physical law that only holds for the
first 13.7 billion years of the universe and then reverses?  It seems
very, very unlikely, but we are not 100% certain that it won't
happen.
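
If you want a number on that, Laplace's rule of succession (a toy
stand-in for the full Solomonoff mixture) gives the idea:

    # Laplace's rule of succession: after n identical observations the
    # probability given to one more is (n + 1) / (n + 2) -- high, but
    # never actually 1.
    n = 672
    print((n + 1) / (n + 2))   # ~0.9985, so a blank page 673 still gets ~0.15%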

Cheers
Shane


