Re: [agi] AIXI and Solomonoff induction

2003-02-21 Thread Philip Sutton
Ed,

 From my adventures in physics, I came to the conclusion that my
 understanding of the physical world had more to do with 1. My ability
 to create and use tools for modeling, i.e. from the physical tools of
 an advanced computer system to my internal abstraction tools like a
 new theorem of group algebra that helps me organize the particle world,
 2. My internal mechanism for modeling, i.e. my internal neural
 structure, than it had to do with any 'physical reality'. 

Isn't the deterministic universe a working hypothesis that drives a lot of 
technological development and science?  In other words, we expect to 
find regularities and causal webs once we know enough about the 
system?

It seems to me that we can't tell at this point whether we live in a 
universe that is deterministic all the way down.  The permanently 
inevitable limits on our perception, modelling skills and depth of 
knowledgebase prevent us from developing a fully deterministic model 
for all issues based on modelling all details of the universe down to the 
finest detail.  So for most questions we must simplify and work with 
black boxes at all sorts of levels.  This means that the statistical 
probabilistic approach works best for lots of issues, but as our 
knowledgebase, perception and modelling skills improve we can apply 
approximately deterministic approaches to more things.

My guess is that if, as we or AGIs improve our knowledgebase, 
perception and modelling skills, we find that 'we' can apply 
approximately deterministic models to explain more and more 
things that previously had to be grappled with using statistical 
probabilistic approaches, then that strengthens the value of 
the deterministic-universe working hypothesis.  But of course, since we 
can never model the whole universe in full detail while we are within 
the universe itself, we will never know whether at bottom it really is 
deterministic or probabilistic - this is the Pooh bear problem.  Is there 
really cheese at the bottom of the honey jar?  Can't tell till you get 
there.

I once skimmed a book that claimed we are actually artifacts living in 
some other being's simulation - which was supposedly why the 
Newtonian world of day-to-day life gives way to the probabilistic 
quantum world.  :)

Cheers, Philip

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] AIXI and Solomonoff induction

2003-02-20 Thread Ed Heflin
 In fact Physics are not random. But let's go a little further,
 and here's what I want to say.

 Physics are deterministic. Deterministic means that given a
 system in one state, the following state can be inferred by
 applying physics rules. It also works backwards: a given state
 has only one previous state - but that is more difficult to see.
 This can be seen more clearly by reductio ad absurdum: if Physics were not
 deterministic, then things would happen without any cause.
 The first author to talk about this was Descartes.

 So, if we had all the information of the universe in a specific
 instant, then all the past and future could be calculated.


What faith...and if I were a TV evangelist this would probably be a good
thing, but as a physicist, this turns out to be a bad thing.  Nothing in
physics should be taken on faith, except of course your initial assumptions
;-)

Faith aside, the idea that Physics are deterministic...is both a statement
of faith and a falsifiable conjecture beyond Newtonian Physics.

Let me explain.  The story of the evolution of determinism in physics is a
story that begins with Newton who, perhaps working off the philosophical
legacy of Descartes and the mathematical influence of Leibniz, was able
to formulate his Universal Law of Gravitation and in the process
single-handedly solve the so-called two-body problem, e.g. the orbital
motion of the Earth around the Sun.  Unfortunately, he was unable to do the
same for the so-called three-body problem, e.g. the orbital motion of the
Earth and Mars around the Sun.

The key in the strictly deterministic Newtonian framework is to find a
sufficient number of conserved quantities: energy, momentum, angular
momentum, the Laplace-Runge-Lenz vector and the like.  These conserved
quantities are used to specify and solve the integral equations that
characterize the two-body problem.  Even though some of the best minds of
Newton's era, Newton included, and subsequent eras worked on the three-body
problem, it remained unsolved...and remains so today.

The reason for this is that there are not enough conserved quantities in the
three-body problem to make it ``integrable''.  If there are enough
conserved quantities, and thereby enough integrals, the motion problem is
completely specified and solved.  The motion is then described as
quasi-periodic motion, or interdependent periodic motions, specified by a
motion in phase space that lies on a multi-dimensional torus.
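The role those conserved quantities play can be sketched numerically.  Below is a toy two-body integration (Sun fixed at the origin, units chosen so G*M = 1; all values are illustrative, not real orbital parameters) showing that a symplectic integrator keeps the energy integral essentially constant, which is exactly the kind of "integral" structure the three-body problem lacks enough of:

```python
import math

# Toy 2D two-body problem (Sun fixed at origin, G*M = 1): integrate a
# planet's orbit with the leapfrog / velocity-Verlet scheme and check that
# energy, one of the conserved quantities that make the two-body problem
# integrable, stays (nearly) constant.  Illustrative values only.

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

x, y, vx, vy = 1.0, 0.0, 0.0, 1.1   # slightly elliptical orbit
dt = 0.001
e0 = energy(x, y, vx, vy)
ax, ay = accel(x, y)
for _ in range(20000):               # roughly two orbital periods
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay     # half-kick
    x += dt * vx;        y += dt * vy            # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay     # half-kick
drift = abs(energy(x, y, vx, vy) - e0)
print(drift)    # tiny: the energy integral is (numerically) conserved
```

Add a third body and no such complete set of integrals exists, which is the heart of the non-integrability discussed above.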

Applying your definition of deterministic - meaning that given a system in
one state, the following state can be inferred by applying physics rules -
to such a problem forces us to dilute, or water down, the problem by
restricting the states considered or by including constraints.  Solutions to
the constrained three-body problem, e.g. specifying that the orbital
motion of the Earth and Mars around the Sun lies in a plane, are in fact
``integrable'' and often used in guidance systems for interplanetary
spacecraft trajectories (along with empirical models to help specify the
nine-body problem ;-).

Furthermore, it has been shown that the deterministic framework of Newton's
Universal Law of Gravitation is an approximation for a more complete
theory of gravitation via the tensor-deterministic framework of Einstein's
General Relativity.  However, I don't recommend that you try to solve
Einstein's General Relativity just to get your spacecraft to go where you
want it to go.  Even with advanced software tools like Maple, MATLAB,
Mathematica, etc., it might take hours to crank out solutions for simple
problems, and sometimes problems that were thought straightforward turn out
to yield solutions only after you pay the ultimate price in
patience...depending on the speed of your computer, this could mean
receiving your answer as an epitaph on your gravestone. R.I.P.

No, it seems to me that Newton's brilliant world model with strict
deterministic physics has little import beyond the two-body problem - after
all, who wants to play 2-ball billiards?  But deterministic physics aside,
more than anything else, Newton's legacy is one characterized by a 1.
Mathematical...co-founder of calculus, 2. Experimental...remember the apple
;-), and 3. Universal...father of modern physics...approach.  This still
remains the cornerstone of modern physics today, although a lot about the
actual physical models has changed since.  And whatever shortcomings Newton
had in his deterministic physics Weltansicht, he more than compensated for
with his logical, methodical, and scientific approach...things I'm sure
would have made Aristotle or René proud.

What was left in Newton's wake, quite literally, was a mathematical approach
to a quasi-deterministic world that the engineers, experimentalists, and
empiricists of 18th-century England couldn't quite figure out.  Here, I'm
referring to the early Englishmen trying to design more efficient steam
engines (so they could deliver more power to the factories of England and
steam the so-called industrial revolution 

Re: [agi] AIXI and Solomonoff induction

2003-02-15 Thread Shane Legg

The other text book that I know is by Cristian S. Calude, the Prof. of
complexity theory that I studied under here in New Zealand.  A new
version of this book just recently came out.  Going by the last version,
the book will be somewhat more terse than the Li and Vitanyi book and
thus more appropriate for professional mathematicians who are used to
that sort of style.  The Li and Vitanyi book is also a lot broader in
its content thus for you I'd recommend the Li and Vitanyi book which
is without doubt THE book in the field, as James already pointed out.

There should be a new version (third edition) of Li and Vitanyi sometime
this year which will be interesting.  Li and Vitanyi have also written
quite a few introductions to the basics of the field many of which you
should be able to find on the internet.

Cheers
Shane


The Li and Vitanyi book is actually intended to be a graduate-level text in
theoretical computer science (or so it says on the cover) and is formatted
like a math textbook.  It assumes little and pretty much starts from the
beginning of the field; you should have no problems accessing the content.

It is a well-written book, which is a good thing since it is sort of THE
text for the field with few other choices.

Cheers,

-James Rogers
 [EMAIL PROTECTED]



Re: [agi] AIXI and Solomonoff induction

2003-02-15 Thread Shane Legg

Hi Cliff,

Sorry about the delay... I've been out sailing watching the America's
Cup racing --- just a pity my team keeps losing to the damn Swiss! :(

Anyway:

Cliff Stabbert wrote:


SL This seems to be problematic to me.  For example, a random string
SL generated by coin flips is not compressible at all so would you
SL say that it's alive?

No, although it does take something living to flip the coins; but
presumably it's non-random (physically predictable by observing
externals) from the moment the coin has been flipped.  The decision to
call heads or tails however is not at all as *easily* physically
predictable, perhaps that's what I'm getting at.  But I understand
your point about compressibility (expanded below).


Well I could always build a machine to flip coins...  Or one that pulls
lottery balls out of a spinning drum, for that matter.

Is such a thing predictable, at least in theory?  I have read about
this sort of thing before but to be perfectly honest I don't recall
the details... perhaps the Heisenberg principle makes it impossible
even in theory.  You would need to ask a quantum physicist I suppose.



more and more quickly: the tides are more predictable than the
behaviour of an ant, the ants are more predictable than a wolf, the
wolves are more predictable than a human in 800 B.C., and the human in
800 B.C. is more predictable than the human in 2003 A.D.

In that sense, Singularity Theory seems to be a statement of the
development of life's (Kolmogorov?) complexity over time.


Well it's hard to say actually.  An ant is less complex than a human,
but an ant really only makes sense in the context of the nest that it
belongs to and, if I remember correctly, the total neural mass of some
ant nests is about the same as that of a human brain.  Also whales
have much larger brains than humans and so are perhaps more complex
in some physical sense at least.

A lot of people in complexity believed that there was an evolutionary
pressure driving systems to become more complex.  As far as I know there
aren't any particularly good results in this direction -- though I don't
exactly follow it much.

Cheers
Shane

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]


Re: [agi] AIXI and Solomonoff induction

2003-02-12 Thread Shane Legg

Hi Cliff,


So Solomonoff induction, whatever that precisely is, depends on a
somehow compressible universe.  Do the AIXI theorems *prove* something
along those lines about our universe,


AIXI and related work does not prove that our universe is compressible.
Nor do they need to.  The sun seems to come up most days, the text in
this email is clearly compressible, laws of chemistry, biology, physics,
economics and so on seem to work.  So in short: our universe is VASTLY
compressible.



or do they *assume* a
compressible universe (i.e. do they state IF the universe is somehow
compressible, these algorithms (given infinite resources) can figure
out how)?


They assume that the environment (or universe) that they have to deal
with is compressible.  If it wasn't they (and indeed any computer based
AI system) would be stuffed.  However that's not a problem as the real
world is clearly compressible...



Assuming the latter, does that mean that there is a mathematical
definition of 'pattern'?  As I stated I'm not a math head, but with
what little knowledge I have I find it hard to imagine pattern as a
definable entity, somehow.


Yes, there is a mathematical definition of 'pattern' (in fact there
are a few but I'll just talk about the one that is important here).
It comes from Kolmogorov complexity theory and is actually quite
simple.  Essentially it says that something is a pattern if it has
an effective description (i.e. computer program for a Turing machine)
that is significantly shorter than just describing the thing in full
bit by bit.  So for example:

For x = 1 to 1,000,000,000,000
   print 1
Next

Describes a string of a trillion 1's.  The description (i.e. the
length of the program above in bits) is vastly shorter than a
trillion bits, and so a string of a trillion 1's is highly compressible
and has a strong pattern.

On the other hand if I flipped a coin a trillion times and used
that to generate a string of 0's and 1's, it would be exceedingly
unlikely that the resulting string would have any description much
shorter than just listing the whole thing 01000010010010101...
Thus this is not compressible and has no pattern -- it's random.

There is a bit more to the story than that but not a lot more.
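As a crude, scaled-down illustration of the two examples above, an off-the-shelf compressor can stand in for Kolmogorov complexity (which is itself uncomputable): a highly patterned string shrinks enormously, while a random one barely shrinks at all.

```python
import os
import zlib

# zlib is only a weak stand-in for Kolmogorov complexity, but it makes the
# same point at a smaller scale: strong pattern -> short description.

patterned = b"1" * 1_000_000            # "a million 1's" instead of a trillion
random_bits = os.urandom(1_000_000)     # stand-in for a million coin flips

print(len(zlib.compress(patterned)))    # a few thousand bytes at most
print(len(zlib.compress(random_bits)))  # roughly a million bytes: no pattern
```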



OK, let's say you reward it for winning during the first 100 games,
then punish it for winning / reward it for losing during the next 100,
reward it for winning the next 100, etc.  Can it perceive that pattern?


Clearly this pattern is computationally expressible and so it's no
problem at all.  Of course it will take the AI a while to work out
the rules of the game, and on game 101 it will be surprised to be
punished for winning.  And probably for game 102 and a few more.
After a while it will lose a game and realise that it needs to start
losing games.  At game 201 it will probably again get a surprise
when it's punished for losing and will take a few games to realise
that it needs to start winning again.  By game 301 it will suspect
that it needs to start losing again and will switch over very quickly.
By game 401 it would probably switch automatically as it will see
the pattern.  Essentially this is just another rule in the game.

Of course these are not exact numbers, I'm just giving you an idea
of what would in fact happen if you had an AIXI system.
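A toy sketch of this regime-switching (emphatically not AIXI, just a hypothetical little learner made up for illustration) shows the same qualitative behaviour: after two surprises it infers the 100-game period and thereafter switches proactively, the way the AI above "sees the pattern" by game 401.

```python
# Toy illustration of the flipping chess reward described above: the
# environment rewards winning for 100 games, then rewards losing for 100,
# and so on.  A simple learner infers the period from its first two
# surprises and then anticipates every later flip.

def reward(game, action):                    # action: "win" or "lose"
    good = "win" if (game // 100) % 2 == 0 else "lose"
    return 1 if action == good else -1

belief = "win"                               # what the agent thinks pays off
flip_games = []                              # games where a surprise occurred
surprises = 0
for game in range(1000):
    # once two flips have been seen, infer the period and switch proactively
    if len(flip_games) == 2:
        period = flip_games[1] - flip_games[0]
        if game > flip_games[1] and game % period == 0:
            belief = "lose" if belief == "win" else "win"
    if reward(game, belief) < 0:             # punished: a surprise
        surprises += 1
        if len(flip_games) < 2:
            flip_games.append(game)
        belief = "lose" if belief == "win" else "win"
print(surprises)                             # 2: after that it anticipates
```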



Given infinite resources, could it determine that I am deciding to
punish or reward a win based on a pseudo-random (65536-cyclic or
whatever it's called) random number generator?


Yes.  It's pseudo-random and thus computationally expressible
and so again it's no problem for AIXI.  In fact AIXItl would
solve this just fine with only finite resources.
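A sketch of why pseudo-randomness is no obstacle in principle: the hidden parameters of a small linear congruential generator (the classic pseudo-random scheme; all the parameter values below are toy choices for illustration) can be brute-forced from a handful of outputs, after which every future "random" decision is predicted exactly.

```python
# Recover a hidden LCG x' = (a*x + c) mod M by brute force from a few
# observed outputs, then predict the next output exactly.  Toy parameters.

M = 65536                       # modulus (cf. the "65536-cyclic" generator)
secret_a, secret_c, seed = 75, 74, 1234

def lcg(a, c, x):
    return (a * x + c) % M

# Observe a few outputs of the hidden generator
obs = []
x = seed
for _ in range(5):
    x = lcg(secret_a, secret_c, x)
    obs.append(x)

# Brute-force (a, c) consistent with the consecutive observations
found = None
for a in range(M):
    c = (obs[1] - a * obs[0]) % M
    if all(lcg(a, c, obs[i]) == obs[i + 1] for i in range(len(obs) - 1)):
        found = (a, c)
        break

a, c = found
prediction = lcg(a, c, obs[-1])           # predict the 6th output
actual = lcg(secret_a, secret_c, obs[-1]) # what the generator really emits
print(prediction == actual)               # True
```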



And if the compressibility of the Universe is an assumption, is
there a way we might want to clarify such an assumption, i.e., aren't
there numerical values that attach to the *likelihood* of gravity
suddenly reversing direction; numerical values attaching to the
likelihood of physical phenomena which spontaneously negate like the
chess-reward pattern; etc.?


This depends on your view of statistics and probability.  I'm a
Bayesian and so I'd say that these things depend on your prior
and how much evidence you have.  Clearly the evidence that gravity
stays the same is rather large, and so the probability that it's
going to flip is extremely super hyper low and the prior doesn't
matter too much...



In fact -- would the chess-reward pattern's unpredictability *itself*
be an indication of life?  I.e., doesn't Ockham's razor fail in the
case of, and possibly *only* in the case of, conscious beings*?


I don't see what you are getting at here.  You might need to explain
some more. (I understand Ockham's razor, you don't need to explain
that part; actually it comes up a lot in the theory behind Solomonoff
induction and AIXI...)

Thanks for your comments.

Cheers
Shane



Re: [agi] AIXI and Solomonoff induction

2003-02-12 Thread Shane Legg
Cliff Stabbert wrote:


[On a side note, I'm curious whether and if so, how, lossy compression
might relate.  It would seem that in a number of cases a simpler
algorithm than one that exactly expresses the behaviour could be valuable
in that it expresses 95% of the behaviour of the environment being
studied -- and if such an algorithm can be derived at far lower cost
in a certain case, it would be worth it.  Are issues like this
addressed in the AIXI model or does it all deal with perfect
prediction?]


Yes, stuff like this comes up a lot in MDL work which can be viewed
as a computable approximation to Solomonoff induction.  Perhaps at
some point a more computable version of AIXItl might exist that is
similar in this sense.

Some results do exist on the relationship between Kolmogorov complexity
and lossy compression but I can't remember much about it off the top of
my head (I'm only just getting back into the whole area myself after a
number of years doing other things!)
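Since MDL is mentioned above as a computable approximation to Solomonoff induction, here is a toy caricature of the two-part MDL idea (the bit costs below are made-up illustrative numbers, not a real coding scheme): choose the model, here a repeat period, that minimises model bits plus exception bits.

```python
# Two-part MDL caricature: description length = (bits for the repeating
# unit) + (bits to list the exceptions).  A short model wins unless it
# forces too many exceptions -- the lossy-vs-exact trade-off in miniature.

def description_length(data, period):
    unit = data[:period]
    exceptions = sum(1 for i, ch in enumerate(data) if ch != unit[i % period])
    # 8 bits per character of the unit, ~18 bits (index + char) per exception
    return 8 * period + 18 * exceptions

data = "abcabcabcabcabcabcabXabc"   # period 3 with one corrupted character
best = min(range(1, len(data) + 1), key=lambda p: description_length(data, p))
print(best)   # 3: paying for one exception beats any longer literal model
```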



What I'm getting at is an attempt at an external definition or at
least telltale of conscious behaviour as either that which is not
compressible or that which although apparently compressible for some
period, suddenly changes later or perhaps that which is not
compressible to less than X% of the original data where X is some
largeish number like 60-90.


This seems to be problematic to me.  For example, a random string
generated by coin flips is not compressible at all so would you
say that it's alive?  Back in the mid-90's, when complexity theory
was cool for a while after chaos theory, there was a lot of talk
about the edge of chaos.  One way to look at this is to say that
living systems seem to have some kind of fundamental balance between
randomness and extreme compressibility.  To me this seems obvious and
I have a few ideas on the matter.  Many others investigated the subject
but as far as I know never got anywhere.

Chaitin, one of the founders of Kolmogorov complexity theory, did
some similar work some time ago:

http://citeseer.nj.nec.com/chaitin79toward.html



The reason I'm thinking in these terms is because I suspected Ockham's
razor to relate to the compressibility idea as you stated; and I've


Sounds to me like you need to read Li and Vitanyi's book on
Kolmogorov complexity theory :)

http://www.cwi.nl/~paulv/kolmogorov.html

Cheers
Shane



Re: [agi] AIXI and Solomonoff induction

2003-02-11 Thread Cliff Stabbert
Tuesday, February 11, 2003, 9:44:31 PM, Shane Legg wrote:

SL However even within this scenario the concept of fixed goal is
SL something that we need to be careful about.  The only real goal
SL of the AIXI system is to get as much reward as possible from its
SL environment.  A goal is just our description of what that means.
SL If the AI gets reward for winning at chess then quickly it will get
SL very good at chess.  If it then starts getting punished for winning
SL it will then quickly switch to losing at chess.  Has the goal of
SL the system changed?  Perhaps not.  Perhaps the goal always was:
SL Win at chess up to point x in time and then switch to losing.
SL So we could say that the goal was always fixed, it's just that up
SL to point x in time the AI thought the goal was to always win and it
SL wasn't until after point x in time that it realised that the real
SL goal was actually slightly more complex.  In which case does it make
SL any sense to talk about AIXI as being limited by having fixed goals?
SL I think not.

Perhaps someone can clarify some issues for me.

I'm not good at math -- I can't follow the AIXI materials and I don't
know what Solomonoff induction is.  So it's unclear to me how a
certain goal is mathematically defined in this uncertain, fuzzy
universe. 

What I'm assuming, at this point, is that AIXI and Solomonoff
induction depend on operation in a somehow predictable universe -- a
universe with some degree of entropy, so that its data is to some
extent compressible.  Is that more or less correct?

And in that case, goals can be defined by feedback given to the
system, because the desired behaviour patterns it induces from the
feedback *predictably* lead to the desired outcomes, more or less?

I'd appreciate if someone could tell me if I'm right or wrong on this,
or point me to some plain english resources on these issues, should
they exist.  Thanks.

--
Cliff




Re: [agi] AIXI and Solomonoff induction

2003-02-11 Thread Shane Legg

Hi Cliff,


I'm not good at math -- I can't follow the AIXI materials and I don't
know what Solomonoff induction is.  So it's unclear to me how a
certain goal is mathematically defined in this uncertain, fuzzy
universe. 

In AIXI you don't really define a goal as such.  Rather you have
an agent (the AI) that interacts with a world and as part of that
interaction the agent gets occasional reward signals.  The agent's
job is to maximise the amount of reward it gets.

So, if the environment contains me and I show the AI chess positions
and interpret its outputs as being moves that the AI wants to make
and then give the AI a reward whenever it wins... then you could say
that the goal of the system is to win at chess.

Equally we could also mathematically define the relationship between
the input data, output data and the reward signal for the AI.  This
would be a mathematically defined environment and again we could
interpret part of this as being the goal.

Clearly the relationship between the input data, the output data and
the reward signal has to be in some sense computable for such a system
to work (I say in some sense as the environment doesn't have to be
deterministic; it just has to have computationally compressible
regularities).  That might seem restrictive, but if it wasn't the case
then AI on a computer would simply be impossible as there would be no
computationally expressible solution anyway.  It's also pretty clear
that the world that we live in does have a lot of computationally
expressible regularities.



What I'm assuming, at this point, is that AIXI and Solomonoff
induction depend on operation in a somehow predictable universe -- a
universe with some degree of entropy, so that its data is to some
extent compressible.  Is that more or less correct?


Yes, if the universe is not somehow predictable in the sense of
being compressible then the AI will be screwed.  It doesn't have
to be perfectly predictable; it just can't be random noise.



And in that case, goals can be defined by feedback given to the
system, because the desired behaviour patterns it induces from the
feedback *predictably* lead to the desired outcomes, more or less?


Yeah.



I'd appreciate if someone could tell me if I'm right or wrong on this,
or point me to some plain english resources on these issues, should
they exist.  Thanks.


The work is very new and there aren't, as far as I know, alternate
texts on the subject, just Marcus Hutter's various papers.
I am planning on writing a very simple introduction to Solomonoff
Induction and AIXI before too long that leaves out a lot of the
maths and concentrates on the key concepts.  Aside from being a good
warm up before I start working with Marcus soon, I think it could
be useful as I feel that the real significance of his work is being
missed by a lot of people out there due to all the math involved.

Marcus has mentioned that he might write a book about the subject
at some time but seemed to feel that the area needed more time to
mature before then as there is still a lot of work to be done and
important questions to explore... some of which I am going to be
working on :)



I should add, the example you gave is what raised my questions: it
seems to me an essentially untrainable case because it presents a
*non-repeatable* scenario.


In what sense is it untrainable?  The system learns to win at chess.
It then starts getting punished for winning and switches to losing.
I don't see what the problem is.



If I were to give to an AGI a 1,000-page book, and on the first 672
pages was written the word Not, it may predict that on the 673rd page
will be the word Not.  But I could choose to make that page blank,
and in that scenario, as in the above, I don't see how any algorithm,
no matter how clever, could make that prediction (unless it included
my realtime brainscans, etc.)


Yep, even an AIXI super AGI isn't going to be psychic.  The thing is
that you can never be 100% certain based on finite evidence.  This is
a central problem with induction.  Perhaps in ten seconds gravity will
suddenly reverse and start to repel rather than attract.  Perhaps
gravity as we know it is just a physical law that only holds for the
first 13.7 billion years of the universe and then reverses?  It seems
very very very unlikely, but we are not 100% certain that it won't
happen.

Cheers
Shane


