Bruno, there are so many misleading or unclear statements
in your post - I do not even know where to start. I'll insert a few
comments below.
> Subject: Re: Predictions & duplications
> From: Marchal <[EMAIL PROTECTED]>
> Juergen Schmidhuber wrote:
> >We need a prior probability distribution on possible histories.
> OK. I agree with that. But of course we differ on the meaning of
> "possible histories". And we tackle also the "prior probability"
> in quite different ways.
> >Then, once we have observed a past history, we can use Bayes' rule to
> >compute the probability of various futures, given the observations. Then
> >we predict the most probable future.
> It seems to me that Bayes' rule is what we *need* to build a plausible
> *past*; concerning the future, and even the present, I prefer
> a theory, or a philosophy (or religion), hoping it can shed light on
> the unknown.
> >The algorithmic TOE paper discusses several formally describable priors
> >computable in the limit. But for the moment let me focus on my favorite,
> >the Speed Prior.
> Mmh... You know that there exists a class of infinitely speedable
> programs (and theories), and even that this class corresponds to a
> superclass of universal machines. This entails that the UD,
> the Universal Dovetailer, which I sometimes like to call
> the 'splashed Universal Turing Machine'
> (which btw makes UD* a sort of denotational semantics of the UTM),
> could generate and execute quicker and quicker versions of itself.
> This means you need a non-universal ontology.
> To help others understand the point, I
> propose to postpone discussing it until after I give the solution of the
> problem in "diagonalisation 1":
> >Assume the Great Programmer has a resource problem, just like any
> >other programmer.
> You told Wei Dai not to take the Great Programmer too seriously.
> I am not sure I can imagine the Great Programmer having a resource
> problem. Unless you are a finitist! Knowing that the UTM emulating
> the UD will use an unbounded part of its tape ...
> How could the GP have a resource problem when he is the one
> building universes in which resource problems can happen (apparently)?
> Are you not presupposing some physical structure in advance somewhere?
You can be a GP. Write a program that computes all universes
computable in the limit. Add storage in case of storage overflow.
Still, you as a GP have a resource problem. At some point it will
get harder and harder for you to keep computing all the things
you'd like to compute.
The beings evolving in your universes won't be able to deduce much about
the physics of your world in which your computer is built. Just like I
cannot say much about the environment of the particular GP who programmed
us. That's where religion starts, and where algorithmic TOEs stop.
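Being a GP in this sense is easy to sketch. Here is a minimal toy dovetailer in Python (my own illustration; the universe programs are stubbed out as step counters). Note how the total work grows quadratically in the number of rounds - the resource problem in miniature:

```python
# A toy dovetailer, analogous to the GP's "program that computes all
# universes": round r admits a new program and gives one step to each
# program admitted so far. Program semantics are stubbed out as step
# counters -- this only illustrates the resource bookkeeping.

def dovetail(rounds):
    """Run `rounds` rounds of dovetailing.
    Returns (steps_per_program, total_steps)."""
    steps = []
    total = 0
    for r in range(1, rounds + 1):
        steps.append(0)            # admit a new program this round
        for i in range(r):         # one step for each admitted program
            steps[i] += 1
            total += 1
    return steps, total

steps, total = dovetail(100)
print(total)   # 100 * 101 / 2 = 5050 total instructions after 100 rounds
```

Every admitted program keeps getting steps forever, yet each new universe slows all of them down: the GP's work per round grows without bound.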
> >According to the Speed Prior, the harder it is to compute
> >something, the less likely it gets. More precisely, the cumulative prior
> >probability of all data whose computation costs at least O(n) time is
> >at most inversely proportional to n.
> Very interesting idea. I would say: the harder it is to compute
> something FROM here and now, the less likely it gets from here and now.
Same thing. The Speed Prior leads to conditional probabilities of futures,
given the past.
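The quoted tail bound can be sanity-checked numerically under a simplifying assumption of mine: give phase i of a FAST-style computation total probability mass 2^-i. Data whose computation costs at least n steps cannot appear before phase ceil(log2 n), so its cumulative mass is bounded by a geometric tail of order 1/n:

```python
import math

def tail_mass(n, max_phase=60):
    """Total mass of phases that can first produce data costing >= n steps,
    assuming (toy model) phase i carries probability mass 2**-i."""
    first = math.ceil(math.log2(n))
    return sum(2.0 ** -i for i in range(first, max_phase))

for n in (8, 64, 1024):
    # the tail mass tracks 2/n, i.e. it is inversely proportional to n
    print(n, tail_mass(n), 2.0 / n)
```

This is only an illustration of the shape of the bound, not a derivation; the point is that the cumulative mass of expensive data decays like 1/n.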
> And I would
> add that this is only true if your neighborhood is less complex than
> yourself. I really believe your idea is interesting for some low-level
> computational physics. But when agents, including yourself, enter the
> play, it's another matter, because it is intrinsically harder to compute
> things then.
I do not understand. But please do _not_ elaborate.
> >To evaluate the plausibility of this, consider your own computer: most
> >processes just take a few microseconds, some take a few seconds, few
> >take hours, very few take days, etc...
> In the work of the GP, that is in UD*, most processes take billions
> of googolplex kalpa ...
Not if the GP has a resource problem, just like any programmer
of virtual realities has a resource problem.
Again: forget all the esoteric undecidability stuff, which does not
really matter here, and be pragmatic and try to look at things from the
perspective of some observer being computed as part of a virtual reality
that you programmed. The observer may get smart and correctly suspect
you have a resource problem, and can make nontrivial predictions from
there, using the Speed Prior.
> A lot of processes engaged by the UD simply do not stop.
> Some are interesting and perhaps Turing universal, like the process
> producing better and better approximations of the Mandelbrot set.
The Speed Prior does not say that any process stops. It just expresses
the assumption: the more something costs, the less likely it gets.
> >We do not know the Great Programmer's computer and algorithm for computing
> >universe histories and other things. Still, we know that he cannot do
> >it more efficiently than the asymptotically fastest algorithm.
> Are you really sure about that? If the GP is really *the* GP,
> existing by Church's Thesis, I think it follows from Blum's Speed-up
> Theorem that it has no asymptotically fastest algorithm.
Blum's theorem is completely irrelevant here. Do not
mix up undecidability results with the pragmatic
issue of predicting the future, given past observations.
I strongly encourage you to read section 6, especially the subsections
"Fast Computation of Finite and Infinite Strings" and
"FAST: The Most Efficient Way of Computing Everything",
to understand why there is indeed a fastest algorithm.
> But you are talking of "our little program" generated and executed
> by the UD perhaps.
> If this one has an asymptotically fastest algorithm, are you sure it
> still can emulate a UTM?
Sure. In fact, one of the asymptotically fastest UTM programs that computes
everything is exactly the almost trivial one in the 1997 book chapter:
the first program is run for one instruction every second
step, the second for one instruction every second of
the remaining steps, and so on. You do not believe it is the
asymptotically fastest algorithm? Then read
"Fast Computation of Finite and Infinite Strings".
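The interleaving just described can be written down in a few lines. In this sketch (mine, not the chapter's code), program i ends up with a 2^-i share of all instructions, which is what makes each program's slowdown a mere constant factor:

```python
def schedule(total_steps):
    """Return the (1-based) index of the program that runs at each step:
    program 1 runs every second step, program 2 every second of the
    remaining steps, and so on."""
    out = []
    for step in range(1, total_steps + 1):
        i = 1
        while step % 2 == 0:      # program index = trailing zeros + 1
            step //= 2
            i += 1
        out.append(i)
    return out

seq = schedule(16)
print(seq)   # [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, 5]
```

Program 1 gets half of all steps, program 2 a quarter, and so on; every program still receives unboundedly many instructions.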
> I bet the "universe" is universal.
> Also: how could we know and test we are in a quick or slow version of
> that little program. Is that not (absolutely) undecidable.
No, that's exactly where the Speed Prior comes in. According to this prior,
the slow programs computing the same thing contribute less to its
probability mass. It is explained in the chapter on
"Speed Prior-Based Inductive Inference".
> Should that program really be run? If yes, are you not presupposing
> again a physical universe?
> >The Speed Prior reflects properties of precisely this optimal algorithm.
> >It turns out that the best way of predicting future y, given past x,
> >is to minimize the (computable) Kt-complexity of (x,y). As observation
> >size grows, the predictions converge to the optimal predictions.
> As observation grows, knowledge grows, and we grow, making everything
> more complex than before. I guess you mean "optimal predictions" at
> some level. With comp, the more knowledge grows, the more ignorance ...
Huh? No, please do _not_ elaborate.
> >To address Bruno Marchal's questions: Many of my duplications in parallel
> >universes with identical past x will experience different futures y.
> >None of my copies knows a priori which one it is. But each does know:
> >Those futures y with high Kt(x,y) will be highly unlikely; those with
> >low Kt(x,y) won't.
> It is indeed hard to write a little program which generates a long
> Kolmogorov-Chaitin incompressible 01 string.
> But, as I told you earlier, it is quite easy to write a little
> program which generates them all. (It is the essence of the
> everything idea, imo.)
As you told me earlier? Huh?
That's one of the points of the 1997 book chapter.
The thing is: when you generate them all, and assume that all are
equally likely, in the sense that all beginnings of all strings are
uniformly distributed, then you cannot explain why the regular universes
keep being regular. The random futures are then just as likely as the
regular ones. So you NEED something additional to explain the ongoing
regularity.
You need something like the Speed Prior, which greatly favors regular
futures over others.
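The point can be made concrete with a computable compressor as a crude stand-in for description length (zlib below; this is my proxy, not Kt-complexity): under any prior that decays with description length, the regular continuation of a regular past beats a random one.

```python
import random
import zlib

random.seed(0)
past = b"01" * 500                                   # a regular history
regular_future = past + b"01" * 100                  # the pattern continues
random_future = past + bytes(random.choice(b"01") for _ in range(200))

# zlib-compressed length as a crude, computable proxy for description length
len_reg = len(zlib.compress(regular_future, 9))
len_rand = len(zlib.compress(random_future, 9))
print(len_reg < len_rand)   # True: the regular future has the shorter code
```

Under a uniform prior over continuations, both futures would be equally likely; only a complexity- or speed-penalizing prior favors the regular one.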
> Take for example the iterated duplication experience/experiment.
> It can be automated by a very simple program. After 64 iterations
> there will be about 1.84467 x 10^19 agents, and most of them will have
> an incompressible 01-history. With the comp indeterminism it is
> much more likely you will get such a really random string
> (independently of the fact that you will be unable to prove it is
> really random). Those with computable strings will be exceptional, so
> that, if those agents work together they will consider (even with
> Bayes) that the simple self-multiplying algorithm is the simplest
> and shortest explanation for those appearances of randomness.
But don't you see? Why does a particular agent, say, yourself, with a
nonrandom past, have a nonrandom future? Why is your computer still there
after one second, although in a truly random world it would immediately
dissolve? Why do pencils keep falling down instead of up, when the
futures where they fall up are just as likely?
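The arithmetic in the quoted duplication experiment is easy to verify: 64 binary duplications yield 2^64 histories, and a standard counting argument (my addition, not from the post) shows that only a tiny fraction of them compress by even 10 bits:

```python
# 64 binary self-duplications: one agent per 64-bit history.
n_agents = 2 ** 64
print(n_agents)                          # 18446744073709551616, ~1.84e19

# Counting argument: fewer than 2**54 binary strings are shorter than
# 54 bits, so at most 2**54 of the 64-bit histories can be compressed
# by 10 or more bits.
compressible_fraction = 2 ** 54 / 2 ** 64
print(compressible_fraction)             # 2**-10, i.e. under 0.1 percent
```

So both sides agree that almost all histories are incompressible; the disagreement is over which prior weights them.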
To repeat, the concept of the machine that computes all universe
histories is NOT by itself sufficient to explain why our future stays
regular. The additional concept of the Speed Prior explains it though.