I comment again on a relatively old post by Juergen Schmidhuber.

>Juergen: The thing is: when you generate them all, and assume that all are
>equally likely, in the sense that all beginnings of all strings are
>uniformly distributed, then you cannot explain why the regular universes 
>keep being regular.  The random futures are then just as likely as the 
>nonrandom ones.
>So you NEED something additional to explain the ongoing regularity.
>You need something like the Speed Prior, which greatly favors regular 
>futures over others.
>> Bruno: Take for example the iterated duplication experience/experiment. 
>> It can be automated by a very simple program. After 64 iterations
>> there will be 1.84467 x 10^19 agents, and most of them will have
>> an incompressible 01-history. With the comp indeterminism it is
>> much more likely you will get such a really random string 
>> (independently of the fact that you will be unable to prove it is
>> really random). Those with computable strings will be exceptional, so
>> that, if those agents work together, they will consider (even with
>> Bayes) that the simple self-multiplying algorithm is the simplest
>> and shortest explanation for those randomness appearances.
>Juergen: But don't you see? Why does a particular agent, say, 
>yourself, with a
>nonrandom past, have a nonrandom future? 

But we do have a random future. Just send a sequence of particles in the
state 1/sqrt(2)( up + down ) through an "analyser", and write 0 or 1
according to whether each particle goes through. Both QM+comp (Everett)
and comp alone (I will not elaborate!) explain this first person (plural)
randomization by relative self-multiplication.
As I said, the simple self-multiplying algorithm is the simplest and
shortest explanation for those randomness appearances.
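The arithmetic behind that claim can be sketched in a few lines. This is my illustration, not code from the post, and the function names are mine: 64 duplications yield 2^64 agents, and the standard counting argument bounds the fraction of 01-histories that can be compressed at all.

```python
# Illustrative sketch (names are mine): the counting argument behind
# "most 01-histories after iterated duplication are incompressible".

def agents_after(iterations: int) -> int:
    """Each duplication doubles the agents: one 0-branch, one 1-branch."""
    return 2 ** iterations

def compressible_fraction_bound(n_bits: int, saving: int) -> float:
    """Upper bound on the fraction of n-bit 01-histories that any
    description at least `saving` bits shorter can single out: there are
    fewer than 2**(n_bits - saving) such short descriptions, against
    2**n_bits histories, so the fraction is below 2**(-saving)."""
    return 2 ** (n_bits - saving) / 2 ** n_bits

print(agents_after(64))                      # 18446744073709551616, i.e. ~1.84467 x 10^19
print(compressible_fraction_bound(64, 10))   # below 0.001: compressible histories are exceptional
```

So after 64 duplications fewer than one history in a thousand can be described even 10 bits more briefly than by writing it out; the vast majority look random to their owners.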

>Why is your computer still there
>after one second, although in a truly random world it would immediately

No one says reality is *only* a truly random
realm, especially when third person reality is given by UD*, the trace
of the Universal Dovetailer. Arithmetically it is just the set of true
Sigma_1 sentences. That's our (third person) atomic truth. Of course
those truths are "UD"-provable.
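The dovetailing idea itself is easy to sketch. The following is a toy of my own, with trivial stand-in "programs" rather than a real enumeration of all machines: at stage n, programs 0 through n each receive one more computation step, so every program is eventually started and gets unboundedly many steps.

```python
# Toy sketch (my illustration, not the actual UD): triangular dovetailing.
from typing import Iterator, List, Tuple

def program(i: int) -> Iterator[Tuple[int, int]]:
    """Stand-in for the i-th program: it just yields (program, step).
    A real Universal Dovetailer would enumerate and run all programs."""
    step = 0
    while True:
        yield (i, step)
        step += 1

def dovetail(num_stages: int) -> List[Tuple[int, int]]:
    """At stage n, give one step to each of programs 0..n.
    The returned trace is a finite prefix of the (infinite) trace UD*."""
    running = {}
    trace = []
    for n in range(num_stages):
        for i in range(n + 1):
            if i not in running:
                running[i] = program(i)
            trace.append(next(running[i]))
    return trace

print(dovetail(3))  # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

The triangular schedule is what makes dovetailing on infinitely many computations (including those on the reals, approximated digit by digit) possible without any single computation blocking the others.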

>Why do pencils keep falling down instead of up, when the
>futures where they fall up are just as likely?

Because that set of machine-accessible arithmetical truths, as viewed
by the machines themselves (this introduces the modalities), is
highly structured.

Indeed the UD has this nasty habit of dovetailing on the reals, so that 
randomization is at work. But some equilibrium between randomization
and local lawfulness has to emerge in the limit (where first person 
points of view are eventually defined, in Plato's Heaven). 
Our own stability can
only rely on the randomization of the details of our stories, a
randomization which we should necessarily observe if we look at ourselves
*below* our (apparently common) substitution level. Although
this gives a sort of necessary prior (the need of a "quantization of the 
stories", transforming H into exp(-iH)), I prefer, following my naive idea,
to interview directly the sound Universal Machine.

I define "probability of p = 1" in (Peano) Arithmetic by Bew('p) & Con('p).
In this way p is true in all consistent extensions (the Bew('p) part) and
there is (the bet part) at least one consistent extension (the Con('p) part).
I restrict the arithmetical interpretation of p to the arithmetical Sigma_1
sentences (the arithmetical UD). This gives an arithmetical
quantization of p by []<>p (with []p = Bew('p) & Con('p)).
It indeed obeys a sort of quantum logic.
As always, Con('p) (i.e. <>p) = -Bew('-p) (i.e. -[]-p).
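Restating those definitions side by side (my transcription of the notation above into LaTeX; 'p' denotes the Goedel number of p, and nothing is added beyond what the text states):

```latex
% "probability of p = 1" and the quantization, as defined above:
\[
  \Box p \;:=\; \mathrm{Bew}(\text{`}p\text{'}) \land \mathrm{Con}(\text{`}p\text{'}),
  \qquad
  \mathrm{Con}(\text{`}p\text{'}) \;=\; \lnot\,\mathrm{Bew}(\text{`}\lnot p\text{'})
  \quad (\text{i.e. } \Diamond p = \lnot\Box\lnot p),
\]
\[
  \text{and, for } p \text{ a } \Sigma_1 \text{ sentence, the quantization is }
  p \;\mapsto\; \Box\Diamond p .
\]
```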

But I fear you don't believe in any form of "strict" indeterminism,
neither comp nor QM. Do you?

