I appreciate Juergen's view. In essence he is assuming a nonuniform
distribution on the ensemble of descriptions, as though the
descriptions were produced by the FAST algorithm. This is
perhaps the same as assuming a concrete universe.

In my approach (which didn't have this technical nicety), the ensemble
of descriptions obeyed a uniform distribution.

The ultimate observed distribution is obtained by equivalence classing
descriptions according to the observer's interpretation. If the
observer is a universal Turing machine, the Universal Prior results.
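As a toy illustration of that last step (my own sketch, not anything from
Juergen's paper): feed a uniform distribution over descriptions through an
interpreting machine, and each program p of length l contributes 2^-l to the
weight of its output. With a universal machine this gives the Universal Prior;
the hypothetical two-opcode "machine" below is of course far from universal,
but it already shows compressible outputs picking up extra weight from their
short descriptions.

```python
from itertools import product

def run(p):
    """Toy prefix machine (hypothetical, for illustration only).
    opcode 0: literal -> '0' | 3-bit length n | n literal bits
    opcode 1: repeat  -> '1' | symbol bit | 3-bit count c -> symbol*(c+1)
    Returns the output string, or None if p is not a valid program."""
    if not p:
        return None
    if p[0] == '0':
        if len(p) < 4:
            return None
        n = int(p[1:4], 2)
        if len(p) != 4 + n:      # program must be exactly this long (prefix-free)
            return None
        return p[4:4 + n]
    else:
        if len(p) != 5:
            return None
        return p[1] * (int(p[2:5], 2) + 1)

# Uniform distribution over descriptions, pushed through the machine:
# every valid program p of length l contributes 2**-l to its output's weight.
weight = {}
for l in range(1, 12):
    for bits in product('01', repeat=l):
        x = run(''.join(bits))
        if x is not None:
            weight[x] = weight.get(x, 0.0) + 2.0 ** -l

# '0000' has a short repeat-description as well as its literal one,
# so it ends up weighted above the "random-looking" '0110'.
print(weight['0000'], weight['0110'])
```

The point of the sketch is only that the weighting is induced by the observer's
interpretation, not put in by hand: the input distribution over descriptions is
uniform throughout.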

Now humans appear to equivalence class random strings in a most
un-Turing-machine-like way (i.e. to a human, random (incompressible)
strings usually contain no information). This may or may not be true of
conscious beings in general.
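A rough operational version of the parenthetical point (my illustration,
using zlib as a crude stand-in for a universal description method): an
incompressible string admits no description shorter than itself, yet it is
precisely the string a human would dismiss as carrying no information.

```python
import os
import zlib

structured = b"the quick brown fox " * 50   # 1000 bytes, highly regular
random_bytes = os.urandom(1000)             # 1000 bytes of (near-)true randomness

# The regular string compresses to a small fraction of its length;
# the random string is essentially incompressible.
print(len(zlib.compress(structured)), len(zlib.compress(random_bytes)))
```

So by the Turing-machine accounting the random string is maximally complex,
while by the human accounting it is maximally uninteresting, which is the
un-Turing-machine-like equivalencing referred to above.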

Occam's razor still follows as a kind of theorem regardless of whether
the initial distribution is uniform or has the character of being
generated by FAST. However, the FAST distribution would weight
pseudorandom descriptions far more heavily than truly random strings
(assuming the observer had some magical way of distinguishing them),
which is a different result from that of an initial uniform distribution.
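The distinction matters because a pseudorandom string, though incompressible
to any practical compressor, has a very short generating program (a PRNG plus
a seed), so a FAST-style weighting ranks it with the simple strings rather
than the random ones. A sketch (the seed and generator here are my own
arbitrary choices):

```python
import random
import zlib

# A pseudorandom string: its complete description is just these few
# lines of code plus the seed 42.
rng = random.Random(42)
pseudo = bytes(rng.randrange(256) for _ in range(1000))

# zlib finds no structure: to a compressor the string is "random"...
compressed = zlib.compress(pseudo)
print(len(compressed))

# ...but the generating description above is only a few dozen bytes long,
# so under a FAST-generated distribution this string would carry far more
# weight than a truly random string of the same length.
```

Under an initial uniform distribution, by contrast, the pseudorandom and truly
random strings of a given length start out on an equal footing, and only the
observer's interpretation can separate them.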

Aside from the philosophical issue of whether one likes a concrete
universe (a la Schmidhuber) or not, there is a vague possibility of
testing the question through the prediction Schmidhuber makes that
apparently random sequences found in nature will turn out to be
pseudorandom.

I suppose I raise the banner for the opposing camp - uniform
distribution over the ensemble, no concrete universe, and true
randomness being required for consciousness and free will.


> Where does all the randomness come from?
> Is there an optimally efficient way of computing all the "randomness" in
> all the describable (possibly infinite) universe histories?  Yes, there
> is. There exists a machine-independent ensemble-generating algorithm
> called FAST that computes any history essentially as quickly as this
> history's fastest algorithm. Somewhat surprisingly, FAST is not slowed
> down much by the simultaneous computation of all the other histories.
> It turns out, however, that for any given history there is a speed
> limit which greatly depends on the history's degree of randomness.
> Highly random histories are extremely hard to compute, even by the optimal
> algorithm FAST. Each new bit of a truly random history requires at least
> twice the time required for computing the entire previous history.
> As history size grows, the speed of highly random histories (and most
> histories are random indeed) vanishes very quickly, no matter which
> computer we use (side note: infinite random histories would even require
> uncountable time, which does not make any sense). On the other hand,
> FAST keeps generating numerous nonrandom histories very quickly; the
> fastest ones come out at a rate of a constant number of bits per fixed
> time interval.
> Now consider an observer evolving in some universe history. He does not
> know in which, but as history size increases it becomes less and less
> likely that he is located in one of the slowly computable, highly random
> universes: after sufficient time most long histories involving him will
> be fast ones.
> Some consequences are discussed in
> http://www.idsia.ch/~juergen/toesv2/node39.html
> Juergen Schmidhuber
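
Incidentally, taking the quoted doubling claim at face value, the time to
produce n truly random bits satisfies T(n) >= 2*T(n-1), hence
T(n) >= 2^(n-1)*T(1), so the bit rate n/T(n) decays like n/2^(n-1). A quick
numerical check of that recurrence (my sketch; units are arbitrary):

```python
# Lower bound from the quoted claim: each new truly random bit costs at
# least as much time as the entire computation so far, so T(n) >= 2*T(n-1).
T = 1.0                     # time to compute the first random bit
rates = []
for n in range(1, 31):
    rates.append(n / T)     # bits per unit time after n bits
    T *= 2                  # the next bit at least doubles the total time

# The rate n / 2**(n-1) vanishes rapidly, whereas the quote notes that the
# fastest nonrandom histories keep a constant bits-per-time rate under FAST.
print(rates[0], rates[9], rates[29])
```

This is just the arithmetic behind the claim that the speed of highly random
histories vanishes as history size grows.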

Dr. Russell Standish                     Director
High Performance Computing Support Unit, Phone 9385 6967                    
UNSW SYDNEY 2052                         Fax   9385 6965                    
Australia                                [EMAIL PROTECTED]             
Room 2075, Red Centre                    http://parallel.hpc.unsw.edu.au/rks
