In summary, it would appear that Juergen is not disputing the use of
the UDA after all, but simply the use of the uniform measure as an
initial prior over the set of all descriptions. He argues that we
should use the Speed Prior instead.

Another way of putting it: he doesn't dispute the role of
indeterminism in generating observable complexity, only he holds that
the amount of complexity that can be generated is necessarily limited.

None of this is in conflict with Occam's razor, or with the
observational evidence to date. In fact, it is debatable whether it is
experimentally falsifiable at all: no sequence can be proven to be
truly random, since there is always the possibility that some
pseudo-random generator will be found to be the source.

                                        Cheers

[EMAIL PROTECTED] wrote:
> 
> 
> 
> Predictions & duplications
> 
> 
> > From: Marchal <[EMAIL PROTECTED]> Thu Oct  4 11:58:13 2001
> > [...]
> > You have still not explained to me how you predict your reasonable
> > next experience in the simple WM duplication. [...]
> > So, how is it that you talk as if you have an algorithm
> > capable of telling you in advance what your personal
> > experience in a self-duplication experience will be. [...]
> > Where am I wrong?
> > Bruno
> 
> 
> I will try again: how can we predict what's going to happen?
> 
> We need a prior probability distribution on possible histories.
> Then, once we have observed a past history, we can use Bayes' rule to
> compute the probability of various futures, given the observations. Then
> we predict the most probable future.
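The recipe above can be sketched numerically. This is a toy illustration of my own, not Juergen's code: the four candidate histories and their prior weights are made up; the point is only the Bayes step of conditioning a prior over histories on an observed prefix and predicting the most probable continuation.

```python
from fractions import Fraction

# Hypothetical prior over four possible 4-bit histories (made-up numbers).
prior = {
    "0000": Fraction(1, 2),
    "0011": Fraction(1, 4),
    "0101": Fraction(1, 8),
    "0110": Fraction(1, 8),
}

def predict(observed):
    """Return the most probable future given the observed past prefix."""
    # Bayes' rule: P(history | observed) is proportional to
    # P(observed | history) * P(history), and P(observed | history)
    # is 1 if the history extends the prefix, else 0.
    consistent = {h: p for h, p in prior.items() if h.startswith(observed)}
    total = sum(consistent.values())
    posterior = {h: p / total for h, p in consistent.items()}
    best = max(posterior, key=posterior.get)
    return best[len(observed):], posterior[best]

future, prob = predict("00")
print(future, prob)  # continuation "00" wins with posterior 2/3
```

With past "00", only the first two histories survive conditioning, and the heavier-weighted one dictates the prediction.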
> 
> The algorithmic TOE paper discusses several formally describable priors
> computable in the limit. But for the moment let me focus on my favorite,
> the Speed Prior.
> 
> Assume the Great Programmer has a resource problem, just like any other
> programmer. According to the Speed Prior, the harder it is to compute
> something, the less likely it gets. More precisely, the cumulative prior
> probability of all data whose computation costs at least O(n) time is
> at most inversely proportional to n.
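The tail bound can be checked numerically with a toy weighting of my own choosing: if each runtime t is assigned prior weight proportional to 1/t^2 (one simple way to realize the stated bound, not the Speed Prior's actual construction), then the total weight of all computations costing at least n steps falls off like 1/n.

```python
def tail_mass(n, horizon=10**6):
    """Total weight of all runtimes >= n, under toy weights 1/t^2."""
    return sum(1.0 / t**2 for t in range(n, horizon))

# n * tail_mass(n) stays roughly constant, i.e. the tail is ~ 1/n.
for n in (10, 100, 1000):
    print(n, n * tail_mass(n))
```

So under this weighting, doubling the runtime budget n halves the prior mass left in the tail, which is the shape of bound the Speed Prior imposes.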
> 
> To evaluate the plausibility of this, consider your own computer: most
> processes take just a few microseconds, some take a few seconds, a few
> take hours, very few take days, etc...
> 
> We do not know the Great Programmer's computer and algorithm for computing
> universe histories and other things. Still, we know that he cannot do
> it more efficiently than the asymptotically fastest algorithm.
> 
> The Speed Prior reflects properties of precisely this optimal algorithm.
> It turns out that the best way of predicting future y, given past x,
> is to minimize the (computable) Kt-complexity of (x,y).  As observation
> size grows, the predictions converge to the optimal predictions.
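A minimal sketch of that prediction rule, under heavy simplifying assumptions of my own: instead of a universal machine, the "programs" here are a tiny hand-made table of (length, runtime, output) triples, and Kt(x) is taken as the minimum of length(p) + log2(time(p)) over programs printing x, which is the form of the Kt cost referred to above.

```python
import math

# Hypothetical programs: (length in bits, runtime in steps, output).
# These triples are invented purely for illustration.
programs = [
    (3, 8,   "00000000"),   # short and fast, e.g. "print zeros"
    (5, 16,  "00000001"),
    (8, 256, "00101101"),   # longer and slower: closer to random
]

def kt(output):
    """Kt cost of a string: min over programs of length + log2(runtime)."""
    costs = [l + math.log2(t) for l, t, o in programs if o == output]
    return min(costs) if costs else math.inf

def predict(x, futures):
    """Pick the future y minimizing Kt(x + y)."""
    return min(futures, key=lambda y: kt(x + y))

print(predict("0000", ["0000", "0001"]))  # prints "0000"
```

The regular continuation wins because its concatenation with the past is produced by a shorter, faster program, hence has lower Kt cost.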
> 
> To address Bruno Marchal's questions: Many of my duplications in parallel
> universes with identical past x will experience different futures y.
> None of my copies knows a priori which one it is. But each does know:
> Those futures y with high Kt(x,y) will be highly unlikely; those with
> low Kt(x,y) won't.
> 
> Under the Speed Prior it is therefore extremely unlikely that I will
> suddenly observe truly random, inexplicable, surprising events, simply
> because true randomness is very hard to compute, and thus has very high
> Kt complexity.
> 
> Juergen Schmidhuber
> 
> http://www.idsia.ch/~juergen/
> http://www.idsia.ch/~juergen/everything/html.html
> http://www.idsia.ch/~juergen/toesv2/
> 
> 



----------------------------------------------------------------------------
Dr. Russell Standish                     Director
High Performance Computing Support Unit, Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052                         Fax   9385 6965, 0425 253119 (")
Australia                                [EMAIL PROTECTED]             
Room 2075, Red Centre                    http://parallel.hpc.unsw.edu.au/rks
            International prefix  +612, Interstate prefix 02
----------------------------------------------------------------------------
