I had trouble getting this posted, so I am resending this.
Fred Chen wrote:
> Juergen Schmidhuber wrote:
> >Continuations corresponding to longer algorithms also get computed, of
> >course. But they are less probable indeed. According to the universal
> >prior the probability of an algorithm is the probability of successively
> >guessing each of its bits. The longer the algorithm, the smaller its probability.
> This insight adds some naturalness. Maybe with a universal prior, we can get
> by with the original uniform bitstring distribution (f = constant k, i.e., each
> bitstring gets represented k times) used by Dr. Standish in his Occam's Razor
> paper and by all the participants in the white rabbit discussion. There would then
> be no need for an unnatural non-uniform distribution serving the same purpose
> as a universal prior. I would still ask, out of curiosity, what is
> > >So what about the continuations corresponding to the longer algorithms?
> > >Those worlds still exist, don't they? If so, then for every shorter
> > >algorithm, there are continuations of longer algorithms, which were
> > >identical up to that point, but which now represent worlds which don't
> > >follow the laws of QM, but in which people nevertheless still live. You
> > >can say that the universal prior determines that I will probably follow a
> > >short algorithm, but what can you possibly say about all those people in all
> > >those worlds who didn't follow the shortest algorithm? Unless there are
> > >fewer worlds like theirs than like ours, I just can't see how you can dismiss
> > >their worlds as less probable.
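One way to quantify "fewer worlds like theirs" under the 2^-l weighting: each
individual length-(n+k) continuation of a length-n program carries 2^-k times
the short program's weight, and even all 2^k such continuations taken together
carry no more total weight than the short program alone. This is toy
arithmetic with arbitrary lengths, not a claim about the specific machine
model in the discussion:

```python
# Toy count: a short program of length n versus its length-(n+k)
# continuations under the 2**-l weighting.

n, k = 10, 10

short_weight = 2.0 ** -n          # weight of the one length-10 program
one_long = 2.0 ** -(n + k)        # weight of a single length-20 continuation
all_long = (2 ** k) * one_long    # total over all 2**10 possible continuations

print(one_long / short_weight)    # 2**-10: each long world is 1024x less probable
print(all_long == short_weight)   # True: collectively they weigh no more
```

So the longer worlds exist in the ensemble, but each one is individually far
less probable, which is the sense in which they can be "dismissed".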
> I suppose that as long as we exist, it doesn't matter how probable we are, but
> it is still interesting to speculate about it. With the self-sampling
> assumption (from the Occam's Razor paper), we assume we are fairly probable.