There is another argument (also mentioned by Hal on this list some time ago)
that suggests that the measure must decay faster than 2^(-program length).
This argument involves the anthropic factor. The measure for an observer to
find himself in a universe is the product of an "intrinsic" measure of the
universe times the number of times that he will be generated in that
universe. This last anthropic factor cannot be omitted, because omitting it
would lead to the Doomsday Paradox. The anthropic factor is infinite, and
must be "renormalized" to a well-defined measure.
Suppose that the measure is a function of program length only. For some
fixed universe containing observers, generated by a program p, consider the
set of programs p_n that each generate p n times. The measure of observers
in p_n will thus be n times larger than in p. The measure of p_n must
therefore decay faster than 1/n in order to end up with normalizable
probability distributions for the observers. The program length of p_n is
the program length of p plus Log_2(n), so the measure must decay faster
than 2^(-program length).
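This is easy to check numerically. The sketch below is my own illustration
(not part of the original argument): it sums the observer measure
n * mu(len(p_n)) over the programs p_n, with a toy measure mu(l) = 2^(-c*l)
and len(p_n) = len(p) + log2(n). With c = 1, i.e. exactly 2^(-program
length), every term equals 2^(-len(p)) and the sum grows without bound;
with a sufficiently faster decay such as c = 3 the partial sums converge.

```python
import math

def observer_measure_partial_sum(c, base_len=20, n_max=1000):
    """Sum over n = 1..n_max of n * mu(len(p_n)), where
    mu(l) = 2**(-c * l) and len(p_n) = base_len + log2(n).
    The factor n is the anthropic factor: p_n generates the
    observer n times.  (Illustrative toy model only.)"""
    total = 0.0
    for n in range(1, n_max + 1):
        length = base_len + math.log2(n)
        total += n * 2.0 ** (-c * length)
    return total

# c = 1: every term equals 2**(-base_len), so the partial sum grows
# linearly in n_max -- the observer measure is not normalizable.
# c = 3: terms fall off like 1/n**2, so the partial sums converge.
```

Doubling n_max roughly doubles the c = 1 sum but barely moves the c = 3
sum, which is exactly the divergence/convergence the argument turns on.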
----- Original Message -----
From: ""Hal Finney"" <[EMAIL PROTECTED]>
Sent: Tuesday, February 01, 2005 7:19 PM
Subject: RE: Belief Statements
> Bruno writes:
> > I am not sure that I understand what you do with that measure on
> > I prefer to look at infinite coin generations (that is infinitely
> > reiterated self-duplications)
> > and put measure on infinite sets of alternatives. Those infinite sets of
> > relative alternative *are* themselves generated by simple programs
> > (the UD).
> Here is how I approach it, based on Schmidhuber. Suppose we pick a model
> of computation based on a particular Universal Turing Machine (UTM).
> Imagine this model being given all possible input tapes. There are an
> uncountably infinite number of such tapes, but on any given tape only
> a finite length will actually be used (i.e. the probability of using
> an infinite number of bits of the tape is zero). This means that any
> program which runs is only a finite size, yet occurs on an infinite
> number of tapes. The fraction of the tapes which holds a given program
> is proportional to 1 over 2^(program length), if they are binary tapes.
> This is considered the measure of the given program.
> An equivalent way to think of it is to imagine the UTM being fed with
> a tape created by coin flips. Now the probability that it will run a
> given program is its measure, and again it will be proportional to 1
> over 2^(program length). I don't know whether this is what you mean
> by "infinite coin generations" but it sounds similar.
> I believe you can get the same concept of measure by using the Universal
> Dovetailer (UD) but I don't see it as necessary or particularly helpful
> to invoke this step. To me it seems simpler just to imagine all possible
> programs being run, without having to also imagine the operating system
> which runs them all on a time-sharing computer, which is what the UD
> amounts to.
> > Now we cannot know in which computational history we belong,
> > or more exactly we "belong" to an infinity of computational histories
> > (indistinguishable up to now). (It could be all the repetition of your
> > simple program)
> And by "repetition of your simple program" I think you mean the fact
> that there are an infinite number of tapes which have the same prefix
> (the same starting bits) and which all therefore run the same program,
> if it fits in that prefix. This is the basic reason why shorter programs
> have greater measure than longer ones, because there are a larger fraction
> of the tapes which have a given short prefix than a long one.
> It's also possible, as you imply, that your consciousness is instantiated
> in multiple completely different programs. For example, we live in a
> program which pretty straightforwardly implements the universe we see;
> but we also live in a program which implements a very different universe,
> in which aliens exist who run artificial life experiments, and we are
> one of those experiments. We also live in programs which just happen to
> simulate moments of our consciousness, purely through random chance.
> However, my guess is that the great majority of our measure will lie
> in just one program. I suspect that that program will be quite simple,
> and that all the other programs (such as the one with the aliens running
> alife experiments) will be considerably more complex. The simplest case
> is just what we see, and that is where most of our measure comes from.
> > But to make an infinitely correct prediction we should average on all
> > computational histories going through our states.
> Yes, I agree, although as I say my guess is that we will be "close enough"
> just by taking things as we see them, and in fact it may well be that
> the corrections from considering "bizarre" computational histories will
> be so tiny as to be unmeasurable in practice.
> > Your measure could explain why simple and short subroutines persist
> > everywhere, but what we must do is to extract the "actual measure", the
> > one apparently given by QM, from an internal measure on all relatively
> > consistent continuations of our (unknown!) probable computational state.
> > This is independent of the fact that some short programs could play the
> > role of initiator of something persisting. Perhaps a quantum dovetailer?
> > If we proceed by taking comp seriously, this too should be justified
> > from within.
> > Searching a measure on the computational histories instead of the
> > programs can not only be justified by thought experiments, but can
> > be defined neatly mathematically. Also a "modern" way of talking about
> > the Many Worlds is in terms of relative consistent histories.
> > But the histories emerge from within. This too must be taken into account.
> > It can change the logic. (And actually changes it according to the
> > machine).
> I'm losing you here.
> Hal Finney