Wei writes:
> I think you probably misunderstood what you read. What's true is that
> universal priors based on different Turing machines are close to each
> other, up to a multiplicative factor that depends on the pair of universal
> priors being compared. But this multiplicative factor is not necessarily
> insignificant in terms of practical consequences, and there are other
> candidate measures proposed by Juergen that seem about as attractive as
> universal priors (i.e. the less dominant speed prior, and the more
> dominant one based on GTMs which I'm not sure has a name).

Yes, I went back and checked (a paper by Li and Vitanyi and someone else) and you are right: the priors are only the same up to a constant factor. I agree that this is too weak a sense in which the priors are "close" to each other to be useful. Also, these kinds of comparisons are only really meaningful asymptotically, as we go to infinite-length strings. For finite strings a constant factor says essentially nothing about how close two distributions are to each other.

I'm not convinced about the models of computation involving GTMs and such in Juergen Schmidhuber's paper. Basically these kinds of TMs can change their mind about the output, and the machine doesn't know when it is through changing its mind. So there is never any time you can point to the output, or even a prefix of it, and say that part is done. It is questionable to me whether this ought to count as computation. I hope to write some more about his paper tomorrow.

> One thing you do give up when you abandon the concept of an objective
> measure is an explanation of why you are who(/when/where) you are. But I
> can't shake the feeling that perhaps the question doesn't really make
> sense, because how could you not be who you are? However, if you don't
> want to give this up, you can keep the notion of an objective measure just
> for the purpose of having this explanation, but adopt an arbitrary
> subjective measure for making decisions. In other words, even if you think
> there is an objective measure, you don't have to care about each universe
> in proportion to its measure.

I wonder if a better term than "objective measure" is "probability". That carries the connotation that it represents the likelihood that something happens. Then you could have an objective probability which told how likely each universe was (its chance of being selected at random from the multiverse), and a subjective measure that told how much you cared about each universe.
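To make the earlier point about constant factors concrete, here is a small sketch (purely illustrative; real universal priors are uncomputable, and the numbers below are invented). If two distributions p and q mutually dominate with constant c, knowing q(x) only confines p(x) to an interval, and when log2(c) is comparable to the string length that interval spans essentially all probabilities:

```python
# Illustrative sketch only -- actual universal priors are uncomputable.
# Mutual domination with constant c means p(x) <= c*q(x) and q(x) <= c*p(x),
# so knowing q(x) only pins p(x) down to an interval.

def allowed_range(qx, c):
    """Interval that p(x) may occupy under mutual c-domination, given q(x)."""
    return (qx / c, min(1.0, qx * c))

# A constant of 2**20 relating two reference machines is perfectly
# consistent with the priors being "equal up to a constant factor".
c = 2 ** 20

# For a 10-bit string with q(x) = 2**-10, all we learn about p(x) is:
lo, hi = allowed_range(2 ** -10, c)
print(lo, hi)  # from 2**-30 up to 1.0 -- the bound says essentially nothing
```

So for strings whose length is small relative to log2(c), the "closeness" guarantee places almost no constraint at all.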
> Now suppose you could manipulate the beliefs of your future selves and not
> have them remember the manipulations (e.g. like the protagonist in the
> movie Memento). What would you want your future selves to accept as the
> objective measure? Well it would be the same as your subjective measure,
> because otherwise you'd be in a situation where the future selves that you
> care a lot about don't have a good explanation for why they are where they
> are, while some of the ones you don't care much about do have a good
> explanation. And what's the downside to your future selves believing in
> the "incorrect" objective measure? There doesn't seem to be one.

Maybe you should multiply the subjective measure by the objective measure. Then the subjective measure can play the role of utility in game theory, and the objective measure is, as suggested above, the probability. This brings us back onto familiar territory: rather than dealing with an abstract "measure" which is just a number associated with a universe, we have probability and value, the traditional basis for making decisions.

Hal
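The multiplication proposed above is ordinary expected-utility maximization: weight each universe by objective probability times subjective value, and pick the action with the largest total. A minimal sketch, where the universes, probabilities ("objective measure"), and values ("subjective measure") are all invented for illustration:

```python
# Objective measure as probability, subjective measure as utility:
# score each action by sum over universes of prob * value, then pick
# the best.  All numbers here are invented for illustration.

prob = {"universe_A": 0.7, "universe_B": 0.3}   # objective measure
value = {                                       # subjective measure
    ("universe_A", "act1"): 10, ("universe_A", "act2"): 4,
    ("universe_B", "act1"): -5, ("universe_B", "act2"): 8,
}

def expected_utility(action):
    return sum(prob[u] * value[(u, action)] for u in prob)

best = max(["act1", "act2"], key=expected_utility)
print(best, expected_utility(best))  # act1 5.5
```

Note that rescaling the objective measure by a constant never changes which action wins, which is one way the probability/value split keeps the decision theory on familiar ground.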