On Tue, Jun 18, 2002 at 12:16:59AM -0700, Hal Finney wrote:

> Let me start with this.  If U(E1) > U(E2), then would a rational person
> have to pick E1 over E2?  What if he were someone who were contrary?
> Or someone who preferred lesser utility?  I think we can rule these
> cases out by properly defining utility.  With the proper definition it
> will always be the case that if U(E1) > U(E2), he picks E1.
Yes, this is part of the definition of utility.

> Now consider a single-universe model.  He can choose one of two
> alternatives.  In one alternative he is guaranteed to get E1, and in the
> other alternative he has a 50-50 chance of getting E1 or E2.  Is there
> a rational way to prefer the second alternative?  That is, can it be
> better to have a chance of getting E2 rather than the certainty of E1?
> I would like to rule this out for rational choosers, but I'm not 100%
> sure.  Some people seek risk, although a risk which has only a down side
> still seems irrational.

I agree with you here, because if he preferred the second alternative, he
could not really prefer E1 to E2.  If faced with a choice between E1 and
E2 he would do better to throw a mental coin and decide between them
randomly.

> There is an argument that there should be no differences, because the
> information available in any sub-part of the multiverse is the same as in
> the single universe case.  In fact maybe we can never tell which theory
> is correct, therefore the differences are entirely hypothetical.  If
> we accept this then what is irrational in the single universe case is
> also irrational in the MWI.

I disagree with you here.  Although we have no direct sensory information
about what happens in other branches of the multiverse, theory gives us
information about what happens in them, and that can be sufficient to
change what we value in this branch.

> Maybe you could expand on your argument about how diminishing utility
> relates to evolutionary advantage across copies; I'm not sure what you
> are getting at there.  I see the reason higher quantities have less
> marginal value to you as because of how they interact with each other
> and with you; putting them all into separate universes would eliminate
> the effects which I see as causing diminishing marginal value.
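A quick numeric sketch of the coin-toss point above (the utility values
here are hypothetical, chosen only to illustrate the inequality):

```python
# Expected-utility comparison of the two alternatives discussed above.
# U is a hypothetical utility assignment with U(E1) > U(E2).
U = {"E1": 10.0, "E2": 4.0}

certain = U["E1"]                          # alternative 1: E1 for sure
lottery = 0.5 * U["E1"] + 0.5 * U["E2"]    # alternative 2: 50-50 E1 or E2

# Whenever U(E2) < U(E1), the lottery's expected utility is strictly
# lower, so an expected-utility maximizer never prefers alternative 2;
# preferring it would mean he does not really rank E1 above E2.
assert lottery < certain
print(certain, lottery)  # 10.0 7.0
```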
You're right, the evolutionary advantage argument only applies to copies
within a universe, not across universes (or non-interacting branches).
The idea is that if you are content to have experiences similar to those
of your copies, then the collection of your copies as a whole will
contain less information (i.e. knowledge and skills) than the copies of
someone who wants to have experiences different from his copies.  So if
your copies were to compete with his copies, you would be at a
disadvantage.

P.S.  I retract my claim that the self-sampling assumption is incorrect.
I think I was just using it incorrectly.  More on this in another post.
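The information point about copies can be made concrete with a toy
sketch (the skill sets below are hypothetical, invented only for
illustration):

```python
# Toy model: copies content to share experiences end up redundant, while
# copies that seek different experiences accumulate more distinct
# knowledge and skills as a collection.
same_minded = [{"calculus", "chess"}] * 3             # three near-identical copies
diverse = [{"calculus"}, {"chess"}, {"sailing"}]      # three diverging copies

total_same = set().union(*same_minded)      # redundant: only 2 distinct skills
total_diverse = set().union(*diverse)       # 3 distinct skills overall

# The diverse collection contains more information as a whole -- the
# claimed competitive edge when copies interact within one universe.
print(len(total_same), len(total_diverse))  # 2 3
```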