> > In (3) the universe doesn't have a high algorithmic complexity.
>
> I should have said that in (3) our decisions don't have any consequences,
> so we disregard them even if we do care what happens in them. The end
> result is the same: I'll act as if I only live in (2).

In the (3) I gave, you're indexed so that the thermal fluctuation doesn't dissolve until November 1, so your actions still have consequences.

> I will throw a fair coin. If the coin lands heads up, you will be
> instantaneously vaporized. If it lands tails up, I will exactly double
> your measure (say by creating a copy of your brain and continuously
> keeping it synchronized).

This is one of a larger class of problems, related to volition and the coupling of my qualia to an external reality, that I don't currently have an answer for. I want to live on in the current Universe; I don't want to die and have a duplicate of myself created in a different Universe. I want to eat a real ice cream cone; I don't want you to stimulate my neurons to make me imagine I'm eating an ice cream cone. I would argue that a world where I can interact with real people is, in some sense, better than a world where I interact with imaginary people who I believe are real.

> Well, let's consider an agent who happens to have preferences of a special
> form. It so happens that for him, the multiverse can be divided into several
> "regions", the descriptions of which will be denoted S_1, S_2, S_3, etc.,
> such that S_1 U S_2 U S_3 ... = S, and his preferences over the whole
> multiverse can be expressed as a linear combination of his preferences over
> those "regions". That means there exist functions P(.) and U(.) such that
> he prefers the multiverse S to the multiverse T if and only if
>
> P(S_1)*U(S_1) + P(S_2)*U(S_2) + P(S_3)*U(S_3) + ... >
> P(T_1)*U(T_1) + P(T_2)*U(T_2) + P(T_3)*U(T_3) + ...
>
> I haven't worked out all of the details of this formalism, but I hope you
> can see where I'm going with this...

You have a general model, which can encompass classical decision theory, but can also encompass other models as well. It's not immediately clear to me what benefit, if any, we get from such a general model.
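For concreteness, the quoted comparison can be sketched as a weighted-sum computation. This is a minimal illustration, assuming each multiverse comes pre-partitioned into finitely many regions and that P (a weight or measure) and U (a utility) are given as lookup tables; all names and numbers below are made up for the example, not taken from the original post.

```python
def expected_utility(weights, utils):
    """Compute the linear form sum_i P(S_i) * U(S_i) over named regions."""
    return sum(weights[region] * utils[region] for region in weights)

# Two toy multiverses S and T, each partitioned into two regions.
P_S = {"S1": 0.5, "S2": 0.5}
U_S = {"S1": 10.0, "S2": 2.0}
P_T = {"T1": 0.5, "T2": 0.5}
U_T = {"T1": 4.0, "T2": 4.0}

# The agent prefers S to T iff S's weighted sum exceeds T's.
prefers_S = expected_utility(P_S, U_S) > expected_utility(P_T, U_T)
# 0.5*10 + 0.5*2 = 6.0 versus 0.5*4 + 0.5*4 = 4.0, so S is preferred.
```

Note that classical expected-utility theory is the special case where the regions are possible worlds and P is a probability distribution, which is why the quoted formalism strictly generalizes it.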
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en
-~----------~----~----~----~------~----~------~--~---