On Wed, Mar 21, 2001 at 09:39:10PM -0500, Jacques Mallah wrote:
> First, it's nice to see that you accept my resolution of the "paradox".
> But I have a hard time believing that your point was, in fact, the
> above. You brought forth an attack on anthropic reasoning, calling it
> paradoxical, and I parried it. Now you claim that you were only pointing
> out that anthropic reasoning is just an innocent bystander? Of course it's
> just a friendly training exercise, but you do seem to be pulling a switch
Sorry, I took the opportunity to address a new point, and neglected to
answer your parry directly. So let's go back to that.
> He thinks he is only 1/101 likely to be in round 1. However, he also
> knows that if he _is_ in round 1, the effect of his actions will be
> magnified 100-fold. Thus he will push button 2.
> You might see this better by thinking of measure as the # of copies of
> him in operation.
> If he is in round 1, there is 1 copy operating. The decision that copy
> makes will affect the fate of all 100 copies of him.
> If he is in round 2, all 100 copies are running. Thus any one copy of
> him will effectively only decide its own fate and not that of its 99
You'll have to define what "effectively decide" means and how to apply
that concept generally. (Have you introduced it before? I think this is
the first time I've seen it.) Suppose in round 2 he gets the $-9 payoff if
any of the copies decides to push button 1. Intuitively, each copy affects
the fate of every other copy. How do you reach the conclusion that each
copy effectively affects only its own fate? Could you formalize the
argument for me please?
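As a side note, the measure-weighted bookkeeping in the quoted argument can be sketched numerically. This is only an illustration of the arithmetic as I read it, not the full game (which isn't restated in this message): with 1 copy running in round 1 and 100 in round 2, the anthropic probability of being in round 1 is 1/101, but the round-1 decision is magnified 100-fold, and the two factors exactly cancel.

```python
# Sketch of the measure-weighted decision bookkeeping described above.
# The copy counts come from the quoted argument; the "effectively
# decides its own fate" rule for round 2 is Jacques's claim, taken
# at face value here.

def decision_weights(copies_round1=1, copies_round2=100):
    total = copies_round1 + copies_round2      # 101 observer-moments
    p_round1 = copies_round1 / total           # 1/101
    p_round2 = copies_round2 / total           # 100/101
    # In round 1, the single copy's choice fixes the fate of all 100 copies:
    weight_round1 = p_round1 * copies_round2   # (1/101) * 100
    # In round 2, each copy (on this account) decides only its own fate:
    weight_round2 = p_round2 * 1               # (100/101) * 1
    return weight_round1, weight_round2

w1, w2 = decision_weights()
print(w1, w2)  # both equal 100/101: the anthropic discount on being
               # in round 1 is cancelled by the 100-fold magnification
```

If this bookkeeping is right, it explains why the agent still pushes button 2: the round-1 branch carries as much decision weight as the round-2 branch despite its 1/101 probability.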
> >I suggest that he instead think
> >"I'm in both round 1 and round 2, and I should give equal
> >consideration to the effects of my decision in both rounds."
> I assume you mean he should think "I am, or was, in round 1, and I am,
> or will be, in round 2". There is no need for him to think that, and it's
> not true. Only one of the "brothers" was in round 1.
No, I meant he should think himself as being in both round 1 and round 2
"simultaneously", and making both decisions at once.
> First, anyone whose utility function does not depend on measure is
> definitely insane in my book.
> One good utility function could have the form
> U = [sum_(thoughts) f(thought) M(thought)] + V
> where f is some function, M is the (unnormalized!!) measure, and V is to
> take into account other stuff he may care about. V = 0 is perhaps the
> wisest choice. This does not take indexical info into account, so you will
> probably not like it.
I think anyone whose utility function does have that form is insane. He's
going to spend most of his resources running repeated simulations of
himself having the thought that he values most.
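To make the objection concrete: since U is linear in the (unnormalized) measures M(thought), and measure presumably scales with how many simulations of a thought you run, a fixed resource budget is best spent entirely on the single thought with the largest f. The thoughts and f-values below are made up purely for illustration.

```python
# Toy illustration of the objection to U = sum_t f(t) * M(t) + V.
# Assume (hypothetically) that M(t) is proportional to the compute
# spent simulating thought t, and that the total budget is fixed.

def best_allocation(f_values, budget):
    # U is linear in the allocation, so the optimum puts the entire
    # budget on the thought with the largest f -- i.e. the agent
    # spends everything re-running its most-valued thought.
    best = max(f_values, key=f_values.get)
    return {t: (budget if t == best else 0) for t in f_values}

f = {"favorite_thought": 5.0, "ordinary_life": 1.0, "chores": 0.1}
alloc = best_allocation(f, budget=1000)
utility = sum(f[t] * alloc[t] for t in f)
print(alloc)    # all 1000 units go to "favorite_thought"
print(utility)  # 5000.0
```

The linearity is doing all the work here: any interior allocation is dominated by shifting resources toward the highest-f thought, which is exactly the "repeated simulations of the thought he values most" behavior.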