I'm sorry for this greatly delayed response. I shouldn't have sent off the
original message right before I hopped onto a plane and moved to Boston. If
anyone can't remember what this thread was about, please see the quoted
message below.
On Thu, Mar 01, 2001 at 03:14:03AM -0500, Jacques Mallah wrote:
> True, they could be more "selfish". Effectively they are playing a
> prisoner's dilemma type of game where first Bob is given a move, then Alice
> is. In this case they might both push 1, but only if they don't expect to
> interact in the future, and don't care about each other. (And also don't
> expect to gain a bad reputation.)
> Still no paradox; this case is an example of game theory. Consider the
> case where they always have the same measure and never lose memory. They
> have the choice of 1) hurt yourself a little, or 2) hurt yourself a lot but
> help the other person by an equal amount. They might both choose 1, but
> that's no paradox.
This experiment is not a "game" in the game-theoretic sense, since each
participant's action affects only his or her own payoff, not the other
player's. In fact you can run it with just one participant, which may make
the paradoxical nature of anthropic reasoning clearer.
Suppose the new experiment has two rounds. In each round the participant
will be given temporary amnesia so he can't tell which round he is in. In
round one he will have low measure (1/100 of normal). In round two he will
have normal measure. He is also told:
If you push button 1, you will lose $9.
If you push button 2 and you are in round 1, you will win $10.
If you push button 2 and you are in round 2, you will lose $10.
According to anthropic reasoning, the participant, when faced with the
choice, should think he is much more likely to be in round 2 (with
probability 100/101, given the measures above), and therefore push button 1
in both rounds: button 2 then has an expected loss of about $9.80, versus
$9 for button 1. Yet obviously he would have been better off pushing
button 2 in both rounds, winning $10 and losing $10 for a net of $0,
instead of losing $18.
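To make the arithmetic explicit, here is a quick sanity check in Python. It
assumes only the numbers given above: measure 1/100 in round 1, normal
measure in round 2, and the stated dollar payoffs.

```python
# Measures in the two rounds, as described in the thought experiment.
m1, m2 = 1 / 100, 1.0

# Anthropic probabilities of being in each round, proportional to measure.
p1 = m1 / (m1 + m2)   # probability of round 1, about 1/101
p2 = m2 / (m1 + m2)   # probability of round 2, about 100/101

# Expected payoff of each button under anthropic reasoning:
ev_button1 = -9.0                      # lose $9 in either round
ev_button2 = p1 * 10 + p2 * (-10)      # win $10 in round 1, lose $10 in round 2

# Total payoff over both rounds if the same button is pushed each time:
total_button1 = -9 + -9                # -$18
total_button2 = 10 + -10               # $0

print(ev_button1, ev_button2)          # button 1 looks better per decision
print(total_button1, total_button2)    # but button 2 is better overall
```

The expected loss for button 2 comes out to 990/101, about $9.80, so the
anthropic reasoner prefers button 1 at each decision, even though always
pushing button 2 nets $0 instead of -$18.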