RE: Observation selection effects

2004-10-14 Thread Brent Meeker


>-Original Message-
>From: Stathis Papaioannou [mailto:[EMAIL PROTECTED]
>Sent: Thursday, October 14, 2004 7:36 AM
>To: [EMAIL PROTECTED]; [EMAIL PROTECTED];
>[EMAIL PROTECTED]
>Subject: RE: Observation selection effects
>
>
>
>Brent Meeker and Jesse Mazer and others wrote:
>
>Well, lots and lots of complex mathematical argument on
>the two envelope
>problem...
>
>But no-one has yet pointed out a flaw in my rather
>simplistic analysis:
>
>(1) One envelope contains x currency units, so the other
>contains 2x
>currency units;
>
>(2) If you stop at the first envelope you choose,
>expected gain is: 0.5*x +
>0.5*2x = 1.5x;
>
>(3) If you open the first envelope then switch to the
>second, your expected
>gain is: 0.5*2x + 0.5*x = 1.5x - as above, just in a
>different order,
>obviously;
>
>(4) If, in a variation, the millionaire flips a coin to
>give you double or
>half the amount in the first envelope if you switch
>envelopes, expected gain
>is: 0.25*2x + 0.25*0.5x + 0.25*x + 0.25*4x = 1.875x.
>
>In the latter situation you are obviously better off
>switching, but it is a
>mistake to assume that (4) applies in the original
>problem, (3) - hence, no
>paradox.
>
>Is the above wrong, or is it just so obvious that it
>isn't worth discussing?
>(I'm willing to accept either answer).
>
>Stathis Papaioannou

It's not wrong - I just don't think it addresses the paradox.  To
resolve the paradox you must explain why it is wrong to reason:

I've opened one envelope and I see amount m.  If I keep it my gain
is m.  If I switch my expected gain is 0.5*m/2 + 0.5*2m = 1.25m,
therefore I should switch.

To say that this reasoning is correct in another, similar game (4)
doesn't explain why it is wrong in the given case.

Your (2) and (3) aren't to the point because they don't recognize
that after opening one envelope you have some information that
seems to change the expected value.

In my analysis, it is apparent that the trick of showing that the
expected value doesn't change depends on a feature of the problem
statement: the distribution of the amount of money is scale-free,
i.e. all amounts are equally likely.  If you accept this, then a
Bayesian analysis of your rational belief shows that the expected
value doesn't change when you open the envelope and see amount m.
Intuitively, observing a value from a distribution that is flat
from zero to infinity *doesn't* give you any information.  Solving
the paradox means showing explicitly why this is so.

As Jesse and others have pointed out, this scale-free (all amounts
are equally likely) aspect of the problem as stated is unrealistic,
and in any real situation your prior estimate of the scale of the
amounts will cause you to modify your expected value after you see
the amount in the first envelope.  This modification may prompt you
to switch or not - but it's a different problem.

Brent Meeker
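
A numerical sketch of this point (the exponential prior here is an
assumption chosen for illustration, not part of the problem statement):
with a proper, non-scale-free prior on the smaller amount, a Bayesian
update on the observed amount m does shift the expected value of switching.

import math

def expected_if_switch(m):
    # Assumed prior: the smaller amount s has density exp(-s) on (0, inf),
    # and the other envelope holds 2s, whose density at m is 0.5*exp(-m/2).
    w_small = 0.5 * math.exp(-m)            # case: we opened the smaller envelope
    w_large = 0.5 * 0.5 * math.exp(-m / 2)  # case: we opened the larger envelope
    p_small = w_small / (w_small + w_large)
    return p_small * 2 * m + (1 - p_small) * m / 2

for m in (0.5, 1.0, 2.0, 4.0):
    print(m, round(expected_if_switch(m), 3))
# Switching beats keeping m only while m < 2*ln(4) =~ 2.77: the posterior,
# not a naive 50/50 split, sets the expectation.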



RE: Observation selection effects

2004-10-14 Thread Stathis Papaioannou
Brent Meeker and Jesse Mazer and others wrote:
Well, lots and lots of complex mathematical argument on the two envelope 
problem...

But no-one has yet pointed out a flaw in my rather simplistic analysis:
(1) One envelope contains x currency units, so the other contains 2x 
currency units;

(2) If you stop at the first envelope you choose, expected gain is: 0.5*x + 
0.5*2x = 1.5x;

(3) If you open the first envelope then switch to the second, your expected 
gain is: 0.5*2x + 0.5*x = 1.5x - as above, just in a different order, 
obviously;

(4) If, in a variation, the millionaire flips a coin to give you double or 
half the amount in the first envelope if you switch envelopes, expected gain 
is: 0.25*2x + 0.25*0.5x + 0.25*x + 0.25*4x = 1.875x.

In the latter situation you are obviously better off switching, but it is a 
mistake to assume that (4) applies in the original problem, (3) - hence, no 
paradox.

Is the above wrong, or is it just so obvious that it isn't worth discussing? 
(I'm willing to accept either answer).

Stathis Papaioannou
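
A minimal Monte Carlo sketch of steps (2)-(4) above (the base amount
x = 100 and the trial count are arbitrary choices):

import random

def one_game(x=100):
    # The envelopes hold x and 2x in random order; return (first, other).
    first, other = random.sample([x, 2 * x], 2)
    return first, other

N = 200_000
keep = swap = coin = 0.0
for _ in range(N):
    first, other = one_game()
    keep += first   # step (2): stop at the first envelope
    swap += other   # step (3): always switch
    # step (4): a coin flip doubles or halves the first amount if you switch
    coin += first * (2 if random.random() < 0.5 else 0.5)

print(keep / N, swap / N, coin / N)  # ~150, ~150, ~187.5, i.e. 1.5x, 1.5x, 1.875x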



RE: Observation selection effects

2004-10-13 Thread Brent Meeker


>-Original Message-
>From: Jesse Mazer [mailto:[EMAIL PROTECTED]
>Sent: Tuesday, October 05, 2004 11:01 PM
>To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
>Subject: RE: Observation selection effects
>
>
>>>-Original Message-
>>>From: Jesse Mazer [mailto:[EMAIL PROTECTED]
>>>Sent: Tuesday, October 05, 2004 8:45 PM
>>>To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
>>>Subject: RE: Observation selection effects
>>>
>>>If the range of the smaller amount is infinite,
>>>as in my P(x)=1/e^x
>>>example, then it would no longer make sense to say that
>>>the range of the
>>>larger amount is r times larger.
>>
>>Sure it does; r*inf=inf.  P(s)=exp(-x) -> P(l)=exp(-x/r)
>
>But it would make just as much sense to say that the
>second range is 3r
>times wider, since by the same logic 3r*inf=inf. In
>other words, this step
>in your proof doesn't make sense:
>
>>In other words, the range of possible
>>amounts is such that the larger and smaller amount do
>not overlap.
>>Then, for any interval of the range (x,x+dx) for the smaller
>>amount with probability p, there is a corresponding
>interval (r*x,
>>r*x+r*dx) with probability p for the larger amount.  Since the
>>latter interval is longer by a factor of r
>>
>> P(l|m)/P(s|m) = r ,
>>
>>In other words, no matter what m is, it is r-times more
>likely to
>>fall in a large-amount interval than in a small-amount interval.
>
>As for your statement that "P(s)=exp(-x) ->
>P(l)=exp(-x/r)", that can't be
>true. It doesn't make sense that the value of the second
>probability
>distribution at x would be exp(-x/r), since the range of
>possible values for
>the amount in that envelope is 0 to infinity, but the
>integral of exp(-x/r)
>from 0 to infinity is not equal to 1, so that's not a
>valid probability
>distribution.
>
>Also, now that I think more about it I'm not even sure
>the step in your
>proof I quoted above actually makes sense even in the
>case of a probability
>distribution with finite range. What exactly does the equation
>"P(l|m)/P(s|m) = r" mean, anyway?

For any give amount of money, m, found in the first envelope, it
is more probable by a factor of r that it came from the Larger
envelope - where "probable" means degree of rational belief, not
fraction in a statistical ensemble.

>It can't mean that if
>I choose an envelope
>at random, before I even open it I can say that the
>amount m inside is r
>times more likely to have been picked from the larger
>distribution, since I
>know there is a 50% chance I will pick the envelope
>whose amount was picked
>from the larger distribution. Is it supposed to mean
>that if we let the
>number of trials go to infinity and then look at the
>subset of trials where
>the envelope I opened contained m dollars, it is r times
>more likely that
>the envelope was picked from the larger distribution on
>any given trial?
>This can't be true for every specific m--for example, if
>the smaller
>distribution had a range of 0 to 100 and the larger had
>a range of 0 to 200,

But the whole point is that there is no "specific m" from which you
can reason.

>if I set m=150, then in every single trial where I found
>150 dollars in the
>envelope it must have been selected from the larger
>distribution. You could
>do a weighted average over all possible values of m,
>like "integral over all
>possible values of m of P('I found m dollars in the envelope I
>selected')*P('the envelope I selected had an amount
>taken from the smaller
>distribution' | 'I found m dollars in the envelope I
>selected'), which you
>could write as "integral over m of P(m)*P(s|m)", but I
>don't think it would
>be true that the ratio "integral over m of
>P(m)*P(l|m)"/"integral over m of
>P(m)*P(s|m)" would be equal to r, in fact I think both
>integrals would
>always come out to 1/2 so the ratio would always be
>1...and even if I'm
>wrong, replacing P(l|m)/P(s|m) with this ratio of
>integrals would mess up
>the rest of your proof.
>
>Jesse

No, it doesn't depend on assuming a flat distribution for the
money, only for our knowledge (or on our acceptance of the problem
as stated).  Here's the more explicit (but less intuitive) proof - I
hope the formatting doesn't get chopped up too much by your mail
reader.

Without loss of generality, we can describe our prior density
functions for the amounts in the two envelopes in terms of a
density function, fo(x), the ratio r of the larger amount to the
smaller, a
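
A simulation sketch of the disputed quantity for one concrete finite
case (the uniform range is an assumption chosen for illustration): with
the smaller amount uniform on (0, 100) and the larger twice that, the
posterior that the opened envelope holds the smaller amount is about 2/3
for m below 100 and 0 above it - not r times more likely to be the
larger - and it averages 1/2 over all trials, as Jesse suggests.

import random
from collections import defaultdict

N = 500_000
bins = defaultdict(lambda: [0, 0])  # 20-unit bin of m -> [opened smaller, total]
for _ in range(N):
    s = random.uniform(0, 100)          # smaller amount; the other envelope has 2s
    opened_smaller = random.random() < 0.5
    m = s if opened_smaller else 2 * s
    b = int(m // 20) * 20
    bins[b][0] += opened_smaller
    bins[b][1] += 1

for b in sorted(bins):
    small, total = bins[b]
    print(f"m in [{b:3d},{b+20:3d}): P(s|m) ~ {small / total:.2f}")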

Re: observation selection effects

2004-10-11 Thread John M
Thanks, Kory, that takes care of my confusion.
The same to Jesse's post.
John Mikes
- Original Message -
From: "Kory Heath" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, October 10, 2004 7:17 PM
Subject: Re: observation selection effects


> At 02:57 PM 10/10/2004, John M wrote:
> >Then it occurred to me that you made the same
> >assumption as in my post shortly prior to yours:
> >a privilege of "ME" to switch, barring the others.
>
> I think this pinpoints one of the confusions that's muddying up this
> discussion. Under the Flip-Flop rules as they were presented, the Winning
> Flip is determined before people switch, and the Winning Flip doesn't
> change based on how people switch. In that scenario, my table is correct,
> and there is no paradox.
>
> We can also consider the variant in which the Winning Flip is determined
> after people decide whether or not to switch. But that game is functionally
> identical to the game where there is no coin-toss at all - everyone just
> freely chooses Heads or Tails, then the Winning Flip is determined and the
> winners are paid. Flipping a coin, looking at it, and then deciding whether
> or not to switch it is identical to simply picking heads or tails! The
> coin-flips only matter in the first variant, where they determine the
> Winning Flip *before* people make their choices.
>
> In this variant, it doesn't matter whether you switch or not (i.e. whether
> you choose heads or tails) - you are more likely to lose than win. We can
> use the same 3-player table we've been discussing to see that there are
> eight possible outcomes, and you only win in two of them. Once again,
> there's no paradox, although you might *feel* like there is one. You might
> reason that the Winning Flip is equally likely to be heads or tails, so no
> matter which one you pick, your odds of winning will be 50/50. What's
> missing from this logic is the recognition that no matter what you pick,
> your choice will automatically decrease the chances of that side being in
> the minority.
>
> -- Kory
>




Re: observation selection effects

2004-10-10 Thread Kory Heath
At 07:17 PM 10/10/2004, Kory Heath wrote:
We can also consider the variant in which the Winning Flip is determined 
after people decide whether or not to switch.
In a follow-up to my own post, I should point out that your winning chances 
in this game depend on how your opponents are playing. If all of your 
opponents are playing randomly, then you have a negative expectation no 
matter what you do. If your opponents are not playing randomly, then you 
may be able to exploit patterns in their play to generate a positive 
expectation.

-- Kory


re: observation selection effects

2004-10-10 Thread Stathis Papaioannou
You're right, as was discussed last week. It seems I clicked on the wrong 
thing in my email program and have re-sent an old post. My apologies for 
taking up the bandwidth!

--Stathis
From: Kory Heath <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Subject: re: observation selection effects
Date: Sat, 09 Oct 2004 18:17:50 -0400
At 10:35 AM 10/9/2004, Stathis Papaioannou wrote:
From the point of view of a typical player, it would seem that there is not: 
the Winning Flip is as likely to be heads as tails, and if he played the 
game repeatedly over time, he should expect to break even, whether he 
switches in the final step or not.
That's not correct. While it's true that the Winning Flip is as likely to 
be heads as tails, it's not true that I'm as likely to be in the winning 
group as the losing group. Look at the case when there are only three 
players. There are eight possible outcomes:

Me: H  Player 1: H  Player 2: H - WF: T
Me: H  Player 1: H  Player 2: T - WF: T
Me: H  Player 1: T  Player 2: H - WF: T
Me: H  Player 1: T  Player 2: T - WF: H
Me: T  Player 1: H  Player 2: H - WF: T
Me: T  Player 1: H  Player 2: T - WF: H
Me: T  Player 1: T  Player 2: H - WF: H
Me: T  Player 1: T  Player 2: T - WF: H
I am in the winning group in only two out of these eight cases. So my 
chances of winning if I don't switch are 1/4, and my chances of winning if 
I do switch are 3/4. There's no paradox here.

-- Kory


Re: observation selection effects

2004-10-10 Thread Kory Heath
At 04:47 PM 10/10/2004, Jesse Mazer wrote:
If I get heads, I know the only possible way for the winning flip to be 
heads would be if both the other players got tails, whereas the winning 
flip will be tails if the other two got heads *or* if one got heads and 
the other got tails.
I agree with this, but I want to add a subtle point: it's correct to switch 
*even if I haven't looked at my own coin*. That's because, despite the fact 
that I don't know whether my own coin is heads or tails, I know that, 
whichever it is, it's more likely to be in the majority than the minority.

That's not to say that *nobody* needs to look at my coin. In order to 
determine whether or not my choice to "switch" puts me in the heads or the 
tails group, *someone's* going to have to look at my coin. But the rules of 
the game allow me to pass off this act of looking to someone else - they 
essentially allow me to tell the casino worker "hey, take a look at my 
coin, will ya, and assign me to the opposite". The important point is that 
I can safely pass off this instruction to switch without even knowing the 
result of my coin-flip, because I know that, whatever my coin is, it's more 
likely to be in the majority group.

If we change the rules of the game slightly, and say that, instead of 
choosing whether or not to "switch", you have to actually choose "heads" or 
"tails", then, of course, you yourself do need to see the result of your 
own coin-flip.

-- Kory


Re: observation selection effects

2004-10-10 Thread Kory Heath
At 02:57 PM 10/10/2004, John M wrote:
Then it occurred to me that you made the same
assumption as in my post shortly prior to yours:
a privilege of "ME" to switch, barring the others.
I think this pinpoints one of the confusions that's muddying up this 
discussion. Under the Flip-Flop rules as they were presented, the Winning 
Flip is determined before people switch, and the Winning Flip doesn't 
change based on how people switch. In that scenario, my table is correct, 
and there is no paradox.

We can also consider the variant in which the Winning Flip is determined 
after people decide whether or not to switch. But that game is functionally 
identical to the game where there is no coin-toss at all - everyone just 
freely chooses Heads or Tails, then the Winning Flip is determined and the 
winners are paid. Flipping a coin, looking at it, and then deciding whether 
or not to switch it is identical to simply picking heads or tails! The 
coin-flips only matter in the first variant, where they determine the 
Winning Flip *before* people make their choices.

In this variant, it doesn't matter whether you switch or not (i.e. whether 
you choose heads or tails) - you are more likely to lose than win. We can 
use the same 3-player table we've been discussing to see that there are 
eight possible outcomes, and you only win in two of them. Once again, 
there's no paradox, although you might *feel* like there is one. You might 
reason that the Winning Flip is equally likely to be heads or tails, so no 
matter which one you pick, your odds of winning will be 50/50. What's 
missing from this logic is the recognition that no matter what you pick, 
your choice will automatically decrease the chances of that side being in 
the minority.

-- Kory


Re: observation selection effects

2004-10-10 Thread Jesse Mazer
John M wrote:
Dear Kory,
your argument pushed me off balance. I checked your table and found
it absolutely true. Then it occurred to me that you made the same
assumption as in my post shortly prior to yours:
a privilege of "ME" to switch, barring the others.
I continued your table to situations when the #2 player is switching and
then when #3 is doing it - all the way to all 3 of us did switch and found
that such extension of the case returns the so called 'probability' to the
uncalculable (especially if there are more than 3 players) like a many -
many body problem.
Cheers
John
Why would it matter if the other players switch? Based on the description of 
the game at http://tinyurl.com/4oses I thought the "winning flip" was 
determined solely by what each player's original flip was, not what their 
final bet was. In other words, if two players get heads and the other gets a 
tails, then the winning flip is automatically tails, even if the two players 
who got heads switch their bet to tails.

Assuming this is true, it's pretty easy to see why it's better to 
switch--although it makes sense to say the winning flip is equally likely to 
be heads or tails *before* anyone flips, seeing the result of your own 
coinflip gives you additional information about what the winning flip is 
likely to be. If I get heads, I know the only possible way for the winning 
flip to be heads would be if both the other players got tails, whereas the 
winning flip will be tails if the other two got heads *or* if one got heads 
and the other got tails.

Jesse



re: observation selection effects

2004-10-09 Thread Kory Heath
At 10:35 AM 10/9/2004, Stathis Papaioannou wrote:
From the point of view of a typical player, it would seem that there is 
not: the Winning Flip is as likely to be heads as tails, and if he played 
the game repeatedly over time, he should expect to break even, whether he 
switches in the final step or not.
That's not correct. While it's true that the Winning Flip is as likely to 
be heads as tails, it's not true that I'm as likely to be in the winning 
group as the losing group. Look at the case when there are only three 
players. There are eight possible outcomes:

Me: H  Player 1: H  Player 2: H - WF: T
Me: H  Player 1: H  Player 2: T - WF: T
Me: H  Player 1: T  Player 2: H - WF: T
Me: H  Player 1: T  Player 2: T - WF: H
Me: T  Player 1: H  Player 2: H - WF: T
Me: T  Player 1: H  Player 2: T - WF: H
Me: T  Player 1: T  Player 2: H - WF: H
Me: T  Player 1: T  Player 2: T - WF: H
I am in the winning group in only two out of these eight cases. So my 
chances of winning if I don't switch are 1/4, and my chances of winning if 
I do switch are 3/4. There's no paradox here.

-- Kory
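
An exhaustive enumeration sketch of this table (nothing assumed beyond the
stated rules):

from itertools import product

wins_keep = wins_switch = 0
for me, p1, p2 in product("HT", repeat=3):
    flips = (me, p1, p2)
    # The Winning Flip is the minority result; with 3 players there are no ties.
    winning = "H" if flips.count("H") < flips.count("T") else "T"
    wins_keep += (me == winning)
    wins_switch += (me != winning)

print(wins_keep / 8, wins_switch / 8)  # 0.25 if I keep my flip, 0.75 if I switch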


Re: observation selection effects

2004-10-09 Thread John M



Stathis, in this new Flip-Flop I see some slight merit beyond the symmetry
of switching from one unknown to another unknown:
If I got heads, I can THINK of the majority getting heads. (No
justification, however, but a slight idea that I am the 'average guy', not
the exceptional minority.)
In that case it is allowable to switch, to get into the less likely group,
the paying minority.
I would not include any calculations: the conditions are not quantizable in
my opinion. The moment you start quantizing, the situation reverts into a
perfect symmetry with the opposite, UNLESS there are hidden parameters
pointing to a NON-50-50 situation. The 'insufficient number of sampling'
objection is unsatisfactory; it works both ways.

John Mikes

- Original Message -
From: Stathis Papaioannou
To: [EMAIL PROTECTED]
Sent: Saturday, October 09, 2004 10:35 AM
Subject: re: observation selection effects


> Here is a similar paradox to the traffic lane example:
>
> In the new casino game called Flip-Flop, an odd number of players pay $1
> each to gather in individual cubicles and flip a coin (so no player can
> see what another player is doing). The game organisers tally up the
> results, and the result which is in the minority is said to be the
> Winning Flip, while the majority result is said to be the Losing Flip.
> For example, if there are 101 players and of these 53 flip heads while 48
> flip tails, tails is the Winning Flip and heads is the Losing Flip.
> Before the result of the tally is announced, each player must commit to
> either keep the result of their original coin flip, whether heads or
> tails, or switch to the opposite result. The casino then announces what
> the Winning Flip was, and players whose final result (however it was
> obtained) corresponds with this are paid $2, while the rest get nothing.
>
> The question now: is there anything to be gained by switching at the last
> step of this game? From the point of view of a typical player, it would
> seem that there is not: the Winning Flip is as likely to be heads as
> tails, and if he played the game repeatedly over time, he should expect
> to break even, whether he switches in the final step or not. On the other
> hand, it seems clear that if nobody switches, the casino is ahead, while
> if everybody switches, the players are ahead; so switching would seem to
> be a winning strategy for the players. This latter result is not due to
> any cooperation effect, as only those players who switch get the improved
> (on average) outcome.
>
> Stathis Papaioannou


re: observation selection effects

2004-10-09 Thread Stathis Papaioannou
Here is a similar paradox to the traffic lane example:

In the new casino game called Flip-Flop, an odd number of players pay $1
each to gather in individual cubicles and flip a coin (so no player can see
what another player is doing). The game organisers tally up the results, and
the result which is in the minority is said to be the Winning Flip, while
the majority result is said to be the Losing Flip. For example, if there are
101 players and of these 53 flip heads while 48 flip tails, tails is the
Winning Flip and heads is the Losing Flip. Before the result of the tally is
announced, each player must commit to either keep the result of their
original coin flip, whether heads or tails, or switch to the opposite
result. The casino then announces what the Winning Flip was, and players
whose final result (however it was obtained) corresponds with this are paid
$2, while the rest get nothing.

The question now: is there anything to be gained by switching at the last
step of this game? From the point of view of a typical player, it would seem
that there is not: the Winning Flip is as likely to be heads as tails, and
if he played the game repeatedly over time, he should expect to break even,
whether he switches in the final step or not. On the other hand, it seems
clear that if nobody switches, the casino is ahead, while if everybody
switches, the players are ahead; so switching would seem to be a winning
strategy for the players. This latter result is not due to any cooperation
effect, as only those players who switch get the improved (on average)
outcome.

Stathis Papaioannou
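
A simulation sketch of the game as described (101 players as in the
example; the game count is an arbitrary choice):

import random

N_PLAYERS, N_GAMES = 101, 20_000
keep_winners = switch_winners = 0
for _ in range(N_GAMES):
    heads = sum(random.random() < 0.5 for _ in range(N_PLAYERS))
    minority = min(heads, N_PLAYERS - heads)
    keep_winners += minority                # nobody switches: the minority wins
    switch_winners += N_PLAYERS - minority  # everybody switches: the ex-majority wins

print(keep_winners / N_GAMES, switch_winners / N_GAMES)
# roughly 46.5 vs 54.5 winners per 101 players: at $1 a seat and $2 a win,
# the casino is ahead unless the players switch.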


Re: Observation selection effects

2004-10-08 Thread "Hal Finney"
Stathis Papaioannou writes:
> Hal Finney writes:
> >Not to detract from your main point, but I want to point out that
> >sometimes there is ambiguity about how to count worlds, for example in
> >the many worlds interpretation of QM.  There are many examples of QM
> >based world-counting which seem to show that in most worlds, probability
> >theory should fail.
>
> I'm not sure what examples you have in mind here,

The specific kind of example goes like this.  Suppose you take a
vertically polarized photon and pass it through a polarizer that is
tilted slightly from the vertical.  Quantum mechanics predicts that
there is a high chance, say 99%, that the photon will pass through,
and a low chance, 1%, that it will not make it and be absorbed.

Now, the many worlds interpretation can be read to say that the universe
splits into two when this experiment occurs.  There are two possible
outcomes: either it passes through or it is absorbed.  So there are two
universes corresponding to the two results.

However, the universes are not of equal probability, according to QM.
One should be observed 99% of the time and the other only 1% of the
time.

The discrepancy gets worse if we imagine repeating the experiment multiple
times.  Each time the multiverse splits again in two.  If we did it, say,
20 times, there would be 2^20 or about 1 million universes.  In only
one of those universes did the photon pass through all 20 times,
yet that is the expected result.  By a counting argument, the chance
of getting that result is only one in a million since only one world
out of a million sees it.  This is the apparent contradiction between
the probability predictions of orthodox quantum mechanics and the MWI,
assuming that we count worlds in this way.
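
A two-line sketch of the numbers in this example:

n = 20
print(0.99 ** n)   # ~0.818: Born-rule probability that the photon passes all 20 times
print(1 / 2 ** n)  # ~9.5e-7: probability naive world-counting assigns the same outcome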


> but this is actually the 
> general point I was trying to make: probability theory doesn't seem to work 
> the same way in a many worlds cosmology, due to complications such as 
> observers multiplying and then not being able to access the entire 
> probability space after the event of interest.
>
> Consider these three examples:
>
> (A) In a single world cosmology, I claim that using my magic powers, I have 
> bestowed on you, and you alone, the ability to pick the winning numbers in 
> this week's lottery. If you then buy a lottery ticket and win the first 
> prize, I think it would be reasonable to concede that there was probably 
> some substance to my claim (if not magic powers, then at least an effective 
> way of cheating).
>
> (B) In a single world cosmology, I announce that using my magic powers, I 
> have bestowed on some lucky gambler the ability to pick the winning numbers 
> in this week's lottery. Now, someone does in fact win the first prize this 
> week, but that is not surprising, because there is almost always at least 
> one winner each week. I cannot reasonably claim to have helped the winner 
> unless I had somehow tagged him or otherwise uniquely identified him before 
> the lottery was drawn, as in (A).
>
> (C) In a many worlds cosmology, I seek you out as in (A) and make the same 
> claim about bestowing on you the ability to pick the winning numbers in this 
> week's lottery. You buy a ticket, and win first prize. Should you thank me 
> for helping you win, as in (A)? In general, no; this situation is actually 
> more closely analogous to (B) than to (A). For it is certain that at least 
> one future version of you will win, just as it is very likely that at least 
> one person will win in the single world example. I can only claim that I 
> helped you win if I somehow identified which version in which world is going 
> to win before the lottery is drawn, and that is impossible.

I'm afraid I don't agree with the conclusion in (C).  I definitely should
thank you.  To see this, let's make my thanks a little more sincere,
in the form of a payment.  Suppose I agree in advance to pay you $1000
if you succeed in helping me win the lottery.  I say that is a wise
decision on my part.  It doesn't cost me anything if you don't help,
and if you do have some way of rigging the lottery then I can easily
afford to pay you this modest sum out of my winnings.

But I think your reasoning suggests that it is unwise, since I will win
anyway, so why should I pay anything to you?  I don't need to thank you
in this way.  Do you agree that this follows from your reasoning?

Hal Finney



Re: Observation selection effects

2004-10-08 Thread Stathis Papaioannou
Hal Finney writes:
Not to detract from your main point, but I want to point out that
sometimes there is ambiguity about how to count worlds, for example in
the many worlds interpretation of QM.  There are many examples of QM
based world-counting which seem to show that in most worlds, probability
theory should fail.
I'm not sure what examples you have in mind here, but this is actually the 
general point I was trying to make: probability theory doesn't seem to work 
the same way in a many worlds cosmology, due to complications such as 
observers multiplying and then not being able to access the entire 
probability space after the event of interest.

Consider these three examples:
(A) In a single world cosmology, I claim that using my magic powers, I have 
bestowed on you, and you alone, the ability to pick the winning numbers in 
this week's lottery. If you then buy a lottery ticket and win the first 
prize, I think it would be reasonable to concede that there was probably 
some substance to my claim (if not magic powers, then at least an effective 
way of cheating).

(B) In a single world cosmology, I announce that using my magic powers, I 
have bestowed on some lucky gambler the ability to pick the winning numbers 
in this week's lottery. Now, someone does in fact win the first prize this 
week, but that is not surprising, because there is almost always at least 
one winner each week. I cannot reasonably claim to have helped the winner 
unless I had somehow tagged him or otherwise uniquely identified him before 
the lottery was drawn, as in (A).

(C) In a many worlds cosmology, I seek you out as in (A) and make the same 
claim about bestowing on you the ability to pick the winning numbers in this 
week's lottery. You buy a ticket, and win first prize. Should you thank me 
for helping you win, as in (A)? In general, no; this situation is actually 
more closely analogous to (B) than to (A). For it is certain that at least 
one future version of you will win, just as it is very likely that at least 
one person will win in the single world example. I can only claim that I 
helped you win if I somehow identified which version in which world is going 
to win before the lottery is drawn, and that is impossible.

Stathis Papaioannou



Re: Observation selection effects

2004-10-07 Thread "Hal Finney"
Stathis Papaioannou writes:
> Suppose that according to X-Theory, in the next minute the world will split 
> into one million different versions, of which one version will be the same 
> sort of orderly world we are used to, while the rest will be worlds in which 
> it will be immediately obvious to us that very strange things are happening, 
> eg. dragons materialising out of thin air, furniture levitating, the planet 
> Jupiter hurling itself into the sun, etc. I think it is reasonable to expect 
> that if X-Theory is correct, we will very likely see these bizarre and 
> frightening things happen in the next minute.

Not to detract from your main point, but I want to point out that
sometimes there is ambiguity about how to count worlds, for example in
the many worlds interpretation of QM.  There are many examples of QM
based world-counting which seem to show that in most worlds, probability
theory should fail.

> Now, here we are, a minute later, and nothing bizarre has happened after 
> all. Does this mean that X-Theory is probably wrong? Perhaps not. After all, 
> the theory did predict with 100% certainty that one version of the world 
> will continue as before. The objection to this will no doubt be, "yes, but 
> how likely is it that WE end up in that particular version?" And the answer 
> to this objection is, "it is 100% certain that WE end up in that particular 
> version; just as it is 100% certain that 999,999 copies of us end up in the 
> bizarre versions". Those 999,999 copies are not continuing to type away as I 
> am, because they are running around in a panic.

Would you agree that those who assume that such an outcome (no bizarre
events) disproves X-theory will be right more often than they are wrong?
Hence adopting such a policy will generally be successful, and beings
who base their decisions on such a rule will become more numerous and
influential in the multiverse.

Even though there are universes where this rule (that is, the rule "a
theory which predicts something we don't see is probably an incorrect
theory") does not work, still it is a good rule to follow.

Hal Finney



Re: Observation selection effects

2004-10-07 Thread Jesse Mazer
Stathis Papaioannou wrote:
Sorry Jesse, I can see in retrospect that I was insulting your intelligence 
as a rhetorical ploy, and we shouldn't stoop to that level of debate on 
this list.
No problem, I wasn't insulted...
You say that you "must incorporate whatever information you have, but no 
more" in the >envelopes/money example. The point I was trying to make with 
my envelope/drug example is that you >need to take into account the fact 
that the amount in each envelope is fixed
Well, I think that's like saying that in the videotaped coinflip example, 
you need to take into account the fact that the outcome of the flip is 
already fixed. I don't think it matters whether it's "really" fixed or not, 
since probabilities are about your knowledge rather than objective reality 
(they're epistemological, not ontological), and since you are equally 
ignorant of the outcome regardless of whether the flip happens in realtime 
or on video, your probabilistic reasoning should be the same.

But you have passed over the final point in my last post, which I now 
restate:

(1) The original game: envelopes A and B, you know one has double the amount 
of the other, but you don't know which. You open A and find $100. Should 
you switch to B, which may have either $50 or $200?

(2) A variation: everything is the same, up to the point where you are 
pondering whether to switch to envelope B, when the millionaire walks in, 
and hidden from view, flips a coin to decide whether to replace whatever 
was originally in envelope B with either double or half the sum in envelope 
A, i.e. either $50 or $200.
Well, my argument about the two-envelope paradox all along has been that you 
need to think about the probability distribution the millionaire uses to 
stuff the two envelopes, and that once you do that the apparent paradox 
disappears. So we need to consider what probability distributions the 
millionaire used in these examples. Let's say, for example, that the 
millionaire flips a coin to decide whether to put $50 or $100 in one 
envelope, and then puts double that amount in the other. In that case, if I 
pick an envelope randomly, there is a 1/4 chance I'll find $50 inside, a 1/2 
chance I'll find $100 inside, and a 1/4 chance I'll find $200 inside. If I 
find either $50 or $200, I know with complete certainty how much the other 
envelope contains; but if I find $100, then from my point of view there's a 
1/2 chance the other envelope contains $50 and a 1/2 chance the other 
envelope contains $200.

Now, if you assume that in game (2) the millionaire *only* replaces the 
amount in the second envelope with a new amount based on a coinflip *if* I 
found $100 in the first envelope, but doesn't mess with the second envelope 
if I found $50 or $200 in the first one, then both games are exactly equal, 
from my point of view. In both cases, whenever I find $100 in the first 
envelope, my average expected return from switching would be (0.5)($50) + 
(0.5)($200) = $125, so it's better to switch.

On the other hand, if in game (2) the millionaire replaced the amount in the 
second envelope with a new amount based on a coinflip *regardless* of how 
much I found in the first envelope, this would change my strategy if I found 
either $50 or $200 in the first envelope. In this case, it will be to my 
advantage to switch no matter how much I find in the first envelope, since 
my average expected return from switching will always be 1.25 times however 
much I found in that envelope.

In contrast, in game (1) my average expected return from switching would be 
$100 if I found $50 in the first envelope and $100 if I found $200 in the 
first envelope, while my average expected return from switching if I found 
$100 would still be $125, so my total average expected return from switching 
regardless of what I find in the first envelope is (1/4)($100) + (1/2)($125) 
+ (1/4)($100) = $112.50, while my average expected return from sticking with 
my first choice regardless of how much I find is (1/4)($50) + (1/2)($100) + 
(1/4)($200) = $112.50 as well. Again, in game (1) the resolution of the 
apparent "paradox" must be that for any possible probability distribution 
the millionaire uses to pick the amounts in the envelopes, your average 
expected return from sticking with your first choice if you don't open it to 
see how much is inside must always be equal to your average expected 
winnings if you decide to switch without first checking how much was inside 
your first choice.
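
A short enumeration sketch of game (1) under this stuffing rule (the
$50/$100 coin flip is the distribution assumed above):

from itertools import product

stick = switch = 0.0
for low, open_low_first in product((50, 100), (True, False)):
    # Four equally likely cases: which pair was stuffed, and which envelope
    # of the pair (low, 2*low) was opened first.
    first, other = (low, 2 * low) if open_low_first else (2 * low, low)
    stick += first / 4
    switch += other / 4

print(stick, switch)  # 112.5 and 112.5: the two strategies tie overall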

Now, which game would you prefer to play, (1) or (2)? They are not the 
same.
With the conditions I mentioned--the millionaire flips a coin to decide 
whether to put $50 or $100 in one envelope, then puts double in the other, 
and in game (2) he only replaces the amount in the second envelope if you 
find $100 in the first envelope you choose--then the two games actually are 
exactly the same, in terms of probabilities and average expected returns. If 
you want to suggest

Re: Observation selection effects

2004-10-07 Thread Stathis Papaioannou
This has been an interesting thread so far, but let me bring it back to 
topic for the Everything List. It has been assumed in most posts to this 
list over the years that our current state must be a "typical" state in some 
sense. For example, our world has followed consistent laws of physics for as 
long as anyone has been able to determine - the old "no white rabbit worlds" 
observation. In the face of ensemble type theories such as the MWI of QM, 
this is seen as presenting a problem: if "anything that can happen, does 
happen", why does our experience of the world include only a very limited, 
orderly subset of this "anything"?

There have been many attempts to answer the above question, eg. see Russell 
Standish's paper "Why Ockham's Razor?" But does our current orderly world 
imply that most possible worlds are orderly? It seems to me that there is an 
asymmetry between (a) what we can expect for the future, and (b) what we can 
deduce about the probability implicit in (a) from what actually does happen.

Suppose that according to X-Theory, in the next minute the world will split 
into one million different versions, of which one version will be the same 
sort of orderly world we are used to, while the rest will be worlds in which 
it will be immediately obvious to us that very strange things are happening, 
eg. dragons materialising out of thin air, furniture levitating, the planet 
Jupiter hurling itself into the sun, etc. I think it is reasonable to expect 
that if X-Theory is correct, we will very likely see these bizarre and 
frightening things happen in the next minute.

Now, here we are, a minute later, and nothing bizarre has happened after 
all. Does this mean that X-Theory is probably wrong? Perhaps not. After all, 
the theory did predict with 100% certainty that one version of the world 
will continue as before. The objection to this will no doubt be, "yes, but 
how likely is it that WE end up in that particular version?" And the answer 
to this objection is, "it is 100% certain that WE end up in that particular 
version; just as it is 100% certain that 999,999 copies of us end up in the 
bizarre versions". Those 999,999 copies are not continuing to type away as I 
am, because they are running around in a panic.

The above is simply a version of the Anthropic Principle as applied to 
intelligent life in the universe. A particular ensemble theory may predict 
that it is overwhelmingly unlikely that a particular universe will allow the 
development of intelligent life. Does the fact that we are here, apparently 
intelligent and alive, count as evidence against that theory? No, because 
the theory predicts that although unlikely, it is certain to happen in at 
least ONE universe, and obviously that universe will be the one we find 
ourselves in.

Stathis Papaioannou



Re: Observation selection effects

2004-10-07 Thread Stathis Papaioannou
Addition to my last post:
(1) The original game: envelopes A and B, you know one has double the amount 
of the other, but you don't know which. You open A and find $100. Should 
you switch to B, which may have either $50 or $200?

(2) A variation: everything is the same, up to the point where you are 
pondering whether to switch to envelope B, when the millionaire walks in, 
and hidden from view, flips a coin to decide whether to replace whatever 
was originally in envelope B with either double or half the sum in envelope 
A, i.e. either $50 or $200.
Say one envelope contains $x and the other $2x. If you keep the first 
envelope in game (2), and if you keep the first one OR switch in game (1), 
you should expect to win $1.5x. If you switch in game (2), you should expect 
to win 0.25*($0.5x + $2x + $x +$4x) = $1.875x.

Stathis Papaioannou


Re: Observation selection effects

2004-10-07 Thread Stathis Papaioannou

Jesse Mazer wrote:
I don't think that's a good counterargument, because the whole concept of 
probability is based on ignorance...

No, I don't agree! Probability is based in a sense on ignorance, but you 
must make full use of such information as you do have.
Of course--I didn't mean it was based *only* on ignorance, you must 
incorporate whatever information you have into your estimate of the 
probability, but no more. Your argument violates the "but no more" rule, 
since it incorporates the knowledge of an observer who has seen how much 
money both envelopes contain, while I only know how much money one envelope 
contains.
Sorry Jesse, I can see in retrospect that I was insulting your intelligence 
as a rhetorical ploy, and we shouldn't stoop to that level of debate on this 
list.

You say that you "must incorporate whatever information you have, but no 
more" in the envelopes/money example. The point I was trying to make with my 
envelope/drug example is that you need to take into account the fact that 
the amount in each envelope is fixed, but again you are right, it was not 
exactly analogous. But you have passed over the final point in my last post, 
which I now restate:

(1) The original game: envelopes A and B, you know one has double the amount 
of the other, but you don't know which. You open A and find $100. Should you 
switch to B, which may have either $50 or $200?

(2) A variation: everything is the same, up to the point where you are 
pondering whether to switch to envelope B, when the millionaire walks in, 
and hidden from view, flips a coin to decide whether to replace whatever was 
originally in envelope B with either double or half the sum in envelope A, 
i.e. either $50 or $200.

Now, which game would you prefer to play, (1) or (2)? They are not the same. 
In game (1), if the $100 in A is actually the higher amount, if you switch 
you will get $50 for sure; but in game (2) if the $100 is actually the 
higher amount you have a 50% chance of getting $200 if you switch. It works 
in the reverse way if the $100 is the lower amount - you could lose $50 
rather than gain $100 - but the possible gain outweighs the possible loss.

Look at it another way: game (2) is actually asymmetrical. The amount you 
win if you play it many times will be different if you switch, because you 
really do have more to gain than to lose by switching (and the millionaire 
will have to pay out more on average). On the other hand, intuitively, you 
can see that your expected gains in game (1) should be the same whether you 
switch or not. The paradox comes from reasoning as if you are playing game 
(2) when you are really playing game (1).

Stathis Papaioannou



Re: Observation selection effects

2004-10-07 Thread Jesse Mazer
Stathis Papaioannou wrote:
Jesse Mazer wrote:
I don't think that's a good counterargument, because the whole concept of 
probability is based on ignorance...

No, I don't agree! Probability is based in a sense on ignorance, but you 
must make full use of such information as you do have.
Of course--I didn't mean it was based *only* on ignorance, you must 
incorporate whatever information you have into your estimate of the 
probability, but no more. Your argument violates the "but no more" rule, 
since it incorporates the knowledge of an observer who has seen how much 
money both envelopes contain, while I only know how much money one envelope 
contains.

If you toss a fair coin, is Pr(heads)=0.5? According to your argument, it 
could actually be anything between zero and one, because it is possible I 
am lying about it being a fair coin!
My argument implies nothing of the sort. But your argument would seem to 
imply that if I am watching a videotape of a fair coin toss, then if someone 
else has already watched the tape, it would be permissible to incorporate 
their knowledge of the outcome of the toss into a probability calculation, 
even if I myself don't have this knowledge.

Here is another "two envelope" example:
Two envelopes, A and B, contain two doses of the drug Lifesavium, the 
Correct Dose and the Half Dose. If you give the patient more than 1.5 times 
the Correct Dose you will certainly kill him, while if you give him the 
Half Dose you will save his life, although he won't make an immediate 
recovery as he would if you gave him the Correct Dose. If you don't give 
him any medication at all, again, he will surely die. Once you open an 
envelope, the medication is in such a form that you must give the full 
dose, or nothing.

You are faced with the two envelopes, the above information and the sick 
patient, with no other help, on a desert island. There is one further 
complication: if you open the first envelope, and then decide to open the 
second envelope, you must destroy the contents of the first envelope in 
order to get to the second envelope.

OK: so you open envelope A and find that it contains 10mg of Lifesavium. 
You don't know whether this is The Correct Dose or the Half Dose; so 
envelope B may have either 5mg or 20mg, right? And if 10mg is the Correct 
Dose, then if you discard envelope A in favour of envelope B, there is a 
50% chance that envelope B will have double the Correct Dose and you will 
kill the patient - so you had better stick with envelope A, right?

I think you can see the error in the above argument. You already know that 
the amount in each envelope is fixed, so even though you have no idea of 
the actual dosages involved, or which envelope contains which dose, even 
after opening the first envelope, there is NO WAY you can give the patient 
an overdose. There is no way envelope B can contain 20mg of Lifesavium, but 
even though you cannot know this, you can use the above reasoning to deduce 
that there is no expected benefit from choosing a strategy of switching or 
not switching - as you can also see intuitively from the symmetry of the 
situation, whether you choose envelope A or B first.
This case is not analogous to the two-envelope problem, because in this case 
it is part of *my* knowledge that one envelope contains the Correct dose and 
the other contains the Half dose, and neither contains a Double dose. In 
contrast, your analysis of the two-envelope problem relied on information I 
don't have, namely whether the two envelopes contained $50 and $100 or $25 
and $50.

Suppose I know that the envelope-stuffer flipped a fair coin to decide 
whether to put $25 or $50 in one envelope, then put double that amount in 
the other. I randomly choose one envelope and open it, and find $50. Do you 
disagree that my average expected return from switching would now be 
(0.5)(25) + (0.5)(100) = 62.5? If this experiment was repeated many times 
and we looked only at the subset of cases where the first envelope I opened 
contained $50 and I chose to switch, wouldn't my average winnings in this 
subset of cases be $62.50?
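
A simulation sketch of exactly this subset experiment (the trial count is
an arbitrary choice):

import random

total, hits = 0.0, 0
for _ in range(500_000):
    low = random.choice((25, 50))                    # coin-chosen smaller amount
    first, other = random.sample([low, 2 * low], 2)  # open one envelope at random
    if first == 50:          # keep only the trials where I found $50 and switched
        hits += 1
        total += other
print(total / hits)  # ~62.5 = (0.5)(25) + (0.5)(100)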

In the game with the envelopes and the money, the analogous error is to 
think that there is a possibility of doubling your money when you have 
actually picked the envelope containing the larger sum first.
But *you* don't know that the envelope you picked was the one with the 
larger sum. This is akin to arguing that it is an "error" to think there is 
a possibility of winning if you bet $100 on heads in a videotaped coin toss, 
since someone who's already watched the tape knows it comes up tails, even 
though you don't know that. Would you indeed say it's an error to believe my 
average expected return is $50 in this case?

Jesse



Re: Observation selection effects

2004-10-07 Thread Stathis Papaioannou
Jesse Mazer wrote:
I don't think that's a good counterargument, because the whole concept of 
probability is based on ignorance...

No, I don't agree! Probability is based in a sense on ignorance, but you 
must make full use of such information as you do have. If you toss a fair 
coin, is Pr(heads)=0.5? According to your argument, it could actually be 
anything between zero and one, because it is possible I am lying about it 
being a fair coin!

Here is another "two envelope" example:
Two envelopes, A and B, contain two doses of the drug Lifesavium, the 
Correct Dose and the Half Dose. If you give the patient more than 1.5 times 
the Correct Dose you will certainly kill him, while if you give him the Half 
Dose you will save his life, although he won't make an immediate recovery as 
he would if you gave him the Correct Dose. If you don't give him any 
medication at all, again, he will surely die. Once you open an envelope, the 
medication is in such a form that you must give the full dose, or nothing.

You are faced with the two envelopes, the above information and the sick 
patient, with no other help, on a desert island. There is one further 
complication: if you open the first envelope, and then decide to open the 
second envelope, you must destroy the contents of the first envelope in 
order to get to the second envelope.

OK: so you open envelope A and find that it contains 10mg of Lifesavium. You 
don't know whether this is The Correct Dose or the Half Dose; so envelope B 
may have either 5mg or 20mg, right? And if 10mg is the Correct Dose, then if 
you discard envelope A in favour of envelope B, there is a 50% chance that 
envelope B will have double the Correct Dose and you will kill the patient - 
so you had better stick with envelope A, right?

I think you can see the error in the above argument. You already know that 
the amount in each envelope is fixed, so even though you have no idea of the 
actual dosages involved, or which envelope contains which dose, even after 
opening the first envelope, there is NO WAY you can give the patient an 
overdose. There is no way envelope B can contain 20mg of Lifesavium, but 
even though you cannot know this, you can use the above reasoning to deduce 
that there is no expected benefit from choosing a strategy of switching or 
not switching - as you can also see intuitively from the symmetry of the 
situation, whether you choose envelope A or B first.

In the game with the envelopes and the money, the analogous error is to 
think that there is a possibility of doubling your money when you have 
actually picked the envelope containing the larger sum first. As I said in 
my previous post, if this assumption is valid, then you are playing a 
different game in which our eccentric millionaire flips a coin to decide 
(without telling you which) if he will put double or half the sum you find 
on opening envelope A into envelope B. You would then certainly be better 
off, on average, if you switched envelopes.

Stathis Papaioannou



RE: Observation selection effects

2004-10-06 Thread Jesse Mazer
in my last response to Brent Meeker I wrote:
As for your statement that "P(s)=exp(-x) -> P(l)=exp(-x/r)", that can't be 
true. It doesn't make sense that the value of the second probability 
distribution at x would be exp(-x/r), since the range of possible values 
for the amount in that envelope is 0 to infinity, but the integral of 
exp(-x/r) from 0 to infinity is not equal to 1, so that's not a valid 
probability distribution.
Thinking a little more about this I realized Brent Meeker was almost right 
about the second probability distribution in this case--it would actually be 
(1/r)*e^(-x/r), so he was just off by a constant factor. In general, if the 
probability distribution for the envelope with the smaller amount is f(x), 
then the probability distribution for the envelope with r times that amount 
should be (1/r)*f(x/r)... this ensures that if you integrate f(x) over an 
interval (a,b), giving the probability the smaller envelope contains an 
amount between a and b, then this will be equal to the integral of 
(1/r)*f(x/r) over the interval (r*a, r*b).

Jesse
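
A numerical sketch checking this correction (r = 2 and the midpoint-rule
integrator are arbitrary choices):

import math

r = 2.0
f = lambda x: math.exp(-x)                # density of the smaller amount
g = lambda x: (1 / r) * math.exp(-x / r)  # proposed density of r times that amount

def integrate(h, a, b, n=100_000):
    # plain midpoint rule; good enough for a sanity check
    step = (b - a) / n
    return sum(h(a + (i + 0.5) * step) for i in range(n)) * step

print(integrate(g, 0, 100))                            # ~1.0: a valid density
print(integrate(f, 1, 3), integrate(g, r * 1, r * 3))  # equal interval masses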



Re: Observation selection effects

2004-10-06 Thread Jesse Mazer
Stathis Papaioannou wrote:
The problem is that you are reasoning as if the amount in each envelope can 
vary during the game, whereas in fact it is fixed. Suppose envelope A 
contains $100 and envelope B contains $50. You open A, see the $100, and 
then reason that B may contain either $50 or $200, each being equally 
likely. In fact, B cannot contain $200, even though you don't know this 
yet. It is easy enough for an external observer (who does know the contents 
of each envelope) to calculate the probabilities: if you keep the first 
envelope, your expected gain is 0.5*$100 + 0.5*$50 = $75. If you switch, 
your expected gain is 0.5*$100 (if you open B first)  + 0.5*$50 (if you 
open A first) = $75, as before.

Ignorance of the actual amounts may lead you to speculate that one of the 
envelopes may contain $200, but it won't make the money magically 
materialise! And even if you don't know the actual amounts, the above 
analysis should convince you that nothing is to be gained by switching 
envelopes.

If the game changes so that, once you have opened the first envelope, the 
millionaire decides by flipping a coin whether he will put half or double 
that amount in the second envelope, then you are actually better off 
switching.
I don't think that's a good counterargument, because the whole concept of 
probability is based on ignorance--if you were omniscient, for example, you 
wouldn't have a need for probabilities at all. If someone puts $1000 in a 
blue envelope and then flips a coin to decide whether to put $3000 in a red 
envelope or to leave it empty, my expected gain from picking the red 
envelope should be $1500--it doesn't make sense to say that from the 
point of view of someone who saw the envelopes being stuffed, it is already 
certain whether the red envelope contains the money, therefore *my* expected 
gain from picking the red envelope should be either $3000 or zero. My 
expected gain is based on my own ignorance of the outcome of the coin toss, 
information that I don't have access to shouldn't play a role. Similarly, an 
external observer who knows the content of both envelopes should play no 
role in the calculation of my expected gain from switching in the 
two-envelope problem. If I open the envelope and find $50, I don't know 
whether the other envelope contains $25 or $100, so that information cannot 
be used when calculating my expected gain from switching. But as I argued 
before, if I know the probability distribution the envelope-stuffer used to 
pick the amount in the envelope with less money, then seeing the amount in 
the envelope I open will allow me to refine my estimate of the probability 
it's the envelope with less money, there's no possible distribution the 
envelope-stuffer could use that would ensure that no matter how much I found 
in the first envelope, the other envelope would have a 50% chance of 
containing double that and a 50% chance of containing half that.

Jesse



Re: Observation selection effects

2004-10-06 Thread Stathis Papaioannou
Norman Samish writes:
QUOTE-
Assume an eccentric millionaire offers you your choice of either of two
sealed envelopes, A or B, both containing money.  One envelope contains
twice as much as the other.  After you choose an envelope you will have the
option of trading it for the other envelope.
Suppose you pick envelope A.  You open it and see that it contains $100.
Now you have to decide if you will keep the $100, or will you trade it for
whatever is in envelope B?
You might reason as follows: since one envelope has twice what the other one
has, envelope B either has 200 dollars or 50 dollars, with equal
probability.  If you switch, you stand to either win $100 or to lose $50.
Since you stand to win more than you stand to lose, you should switch.
-ENDQUOTE
The problem is that you are reasoning as if the amount in each envelope can 
vary during the game, whereas in fact it is fixed. Suppose envelope A 
contains $100 and envelope B contains $50. You open A, see the $100, and 
then reason that B may contain either $50 or $200, each being equally 
likely. In fact, B cannot contain $200, even though you don't know this yet. 
It is easy enough for an external observer (who does know the contents of 
each envelope) to calculate the probabilities: if you keep the first 
envelope, your expected gain is 0.5*$100 + 0.5*$50 = $75. If you switch, 
your expected gain is 0.5*$100 (if you open B first)  + 0.5*$50 (if you open 
A first) = $75, as before.

Ignorance of the actual amounts may lead you to speculate that one of the 
envelopes may contain $200, but it won't make the money magically 
materialise! And even if you don't know the actual amounts, the above 
analysis should convince you that nothing is to be gained by switching 
envelopes.
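
That break-even claim is easy to check numerically; here is a minimal 
simulation sketch of the $100/$50 example above:

import random

trials, keep, switch = 100000, 0.0, 0.0
for _ in range(trials):
    first, second = random.sample([100, 50], 2)   # open an envelope at random
    keep += first                                 # payoff if you keep it
    switch += second                              # payoff if you switch
print(keep / trials, switch / trials)             # both ~75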

If the game changes so that, once you have opened the first envelope, the 
millionaire decides by flipping a coin whether he will put half or double 
that amount in the second envelope, then you are actually better off 
switching.

Stathis Papaioannou



RE: [Fwd: RE: Observation selection effects]

2004-10-05 Thread Eric Cavalcanti
On Tue, 2004-10-05 at 19:31, Brent Meeker wrote:

> >I always forget to reply-to-all in this list.
> >So below goes my reply which went only to Hal Finney.
> >
> >-Forwarded Message-
> >> From: Eric Cavalcanti <[EMAIL PROTECTED]>

> >> > Think about if the odd number of players was exactly one.  You're
> >> > guaranteed to have the Winning Flip before you switch.
> 
> No, you're guaranteed NOT to be in the winning flip.
> 
> >> >
> >> > Then think about what would happen if the odd number of players was
> >> > three.  Then you have a 3/4 chance of having the Winning Flip before
> >> > you switch.  Only if the other two players' flips both disagree with
> >> > yours will you not have the Winning Flip, and there is only a 1/4
> >> > chance of that happening.
> >
> >Exactly.
> >
> >It is interesting to note that, even though you are
> >more likely to be in the Winning Flip, there is no
> >disadvantage in Switching. To understand that, we can
> >look at the N=3 case, and see that if I am in the
> >Winning Flip with someone else, then if I change I
> >will still be in the Winning Flip with the other person.
> >
> >As opposed to Stathis' initial thought, even though the
> >Winning Flip is indeed as likely to be Heads as Tails,
> >each individual is more likely to be in the
> >Winning Flip than in the Losing Flip in any given run.
> >
> >So this would never make it into a Casino game,
> >because the house would lose money in the long run.

> I think you've confused the definitions of "winning flip" and
> "losing flip".  The "winning flip" is the *minority at the time of
> the flip*.  For N=3 you can't be in the winning flip with someone
> else at the time of the flip - but you can switch to it.

Yes, you're right.
Hal and I have confused the definitions. It is still
not a paradox, though. You are more likely to be
in the Losing Flip.

So this could indeed be a Casino game.
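
A quick simulation of the N=3 case bears this out (a sketch; it takes the
Winning Flip to be the minority result of the *initial* flips, as the game
was stated):

import random

def win_rate(switch, trials=100000):
    wins = 0
    for _ in range(trials):
        flips = [random.randrange(2) for _ in range(3)]
        ones = sum(flips)
        winning = 1 if ones < 3 - ones else 0    # face held by fewer players
        final = 1 - flips[0] if switch else flips[0]
        wins += (final == winning)
    return wins / trials

print("keep:  ", win_rate(False))   # ~0.25: you usually hold the Losing Flip
print("switch:", win_rate(True))    # ~0.75

Before switching you hold the Winning Flip only about a quarter of the time,
just as Brent says, and switching reverses the odds.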

Eric.




RE: Observation selection effects

2004-10-05 Thread Jesse Mazer
-Original Message-
From: Jesse Mazer [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 05, 2004 8:45 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: RE: Observation selection effects
If the range of the smaller amount is infinite, as in my P(x)=1/e^x
example, then it would no longer make sense to say that the range of the
larger amount is r times larger.

Sure it does; r*inf=inf.  P(s)=exp(-x) -> P(l)=exp(-x/r)
But it would make just as much sense to say that the second range is 3r 
times wider, since by the same logic 3r*inf=inf. In other words, this step 
in your proof doesn't make sense:

In other words, the range of possible
amounts is such that the larger and smaller amount do not overlap.
Then, for any interval of the range (x,x+dx) for the smaller
amount with probability p, there is a corresponding interval (r*x,
r*x+r*dx) with probability p for the larger amount.  Since the
latter interval is longer by a factor of r
P(l|m)/P(s|m) = r ,
In other words, no matter what m is, it is r-times more likely to
fall in a large-amount interval than in a small-amount interval.
As for your statement that "P(s)=exp(-x) -> P(l)=exp(-x/r)", that can't be 
true. It doesn't make sense that the value of the second probability 
distribution at x would be exp(-x/r), since the range of possible values for 
the amount in that envelope is 0 to infinity, but the integral of exp(-x/r) 
from 0 to infinity is not equal to 1, so that's not a valid probability 
distribution.
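
The missing factor is easy to exhibit numerically: the integral of exp(-x/r)
over (0, infinity) comes out to r, not 1 (a quick check, with r = 2 assumed
for illustration):

from math import exp

r, h, n = 2.0, 0.001, 100000    # integrate exp(-x/r) from 0 out to x = 100
total = h * sum(exp(-(i + 0.5) * h / r) for i in range(n))
print(total)    # ~2.0 = r, so the normalized density is (1/r)*exp(-x/r)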

Also, now that I think more about it I'm not even sure the step in your 
proof I quoted above actually makes sense even in the case of a probability 
distribution with finite range. What exactly does the equation 
"P(l|m)/P(s|m) = r" mean, anyway? It can't mean that if I choose an envelope 
at random, before I even open it I can say that the amount m inside is r 
times more likely to have been picked from the larger distribution, since I 
know there is a 50% chance I will pick the envelope whose amount was picked 
from the larger distribution. Is it supposed to mean that if we let the 
number of trials go to infinity and then look at the subset of trials where 
the envelope I opened contained m dollars, it is r times more likely that 
the envelope was picked from the larger distribution on any given trial? 
This can't be true for every specific m--for example, if the smaller 
distribution had a range of 0 to 100 and the larger had a range of 0 to 200, 
then if I set m=150, in every single trial where I found 150 dollars in the 
envelope it must have been selected from the larger distribution. You could 
do a weighted average over all possible values of m, like "integral over all 
possible values of m of P('I found m dollars in the envelope I 
selected')*P('the envelope I selected had an amount taken from the smaller 
distribution' | 'I found m dollars in the envelope I selected')", which you 
could write as "integral over m of P(m)*P(s|m)". But I don't think it would 
be true that the ratio "integral over m of P(m)*P(l|m)"/"integral over m of 
P(m)*P(s|m)" would be equal to r; in fact I think both integrals would 
always come out to 1/2, so the ratio would always be 1...and even if I'm 
wrong, replacing P(l|m)/P(s|m) with this ratio of integrals would mess up 
the rest of your proof.
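
That 1/2 figure can be checked by simulation for any particular stuffing
distribution (a sketch; the uniform distribution below is just an assumed
example):

import random

r, trials, small_hits = 2.0, 200000, 0
for _ in range(trials):
    x = random.uniform(0, 100)       # smaller amount (assumed distribution)
    m = random.choice((x, r * x))    # open one of the two envelopes at random
    small_hits += (m == x)           # opened envelope held the smaller amount
# this is the m-averaged P(s|m); by symmetry it is 1/2 whatever r is,
# so the ratio of the two weighted integrals is 1, not r
print(small_hits / trials)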

Jesse



RE: Observation selection effects

2004-10-05 Thread Brent Meeker


>-Original Message-
>From: Jesse Mazer [mailto:[EMAIL PROTECTED]
>Sent: Tuesday, October 05, 2004 8:45 PM
>To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
>Subject: RE: Observation selection effects
>
>
>Brent Meeker wrote:
>
>>>-Original Message-
>>>From: Jesse Mazer [mailto:[EMAIL PROTECTED]
>>>Sent: Tuesday, October 05, 2004 6:33 PM
>>>To: [EMAIL PROTECTED]
>>>Cc: [EMAIL PROTECTED]
>>>Subject: RE: Observation selection effects
>>>
>>>Brent Meeker wrote:
>>>
>>>>On reviewing my analysis (I hadn't looked at it for about four
>>>>years), I think it works without the restrictive assumption that
>>>>the range of distributions not overlap.  It's still necessary that
>>>>P(l|m)+P(s|m)=1 and P(l|m)/P(s|m)=r, which is all that is required
>>>>for the proof to go thru.
>>>
>>>Hmm, I think I misread your analysis, I had somehow
>>>gotten the idea that you
>>>were assuming a uniform probability of the
>>>envelope-stuffer picking any
>>>number between x1 and x2 for the first envelope, but I
>>>see this isn't
>>>necessary, so your proof is a lot more general than I
>>>thought. Still not
>>>completely general though, because the envelope-stuffer
>>>can also use a
>>>distribution which has no upper bound x2 on possible
>>>amounts to put in the
>>>first envelope, like the one I mentioned in my last post:
>>>
>>>>For example, he could use a
>>>>distribution that
>>>>gives him a 1/2 probability of putting between 0 and 1
>>>>dollars in one
>>>>envelope (assume the dollar amounts can take any
>>>>positive real value, and he
>>>>uses a flat probability distribution to pick a number
>>>>between 0 and 1), a
>>>>1/4 probability of putting in between 1 and 2 dollars, a
>>>>1/8 probability of
>>>>putting in between 2 and 3 dollars, and in general a
>>>>1/2^n probability of
>>>>putting in between n-1 and n dollars. This would ensure
>>>>there was some
>>>>finite probability that *any* positive real number could
>>>>be found in either
>>>>envelope.
>>>
>>>Likewise, he could also use the continuous probability distribution
>>>P(x) = 1/e^x (whose integral from 0 to infinity is 1). And if you want
>>>to restrict the amounts in the envelope to positive integers, he could
>>>use a distribution which gives a 1/2^n probability of putting exactly
>>>n dollars in the first envelope.
>>>
>>>Jesse
>>
>>That doesn't matter if I can do without the no-overlap
>>assumption - which I think I can.  Do you see a flaw?
>>
>>When I first did it I was drawing pictures of distributions and I
>>thought I needed non-overlap to assert that P(l|m)/P(s|m)=r and
>>P(l|m)+P(s|m)=1.  But now it seems that the last follows just from
>>the fact that the amount was either from the larger or the smaller.
>>The ratio doesn't depend on the ranges not overlapping, it just
>>depends on the fact that the larger amount's distribution must be a
>>copy of the smaller amount's distribution stretched by a factor of r.
>
>But in order for the range of the larger amount to be
>double that of the
>smaller amount, you need to assume the range of the
>smaller amount is
>finite.

I only had to assume it was finite to avoid overlap in the ranges.
Dropping that assumption, the range of the lower amount can be
infinite.

>If the range of the smaller amount is infinite,
>as in my P(x)=1/e^x
>example, then it would no longer make sense to say that
>the range of the
>larger amount is r times larger.

Sure it does; r*inf=inf.  P(s)=exp(-x) -> P(l)=exp(-x/r)

>
>Also, what if the envelope-stuffer is only picking from
>a finite set of
>numbers rather than a continuous range? For example, he
>might have a 1/3
>chance of putting 100, 125 or 150 dollars in the first
>envelope, and then he
>would double that amount for the second envelope. In
>this case, your
>assumption "no matter what m is, it is r-times more
>likely to fall in a
>large-amount interval than in a small-amount interval"
>wouldn't seem to be
>valid, since there are only six possible values of m
>here (100, 125, 150,
>200, 250, or 300) and three of them are in the smaller range.

Yes, that's true.  The proof depends on smooth, integrable
distributions.

Brent



RE: Observation selection effects

2004-10-05 Thread Jesse Mazer
Brent Meeker wrote:
-Original Message-
From: Jesse Mazer [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 05, 2004 6:33 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: RE: Observation selection effects
Brent Meeker wrote:
On reviewing my analysis (I hadn't looked at it for about four
years), I think it works without the restrictive assumption that
the range of distributions not overlap.  It's still
necessary that
P(l|m)+P(s|m)=1 and P(l|m)/P(s|m)=r, which is all that
is required
for the proof to go thru.
Hmm, I think I misread your analysis, I had somehow
gotten the idea that you
were assuming a uniform probability of the
envelope-stuffer picking any
number between x1 and x2 for the first envelope, but I
see this isn't
necessary, so your proof is a lot more general than I
thought. Still not
completely general though, because the envelope-stuffer
can also use a
distribution which has no upper bound x2 on possible
amounts to put in the
first envelope, like the one I mentioned in my last post:
For example, he could use a
distribution that
gives him a 1/2 probability of putting between 0 and 1
dollars in one
envelope (assume the dollar amounts can take any
positive real value, and he
uses a flat probability distribution to pick a number
between 0 and 1), a
1/4 probability of putting in between 1 and 2 dollars, a
1/8 probability of
putting in between 2 and 3 dollars, and in general a
1/2^n probability of
putting in between n-1 and n dollars. This would ensure
there was some
finite probability that *any* positive real number could
be found in either
envelope.
Likewise, he could also use the continuous probability
distribution P(x) =
1/e^x (whose integral from 0 to infinity is 1). And if
you want to restrict
the amounts in the envelope to positive integers, he could use a
distribution which gives a 1/2^n probability of putting
exactly n dollars in
the first envelope.
Jesse
That doesn't matter if I can do without the no-overlap
assumption - which I think I can.  Do you see a flaw?
When I first did it I was drawing pictures of distributions and I
thought I needed non-overlap to assert that P(l|m)/P(s|m)=r and
P(l|m)+P(s|m)=1.  But now it seems that the last follows just from
the fact that the amount was either from the larger or the
smaller.  The ratio doesn't depend on the ranges not overlapping,
it just depends on the fact that the larger amount's distribution
must be a copy of the smaller amount's distribution stretched by a
factor of r.
But in order for the range of the larger amount to be double that of the 
smaller amount, you need to assume the range of the smaller amount is 
finite. If the range of the smaller amount is infinite, as in my P(x)=1/e^x 
example, then it would no longer make sense to say that the range of the 
larger amount is r times larger.

Also, what if the envelope-stuffer is only picking from a finite set of 
numbers rather than a continuous range? For example, he might have a 1/3 
chance of putting 100, 125 or 150 dollars in the first envelope, and then he 
would double that amount for the second envelope. In this case, your 
assumption "no matter what m is, it is r-times more likely to fall in a 
large-amount interval than in a small-amount interval" wouldn't seem to be 
valid, since there are only six possible values of m here (100, 125, 150, 
200, 250, or 300) and three of them are in the smaller range.

Jesse



[Fwd: Re: Observation selection effects]

2004-10-05 Thread Danny Mayes






 Original Message 
Subject: Re: Observation selection effects
Date: Sat, 04 Sep 2004 02:29:54 -0400
From: Danny Mayes <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
References: <[EMAIL PROTECTED]>



These problems remind me of the infamous Monty Hall problem that got 
Marilyn vos Savant in some controversy.  Someone wrote and asked the 
following question:

You are on "lets make a deal", and are chosen to select a door among 3 
doors, one of which has a car behind it.  You randomly select door 1.  
Monty, knowing where the car is, opens door 2 revealing an empty room, 
and asks if you want to stay with door one.  The question was:  Is there 
any benefit in switching from door 1 to door 3.  Common sense would 
suggest Monty simply eliminated one choice, and you have a 50-50 chance 
either way.  Marylin argued that by switching, the contestant actually 
increases his odds from 1/3 to 2/3.  The difference coming about through 
the added information of the car not behind door 2.  This example is 
discussed in the book "Information:  The New Language of Science" by 
Hans Christian von Baeyer, which I am trying to read, but only getting 
through bits and pieces as usual due to my work schedule.  According to 
the book, vos Savant still gets mail arguing her position on this 
matter.  It seems to me it would be very easy to resolve with a friend, 
letting one person play Monty and then keeping a tally of your success 
in switching vs. not switching (though I haven't tried this- my wife 
didn't find it intriguing enough, unfortunately).
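
The tally is also easy to automate; here is an illustrative Python sketch of
the game exactly as described above:

import random

def play(switch, trials=100000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)     # door hiding the car
        pick = random.randrange(3)    # contestant's initial pick
        # Monty opens a door that is neither the pick nor the car
        monty = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != monty)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # ~1/3
print("switch:", play(switch=True))    # ~2/3, as vos Savant argued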

I think these games provide good examples of how our common sense often 
works against a deep understanding of what is really going on around 
here.  I also think they point to a very fundamental level of importance 
of the role of information in understanding the way our world (or 
multiverse) works.



Jesse Mazer wrote:

> Norman Samish:
>
>> The "Flip-Flop" game described by Stathis Papaioannou strikes me as a
>> version of the old Two-Envelope Paradox.
>>
>> Assume an eccentric millionaire offers you your choice of either of two
>> sealed envelopes, A or B, both containing money.  One envelope contains
>> twice as much as the other.  After you choose an envelope you will 
>> have the
>> option of trading it for the other envelope.
>>
>> Suppose you pick envelope A.  You open it and see that it contains $100.
>> Now you have to decide if you will keep the $100, or will you trade 
>> it for
>> whatever is in envelope B?
>>
>> You might reason as follows: since one envelope has twice what the 
>> other one
>> has, envelope B either has 200 dollars or 50 dollars, with equal
>> probability.  If you switch, you stand to either win $100 or to lose 
>> $50.
>> Since you stand to win more than you stand to lose, you should switch.
>>
>> But just before you tell the eccentric millionaire that you would 
>> like to
>> switch, another thought might occur to you.  If you had picked 
>> envelope B,
>> you would have come to exactly the same conclusion.  So if the above
>> argument is valid, you should switch no matter which envelope you 
>> choose.
>>
>> Therefore the argument for always switching is NOT valid - but I am 
>> unable,
>> at the moment, to tell you why!
>>
>
> Basically, I think the resolution of this paradox is that it's 
> impossible to pick a number randomly from 0 to infinity in such a way 
> that every number is equally likely to come up. Such an infinite flat 
> probability distribution would lead to paradoxical conclusions--for 
> example, if you picked two positive integers randomly from a flat 
> probability distribution, and then looked at the first integer, then 
> there would be a 100% chance the second integer would be larger, since 
> there are only a finite number of integers smaller than or equal to 
> the first one and an infinite number that are larger.
>
> For any logically possible probability distribution the millionaire 
> uses, it will be true that depending on what amount of money you find 
> in the first envelope, there won't always be an equal chance of 
> finding double the amount or half the amount in the other envelope. 
> For example, if the millionaire simply picks a random amount from 0 to 
> one million to put in the first envelope, and then flips a coin to 
> decide whether to put half or double that in the other envelope, then 
> if the first envelope contains more than one million there is a 100% 
> chance the other envelope contains less than that.
>
> For a more detailed discussion of the two-envelope paradox, see this page:
> http://jamaica.u.arizona.edu/~chalmers/papers/envelope.html

Re: Observation selection effects

2004-10-05 Thread John M
Dear Brent,

I enjoyed your description, which was quite a show for my totally qualitative
(and IN-formally intuitive) logic - how you quanti people substitute common
sense for those letters and numbers.
Norman's paradox is an orthodox paradox and you made it into a metadox (no
paradox).
Have a good day

John Mikes
PS: to excuse my lingo: my 1st Ph.D. was Chemistry-Physics-Math. :-)
- Original Message -
From: "Brent Meeker" <[EMAIL PROTECTED]>
To: "Everything-List" <[EMAIL PROTECTED]>
Sent: Monday, October 04, 2004 6:19 PM
Subject: RE: Observation selection effects


> >-Original Message-
> >Norman Samish:
> >
> >>The "Flip-Flop" game described by Stathis Papaioannou
> >strikes me as a
> >>version of the old Two-Envelope Paradox.
> >>
> >>Assume an eccentric millionaire offers you your choice
> >of either of two
> >>sealed envelopes, A or B, both containing money.  One
> >envelope contains
> >>twice as much as the other.  After you choose an
> >envelope you will have the
> >>option of trading it for the other envelope.
> >>
> >>Suppose you pick envelope A.  You open it and see that
> >it contains $100.
> >>Now you have to decide if you will keep the $100, or
> >will you trade it for
> >>whatever is in envelope B?
> >>
> >>You might reason as follows: since one envelope has
> >twice what the other
> >>one
> >>has, envelope B either has 200 dollars or 50 dollars, with equal
> >>probability.  If you switch, you stand to either win
> >$100 or to lose $50.
> >>Since you stand to win more than you stand to lose, you
> >should switch.
> >>
> >>But just before you tell the eccentric millionaire that
> >you would like to
> >>switch, another thought might occur to you.  If you had
> >picked envelope B,
> >>you would have come to exactly the same conclusion.  So
> >if the above
> >>argument is valid, you should switch no matter which
> >envelope you choose.
> >>
> >>Therefore the argument for always switching is NOT
> >valid - but I am unable,
> >>at the moment, to tell you why!
>
>
> Of course in the real world you have some idea about how much
> money is in play so if you see a very large amount you infer it's
> probably the larger amount.  But even without this assumption of
> realism it's an interesting problem and taken as stated there's
> still no paradox.  I saw this problem several years ago and here's
> my solution.  It takes the problem as stated, but I do make one
> small additional restrictive assumption:
>
> Let:  s = envelope with smaller amount is selected.
>   l = envelope with larger amount is selected.
>   m = the amount in the selected envelope.
>
> Since any valid resolution of the paradox would have to work for
> ratios of money other than two, also define:
>
>   r = the ratio of the larger amount to the smaller.
>
> Now here comes the restrictive assumption, which can be thought of
> as a restrictive rule about how the amounts are chosen which I
> hope to generalize away later.  Expressed as a rule, it is this:
>
>   The person putting in the money selects, at random (not
> necessarily uniformly), the smaller amount from a range (x1, x2)
> such that x2 < r*x1.  In other words, the range of possible
> amounts is such that the larger and smaller amount do not overlap.
> Then, for any interval of the range (x,x+dx) for the smaller
> amount with probability p, there is a corresponding interval (r*x,
> r*x+r*dx) with probability p for the larger amount.  Since the
> latter interval is longer by a factor of r
>
>  P(l|m)/P(s|m) = r ,
>
> In other words, no matter what m is, it is r-times more likely to
> fall in a large-amount interval than in a small-amount interval.
>
> But since l and s are the only possibilities (and here's where I
> need the non-overlap),
>
>  P(l|m) + P(s|m) = 1
>
> which implies,
>
>  P(s|m) = 1/(1+r)  and P(l|m) = r/(1+r) .
>
> Then the rest is straightforward algebra. The expected values are:
>
>   E(don't switch) = m
>
>   E(switch) = P(s|m)rm + P(l|m)m/r
>             = [1/(1+r)]rm + [r/(1+r)]m/r
>             = m
>
>  and no paradox.
>
> Brent Meeker
>




RE: Observation selection effects

2004-10-05 Thread Jesse Mazer
Brent Meeker wrote:
Of course in the real world you have some idea about how much
money is in play so if you see a very large amount you infer it's
probably the larger amount.  But even without this assumption of
realism it's an interesting problem and taken as stated there's
still no paradox.  I saw this problem several years ago and here's
my solution.  It takes the problem as stated, but I do make one
small additional restrictive assumption:
Let:  s = envelope with smaller amount is selected.
  l = envelope with larger amount is selected.
 m = the amount in the selected envelope.
Since any valid resolution of the paradox would have to work for
ratios of money other than two, also define:
  r = the ratio of the larger amount to the smaller.
Now here comes the restrictive assumption, which can be thought of
as a restrictive rule about how the amounts are chosen which I
hope to generalize away later.  Expressed as a rule, it is this:
  The person putting in the money selects, at random (not
necessarily uniformly), the smaller amount from a range (x1, x2)
such that x2 < r*x1.  In other words, the range of possible
amounts is such that the larger and smaller amount do not overlap.
Then, for any interval of the range (x,x+dx) for the smaller
amount with probability p, there is a corresponding interval (r*x,
r*x+r*dx) with probability p for the larger amount.  Since the
latter interval is longer by a factor of r
 P(l|m)/P(s|m) = r ,
In other words, no matter what m is, it is r-times more likely to
fall in a large-amount interval than in a small-amount interval.
But since l and s are the only possibilities (and here's where I
need the non-overlap),
 P(l|m) + P(s|m) = 1
which implies,
 P(s|m) = 1/(1+r)  and P(l|m) = r/(1+r) .
Then the rest is straightforward algebra. The expected values are:
  E(don't switch) = m
  E(switch) = P(s|m)rm + P(l|m)m/r
            = [1/(1+r)]rm + [r/(1+r)]m/r
            = m
and no paradox.
This is right, but it's a pretty special case--there are an infinite number 
of possible probability distributions the millionaire could use when 
deciding how much money to put in one envelope, even if we assume he always 
puts double in the other. For example, he could use a distribution that 
gives him a 1/2 probability of putting between 0 and 1 dollars in one 
envelope (assume the dollar amounts can take any positive real value, and he 
uses a flat probability distribution to pick a number between 0 and 1), a 
1/4 probability of putting in between 1 and 2 dollars, a 1/8 probability of 
putting in between 2 and 3 dollars, and in general a 1/2^n probability of 
putting in between n-1 and n dollars. This would ensure there was some 
finite probability that *any* positive real number could be found in either 
envelope.
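
Incidentally, sampling from that mixed distribution takes only a few lines
(an illustrative sketch):

import random

def sample_amount():
    n = 1
    while random.random() < 0.5:    # lands in (n-1, n) with probability 1/2^n
        n += 1
    return random.uniform(n - 1, n)

print([round(sample_amount(), 2) for _ in range(5)])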

The basic paradox is that the argument tries to show that the average 
expected payoff from picking the second envelope is higher than the average 
expected payoff from sticking with the first one, *regardless of what amount 
you found in the first envelope*--in other words, even without opening the 
first envelope you'd be better off switching to the second, which doesn't 
make sense since the envelopes are identical and your first pick was random. 
But it's not actually possible that, regardless of what you found in the 
first envelope, there would always be a 50% chance the other envelope 
contained half that and a 50% chance it contained double that...for that to 
be true, the amount in the first envelope would have to be picked using a 
flat probability distribution which is equally likely to give any number 
from 0 to infinity, and as I said that's impossible. But my argument was not 
really sufficiently general either, because it doesn't rule out other 
possibilities like a 55% chance the other envelope contained half what was 
found in the first envelope and a 45% chance it contained double, in which 
case your average expected payoff would still be higher if you switched.

A truly general argument would have to show that, for any logically possible 
probability distribution the millionaire uses to pick the amounts in the 
envelopes, the average expected payoff from switching will always be exactly 
equal to the average expected winnings from sticking with your first choice. 
There are two different ways this can be true:

Possibility #1: it may be that you know enough about the probability 
distribution that opening the envelope and seeing how much is inside allows 
you to refine your evaluation of the average expected payoff from switching. 
I gave an example of this in my post, where the millionaire picks an amount 
from 1 to a million to put in one envelope and puts double that in the 
other; in that case, if you open your first pick and find an amount greater 
than a million, the average expected payoff from switching is 0. But even if 
the average expected payoff may vary depending on what you find in the first

RE: Observation selection effects

2004-10-05 Thread Stathis Papaioannou
Thanks Hal, you're right, of course (except that you have transposed Winning 
Flip for Losing Flip). The fact that you know the result of your own coin 
flip changes the probabilities - it is no longer 50/50, and the smaller the 
number of participants, the more obvious this effect becomes. This is the 
same effect noted by Eric Cavalcanti in his post yesterday (4/10/04), and 
applies to the room and the traffic examples as well. A little 
disappointing, perhaps: there isn't a paradox after all. I have been 
inspired by this thread to order Nick Bostrom's book (unreasonably expensive 
though it is, in my opinion), which is based on his PhD thesis and discusses 
the self-sampling assumption as applied to, among many other things, the 
infuriating Doomsday Argument.

Stathis Papaioannou

From: [EMAIL PROTECTED] ("Hal Finney")
To: [EMAIL PROTECTED]
Subject: RE: Observation selection effects
Date: Mon,  4 Oct 2004 17:20:49 -0700 (PDT)
Stathis Papaioannou writes:
> In the new casino game Flip-Flop, an odd number of players pays $1 each to
> individually flip a coin, so that no player can see what another player is
> doing. The game organisers then tally up the results, and the result in the
> minority is called the Winning Flip, while the majority result is called the
> Losing Flip. Before the Winning Flip is announced, each player has the
> opportunity to either keep their initial result, or to Switch; this is then
> called the player's Final Flip. When the Winning Flip is announced, players
> whose Final Flip corresponds with this are paid $2 by the casino, while the
> rest are paid nothing.

Think about if the odd number of players was exactly one.  You're guaranteed
to have the Winning Flip before you switch.

Then think about what would happen if the odd number of players was three.
Then you have a 3/4 chance of having the Winning Flip before you switch.
Only if the other two players' flips both disagree with yours will you not
have the Winning Flip, and there is only a 1/4 chance of that happening.
Hal Finney



[Fwd: RE: Observation selection effects]

2004-10-05 Thread Eric Cavalcanti
I always forget to reply-to-all in this list.
So below goes my reply which went only to Hal Finney.

-Forwarded Message-
> From: Eric Cavalcanti <[EMAIL PROTECTED]>
> To: "Hal Finney" <[EMAIL PROTECTED]>
> Subject: RE: Observation selection effects
> Date: Tue, 05 Oct 2004 12:57:14 +1000
> 
> On Tue, 2004-10-05 at 10:20, "Hal Finney" wrote:
> > Stathis Papaioannou writes:
> > > In the new casino game Flip-Flop, an odd number of players pays $1 each to 
> > > individually flip a coin, so that no player can see what another player is 
> > > doing. The game organisers then tally up the results, and the result in the 
> > > minority is called the Winning Flip, while the majority result is called the 
> > > Losing Flip. Before the Winning Flip is announced, each player has the 
> > > opportunity to either keep their initial result, or to Switch; this is then 
> > > called the player's Final Flip. When the Winning Flip is announced, players 
> > > whose Final Flip corresponds with this are paid $2 by the casino, while the 
> > > rest are paid nothing.
> > 
> > Think about if the odd number of players was exactly one.  You're guaranteed
> > to have the Winning Flip before you switch.
> > 
> > Then think about what would happen if the odd number of players was three.
> > Then you have a 3/4 chance of having the Winning Flip before you switch.
> > Only if the other two players' flips both disagree with yours will you not
> > have the Winning Flip, and there is only a 1/4 chance of that happening.

Exactly.

It is interesting to note that, even though you are
more likely to be in the Winning Flip, there is no
disadvantage in Switching. To understand that, we can
look at the N=3 case, and see that if I am in the
Winning Flip with someone else, then if I change I
will still be in the Winning Flip with the other person.

As opposed to Stathis' initial thought, even though the
Winning Flip is indeed as likely to be Heads as Tails,
each individual is more likely to be in the
Winning Flip than in the Losing Flip in any given run.

So this would never make it into a Casino game,
because the house would lose money in the long run.

Eric.



RE: Observation selection effects

2004-10-04 Thread Brent Meeker
>-Original Message-
>Norman Samish:
>
>>The "Flip-Flop" game described by Stathis Papaioannou
>strikes me as a
>>version of the old Two-Envelope Paradox.
>>
>>Assume an eccentric millionaire offers you your choice
>of either of two
>>sealed envelopes, A or B, both containing money.  One
>envelope contains
>>twice as much as the other.  After you choose an
>envelope you will have the
>>option of trading it for the other envelope.
>>
>>Suppose you pick envelope A.  You open it and see that
>it contains $100.
>>Now you have to decide if you will keep the $100, or
>will you trade it for
>>whatever is in envelope B?
>>
>>You might reason as follows: since one envelope has
>twice what the other
>>one
>>has, envelope B either has 200 dollars or 50 dollars, with equal
>>probability.  If you switch, you stand to either win
>$100 or to lose $50.
>>Since you stand to win more than you stand to lose, you
>should switch.
>>
>>But just before you tell the eccentric millionaire that
>you would like to
>>switch, another thought might occur to you.  If you had
>picked envelope B,
>>you would have come to exactly the same conclusion.  So
>if the above
>>argument is valid, you should switch no matter which
>envelope you choose.
>>
>>Therefore the argument for always switching is NOT
>valid - but I am unable,
>>at the moment, to tell you why!


Of course in the real world you have some idea about how much
money is in play so if you see a very large amount you infer it's
probably the larger amount.  But even without this assumption of
realism it's an interesting problem and taken as stated there's
still no paradox.  I saw this problem several years ago and here's
my solution.  It takes the problem as stated, but I do make one
small additional restrictive assumption:

Let:  s = envelope with smaller amount is selected.
  l = envelope with larger amount is selected.
  m = the amount in the selected envelope.

Since any valid resolution of the paradox would have to work for
ratios of money other than two, also define:

  r = the ratio of the larger amount to the smaller.

Now here comes the restrictive assumption, which can be thought of
as a restrictive rule about how the amounts are chosen which I
hope to generalize away later.  Expressed as a rule, it is this:

  The person putting in the money selects, at random (not
necessarily uniformly), the smaller amount from a range (x1, x2)
such that x2 < r*x1.  In other words, the range of possible
amounts is such that the larger and smaller amount do not overlap.
Then, for any interval of the range (x,x+dx) for the smaller
amount with probability p, there is a corresponding interval (r*x,
r*x+r*dx) with probability p for the larger amount.  Since the
latter interval is longer by a factor of r

 P(l|m)/P(s|m) = r ,

In other words, no matter what m is, it is r-times more likely to
fall in a large-amount interval than in a small-amount interval.

But since l and s are the only possibilities (and here's where I
need the non-overlap),

 P(l|m) + P(s|m) = 1

which implies,

 P(s|m) = 1/(1+r)  and P(l|m) = r/(1+r) .

Then the rest is straightforward algebra. The expected values are:

  E(don't switch) = m

  E(switch) = P(s|m)rm + P(l|m)m/r
            = [1/(1+r)]rm + [r/(1+r)]m/r
            = m

 and no paradox.
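
The break-even conclusion, in its unconditional form, can also be confirmed
numerically. A sketch, with r = 2 and the smaller amount chosen uniformly on
(10, 15) as assumed example values, so that x2 < r*x1 as the rule requires:

import random

r, x1, x2 = 2.0, 10.0, 15.0        # x2 < r*x1: the two ranges don't overlap
trials, gain = 200000, 0.0
for _ in range(trials):
    s = random.uniform(x1, x2)     # the smaller amount
    m = random.choice((s, r * s))  # amount in the envelope you selected
    other = r * s if m == s else s
    gain += other - m              # switching payoff minus keeping payoff
print(gain / trials)               # ~0: E(switch) = E(don't switch)

(This checks the expectation averaged over m for one assumed distribution;
the pointwise step P(l|m)/P(s|m) = r is the part that leans on the
non-overlap rule.)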

Brent Meeker



Re: Observation selection effects

2004-10-04 Thread Jesse Mazer
Norman Samish:
The "Flip-Flop" game described by Stathis Papaioannou strikes me as a
version of the old Two-Envelope Paradox.
Assume an eccentric millionaire offers you your choice of either of two
sealed envelopes, A or B, both containing money.  One envelope contains
twice as much as the other.  After you choose an envelope you will have the
option of trading it for the other envelope.
Suppose you pick envelope A.  You open it and see that it contains $100.
Now you have to decide if you will keep the $100, or will you trade it for
whatever is in envelope B?
You might reason as follows: since one envelope has twice what the other 
one
has, envelope B either has 200 dollars or 50 dollars, with equal
probability.  If you switch, you stand to either win $100 or to lose $50.
Since you stand to win more than you stand to lose, you should switch.

But just before you tell the eccentric millionaire that you would like to
switch, another thought might occur to you.  If you had picked envelope B,
you would have come to exactly the same conclusion.  So if the above
argument is valid, you should switch no matter which envelope you choose.
Therefore the argument for always switching is NOT valid - but I am unable,
at the moment, to tell you why!
Basically, I think the resolution of this paradox is that it's impossible to 
pick a number randomly from 0 to infinity in such a way that every number is 
equally likely to come up. Such an infinite flat probability distribution 
would lead to paradoxical conclusions--for example, if you picked two 
positive integers randomly from a flat probability distribution, and then 
looked at the first integer, then there would be a 100% chance the second 
integer would be larger, since there are only a finite number of integers 
smaller than or equal to the first one and an infinite number that are 
larger.

For any logically possible probability distribution the millionaire uses, it 
will be true that depending on what amount of money you find in the first 
envelope, there won't always be an equal chance of finding double the amount 
or half the amount in the other envelope. For example, if the millionaire 
simply picks a random amount from 0 to one million to put in the first 
envelope, and then flips a coin to decide whether to put half or double that 
in the other envelope, then if the first envelope contains more than one 
million there is a 100% chance the other envelope contains less than that.
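
That cutoff effect shows up immediately in simulation (a sketch of this exact
setup):

import random

trials, over, other_smaller = 200000, 0, 0
for _ in range(trials):
    a = random.uniform(0, 1e6)                      # first stuffed amount
    b = a / 2 if random.random() < 0.5 else 2 * a   # coin flip: half or double
    m, other = random.sample([a, b], 2)             # open one at random
    if m > 1e6:                                     # found more than a million
        over += 1
        other_smaller += (other < m)
print(other_smaller, "of", over)    # equal counts: the other is always smaller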

For a more detailed discussion of the two-envelope paradox, see this page:
http://jamaica.u.arizona.edu/~chalmers/papers/envelope.html
I don't think the solution to this paradox has any relation to the solution 
to the flip-flop game, though. In the case of the flip-flop game, it may 
help to assume that the players are all robots, and that each player can 
assume that whatever decision it makes about whether to switch or not, there 
is a 100% chance that all the other players will follow the same line of 
reasoning and come to an identical decision. In this case, since the money 
is awarded to the minority flip, it's clear that it's better to switch, 
since if everyone switches more of them will win. This problem actually 
reminds me more of Newcomb's paradox, described at 
http://slate.msn.com/?id=2061419 , because it depends on whether you assume 
your choice is absolutely independent of choices made by other minds or if 
you should act as though the choice you make can "cause" another mind to 
make a certain choice even if there is no actual interaction between you.

Jesse



Re: Observation selection effects

2004-10-04 Thread Norman Samish
The "Flip-Flop" game described by Stathis Papaioannou strikes me as a 
version of the old Two-Envelope Paradox.

Assume an eccentric millionaire offers you your choice of either of two 
sealed envelopes, A or B, both containing money.  One envelope contains 
twice as much as the other.  After you choose an envelope you will have the 
option of trading it for the other envelope.

Suppose you pick envelope A.  You open it and see that it contains $100. 
Now you have to decide if you will keep the $100, or will you trade it for 
whatever is in envelope B?

You might reason as follows: since one envelope has twice what the other one 
has, envelope B either has 200 dollars or 50 dollars, with equal 
probability.  If you switch, you stand to either win $100 or to lose $50. 
Since you stand to win more than you stand to lose, you should switch.

But just before you tell the eccentric millionaire that you would like to 
switch, another thought might occur to you.  If you had picked envelope B, 
you would have come to exactly the same conclusion.  So if the above 
argument is valid, you should switch no matter which envelope you choose.

Therefore the argument for always switching is NOT valid - but I am unable, 
at the moment, to tell you why!

Norman Samish

- Original Message - 
From: "Stathis Papaioannou" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, October 04, 2004 5:43 PM
Subject: RE: Observation selection effects


Here is another version of the paradox, where the way an individual chooses
does not change the initial probabilities:

In the new casino game Flip-Flop, an odd number of players pays $1 each to
individually flip a coin, so that no player can see what another player is
doing. The game organisers then tally up the results, and the result in the
minority is called the Winning Flip, while the majority result is called the
Losing Flip. Before the Winning Flip is announced, each player has the
opportunity to either keep their initial result, or to Switch; this is then
called the player's Final Flip. When the Winning Flip is announced, players
whose Final Flip corresponds with this are paid $2 by the casino, while the
rest are paid nothing.

The question: if you participate in this game, is there any advantage in
Switching? On the one hand, it seems clear that the Winning Flip is as
likely to be heads as tails, so if you played this game repeatedly, in the
long run you should break even, whether you Switch or not. On the other
hand, it seems equally clear that if all the players Switch, the casino will
end up every time paying out more than it collects, so Switching should be a
winning strategy, on average, for each individual player.

I'm sure there is something wrong with the above conclusion. What is it? And
I haven't really thought this through yet, but does this have any bearing on
the self sampling assumption as applied in the Doomsday Argument etc.?

Stathis Papaioannou 




RE: Observation selection effects

2004-10-04 Thread "Hal Finney"
Stathis Papaioannou writes:
> In the new casino game Flip-Flop, an odd number of players pays $1 each to 
> individually flip a coin, so that no player can see what another player is 
> doing. The game organisers then tally up the results, and the result in the 
> minority is called the Winning Flip, while the majority result is called the 
> Losing Flip. Before the Winning Flip is announced, each player has the 
> opportunity to either keep their initial result, or to Switch; this is then 
> called the player's Final Flip. When the Winning Flip is announced, players 
> whose Final Flip corresponds with this are paid $2 by the casino, while the 
> rest are paid nothing.

Think about if the odd number of players was exactly one.  You're guaranteed
to have the Winning Flip before you switch.

Then think about what would happen if the odd number of players was three.
Then you have a 3/4 chance of having the Winning Flip before you switch.
Only if the other two players' flips both disagree with yours will you not
have the Winning Flip, and there is only a 1/4 chance of that happening.

Hal Finney



RE: Observation selection effects

2004-10-04 Thread Stathis Papaioannou
Here is another version of the paradox, where the way an individual chooses 
does not change the initial probabilities:

In the new casino game Flip-Flop, an odd number of players pays $1 each to 
individually flip a coin, so that no player can see what another player is 
doing. The game organisers then tally up the results, and the result in the 
minority is called the Winning Flip, while the majority result is called the 
Losing Flip. Before the Winning Flip is announced, each player has the 
opportunity to either keep their initial result, or to Switch; this is then 
called the player's Final Flip. When the Winning Flip is announced, players 
whose Final Flip corresponds with this are paid $2 by the casino, while the 
rest are paid nothing.

The question: if you participate in this game, is there any advantage in 
Switching? On the one hand, it seems clear that the Winning Flip is as 
likely to be heads as tails, so if you played this game repeatedly, in the 
long run you should break even, whether you Switch or not. On the other 
hand, it seems equally clear that if all the players Switch, the casino will 
end up every time paying out more than it collects, so Switching should be a 
winning strategy, on average, for each individual player.

I'm sure there is something wrong with the above conclusion. What is it? And 
I haven't really thought this through yet, but does this have any bearing on 
the self sampling assumption as applied in the Doomsday Argument etc.?

Stathis Papaioannou



RE: Observation selection effects

2004-10-03 Thread Eric Cavalcanti
On Mon, 2004-10-04 at 10:42, Stathis Papaioannou wrote:
> Eric Cavalcanti writes:
> 
> QUOTE-
> And this is the case where this problem is most paradoxical.
> We are very likely to have one of the lanes more crowded than
> the other; most of the drivers reasoning would thus, by chance,
> be in the more crowded lane, such that they would benefit from
> changing lanes; even though, NO PARTICULAR DRIVER would benefit
> from changing lanes, on average. No particular driver has basis
> for infering in which lane he is. In this case you cannot reason
> as a random sample from the population.
> -ENDQUOTE
> 
> I find this paradox a little disturbing, on further reflection. You enter 
> the traffic by tossing a coin, so you are no more likely to end up in one 
> lane than the other, and you would not, on average, benefit from changing 
> lanes. Given that you are in every respect a typical driver, what applies to 
> you should apply to everyone else as well. This SHOULD be equivalent to 
> saying that if every driver decided to change lanes, on average no 
> particular driver would benefit - as Eric states. However, this is not so: 
> the majority of drivers WOULD benefit from changing. (The fact that nobody 
> would benefit if everyone changed does not resolve the paradox. We can 
> restrict the problem to the case where each driver individually changes, and 
> the paradox remains.) It seems that this problem is an assault on the 
> foundations of probability and statistics, and I would really like to see it 
> resolved.

I found the answer to why you should be more likely
to end up in the crowded lane in this case. The answer
came after I tried to think about an example for a few
people (which turned out not to work as I thought it
would).

Suppose a coin is tossed for each of N people, who enter
one of two rooms according to the result. Suppose first
N=3. Then it is more likely that I will be in
the crowded room, even though there was no particular
bias in each coin toss. But still, if I am given the
option to change, and if I am in the crowded room,
I'll probably still be in the crowded room after I
change!

Now as N grows large, it is still more likely that I
will be in the crowded room, only less so. I was
neglecting the effect that you yourself make when you
enter the room/lane.

When N is large and even, it is equally likely that the
lane I enter is the slower or the faster. But it may be
that both lanes have the same numbers, so my entering
will make that lane the slower one, and that's where the
effect comes from.

If it is odd, and I enter the fast lane, it is possible
that they become equal. If I enter the slower lane, it
will become even slower.

A minute of thought shows that my changing lanes does not
affect the result, though, just as changing rooms does
not make me more likely to be in the less crowded room
when N=3.

Therefore it is not good advice for people to change
lanes in this case, even though it is more likely that
they are in the slower lane!
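
A simulation of the N=3 room case illustrates both points at once (a sketch):
you probably are in the crowded room, and changing does not improve your
position:

import random

trials = 100000
crowded_stay = crowded_switch = 0
for _ in range(trials):
    mine = random.randrange(2)                             # my coin toss
    same = sum(random.randrange(2) == mine for _ in range(2))
    # my room holds 1 + same people; the other room holds 2 - same
    crowded_stay += (1 + same > 2 - same)
    # after switching I join the 2 - same people who chose the other room
    crowded_switch += (1 + (2 - same) > same)
print(crowded_stay / trials)      # ~3/4
print(crowded_switch / trials)    # ~3/4: changing rooms doesn't help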

Eric.




RE: Observation selection effects

2004-10-03 Thread "Hal Finney"
Eric Cavalcanti writes:
> Suppose, in the room problem, that instead of a biased coin,
> everyone tossed a fair coin, as in Stathis' original problem, and
> enters a room by the decision of the coin. If the number of people
> is large enough, it is highly likely that one of the rooms will
> be more crowded. But as you enter one of the rooms, you have no
> reason to believe that you are in the more crowded room, **even
> though you followed the same mechanism as everyone else**, in
> contrast with Stathis' original problem (where there were already
> 1000/10 people in each room).

Actually you do have reason to believe you are in the more crowded
room, because your presence there makes it more likely that room is
more crowded.  If the rooms happened to be a tie before you entered,
your room is now more crowded by virtue of your presence.

Although there is a 50-50 chance which room is more crowded, and a 50-50
chance which room you end up in, these two results are not independent.
The room you are in is (slightly) more likely to be the more crowded one.

If you imagine it as a betting game, where you want to know what the
odds are that you are in the more crowded room, then I think both of
these lines of reasoning, anthropic and probabilistic, will give the
same results.  The odds are slightly in favor of your room being the more
crowded, and as the number of people increases, the advantage decreases.

Hal Finney



RE: Observation selection effects

2004-10-03 Thread Stathis Papaioannou
Eric Cavalcanti writes:
QUOTE-
And this is the case where this problem is most paradoxical.
We are very likely to have one of the lanes more crowded than
the other; most of the drivers reasoning would thus, by chance,
be in the more crowded lane, such that they would benefit from
changing lanes; even though NO PARTICULAR DRIVER would benefit
from changing lanes, on average. No particular driver has any basis
for inferring which lane he is in. In this case you cannot reason
as a random sample from the population.
-ENDQUOTE
I find this paradox a little disturbing, on further reflection. You enter 
the traffic by tossing a coin, so you are no more likely to end up in one 
lane than the other, and you would not, on average, benefit from changing 
lanes. Given that you are in every respect a typical driver, what applies to 
you should apply to everyone else as well. This SHOULD be equivalent to 
saying that if every driver decided to change lanes, on average no 
particular driver would benefit - as Eric states. However, this is not so: 
the majority of drivers WOULD benefit from changing. (The fact that nobody 
would benefit if everyone changed does not resolve the paradox. We can 
restrict the problem to the case where each driver individually changes, and 
the paradox remains.) It seems that this problem is an assault on the 
foundations of probability and statistics, and I would really like to see it 
resolved.

Stathis Papaioannou



RE: Observation selection effects

2004-10-03 Thread Eric Cavalcanti
On Mon, 2004-10-04 at 07:55, Eric Cavalcanti wrote: 
> On Sun, 2004-10-03 at 16:56, Stathis Papaioannou wrote:
> > Hal Finney writes:
> > 
> > > Stathis Papaioannou writes:
> > > >Here is another example which makes this point. You arrive before two
> > > >adjacent closed doors(...) However, this cannot be right,
> > > >because you tossed a coin, and you are thus
> > > >equally likely to find yourself in either room when the light goes on.

> > >Again the problem is that you are not a typical member of the room unless
> > >the mechanism you used to choose a room was the same as what everyone
> > >else did.  And your description is not consistent with that.
> > >This illustrates another problem with the lane-changing example, which
> > >is that the described mechanism for choosing lanes (choose at random)
> > >is not typical.  Most people don't flip a coin to choose the lane they
> > >will drive in.

I agree with both of these remarks. Of course that is not
the mechanism for choosing lanes, and that may be the
problem with the whole argument. As John M pointed out,
a highway is a very complex system, and we are treating
it in a model that might just not correspond to the real
thing.

On the other hand, trying to think of an idealized model
might not tell us anything about the specific problem we
are talking about, but can shed some light on a general
class of (at least gedanken) problems.

> > >Suppose we modify it so that you are handed a biased coin, a coin
> > >which 
> > >will come up heads or tails with 99% vs 1% probability.  You know about
> > >the bias but you don't know which way the bias is.  You flip the coin
> > >and walk into the room.  Now, I think you will agree that you have a
> > >good reason to expect that when you turn on the light, you will be in
> > >the more crowded room.  You are now a typical member of the room so the
> > >same considerations that make one room more crowded make it more likely
> > >that you are in that room.
In this case, I agree with the expectation you might have
in the end. 
> > Yes, this is correct. The "typical observer" must be typical in the way he 
> > makes the choice of room or lane. With the traffic example, given that there 
> > are slower and faster lanes on most roads, even in the absence of road works 
> > or accidents, this may mean that for whatever reason the typical driver on 
> > that day is more likely to choose the slower lane on entering the road. If 
> > this is so, then a winning strategy for getting to your destination faster 
> > could be to pick the lane with the most immediate appeal, then reflect on 
> > this (having participated in the present discussion) and choose a 
> > _different_ lane. This is analogous to counter-cyclical investing in the 
> > stock market, where you deliberately try to do the opposite of what the 
> > typical investor does.

I think I am starting to understand Bostrom's argument. If we
don't know the mechanism that took us to the point where we are;
and if it is reasonable to make the assumption that we just
followed the same mechanism as everyone else; and if
this mechanism is biased towards one of the lanes;
THEN I can think of myself as a typical driver on the road,
such that it would be more likely for me to be in the slower
lane.

The weak link of this argument is the third premise, though.
But provided that the mechanism is biased, and affects everyone
equally, there are grounds for Bostrom's reasoning.

> > But there may be a problem with the above argument. Suppose everyone really 
> > did flip a perfectly fair coin to decide which lane of traffic to enter. It 
> > is then still very likely that one lane would be more crowded than the other 
> > at any given time, purely through chance. Now, every driver might reason, 
> > "everyone including me has flipped a coin to decide which lane to enter, so 
> > there is nothing to be gained by changing lanes". However, most of the 
> > drivers reasoning thus would, by chance, be in the more crowded lane, and 
> > therefore most would in fact be better off changing lanes.

In the road case, as Stathis points out above, it is possible
to make one of the lanes more crowded than the other merely
by each driver choosing a lane at random. In fact,
it is very *unlikely* that both lanes would have the same speed
with such a purely random mechanism.

But in this case, there is no basis for the third premise above.
Suppose, in the room problem, that instead of a biased coin,
everyone tossed a fair coin, as in Stathis' original problem, and
entered a room according to the outcome. If the number of people
is large enough, it is highly likely that one of the rooms will
be more crowded. But as you enter one of the rooms, you have no
reason to believe that you are in the more crowded room, **even
though you followed the same mechanism as everyone else**, in
contrast with Stathis' original problem (where there were already
1000/10 people in each room).
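
A quick Monte Carlo sketch in Python makes the fair-coin case
concrete (the figure of 101 people is arbitrary; odd, so the
rooms can never tie):

import random

N = 101          # total people, each tossing a fair coin
TRIALS = 100000  # Monte Carlo repetitions

in_crowded = 0   # trials where my room ends up the more crowded one
others_mine = 0  # running total of *other* people in my room
others_rest = 0  # running total of people in the other room

for _ in range(TRIALS):
    # By symmetry I can fix my own toss and let the other N-1
    # people toss fair coins independently.
    k = sum(random.random() < 0.5 for _ in range(N - 1))  # others in my room
    if 1 + k > (N - 1) - k:
        in_crowded += 1
    others_mine += k
    others_rest += (N - 1) - k

print("P(my room is the more crowded):", in_crowded / TRIALS)   # ~0.54
print("mean others in my room:        ", others_mine / TRIALS)  # ~50
print("mean people in the other room: ", others_rest / TRIALS)  # ~50

My room is the more crowded one slightly more often than not, but
only because I myself am standing in it: the expected number of
*other* people is the same in both rooms, so there is nothing to
gain, on average, by switching.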

And this is the c

RE: Observation selection effects

2004-10-03 Thread Stathis Papaioannou
Hal Finney writes:
Stathis Papaioannou writes:
> Here is another example which makes this point. You arrive before two
> adjacent closed doors, A and B. You know that behind one door is a room
> containing 1000 people, while behind the other door is a room containing
> only 10 people, but you don't know which door is which. You toss a coin to
> decide which door you will open (heads=A, tails=B), and then enter into the
> corresponding room. The room is dark, so you don't know which room you are
> now in until you turn on the light. At the point just before the light goes
> on, do you have any reason to think you are more likely to be in one room
> rather than the other? By analogy with the Bostrom traffic lane example you
> could argue that, in the absence of any empirical data, you are much more
> likely to now be a member of the large population than the small population.
> However, this cannot be right, because you tossed a coin, and you are thus
> equally likely to find yourself in either room when the light goes on.

Again the problem is that you are not a typical member of the room unless
the mechanism you used to choose a room was the same as what everyone
else did.  And your description is not consistent with that.

This illustrates another problem with the lane-changing example, which
is that the described mechanism for choosing lanes (choose at random)
is not typical.  Most people don't flip a coin to choose the lane they
will drive in.
Yes, this is correct. The "typical observer" must be typical in the way he 
makes the choice of room or lane. With the traffic example, given that there 
are slower and faster lanes on most roads, even in the absence of road works 
or accidents, this may mean that for whatever reason the typical driver on 
that day is more likely to choose the slower lane on entering the road. If 
this is so, then a winning strategy for getting to your destination faster 
could be to pick the lane with the most immediate appeal, then reflect on 
this (having participated in the present discussion) and choose a 
_different_ lane. This is analogous to counter-cyclical investing in the 
stock market, where you deliberately try to do the opposite of what the 
typical investor does.

But there may be a problem with the above argument. Suppose everyone really 
did flip a perfectly fair coin to decide which lane of traffic to enter. It 
is then still very likely that one lane would be more crowded than the other 
at any given time, purely through chance. Now, every driver might reason, 
"everyone including me has flipped a coin to decide which lane to enter, so 
there is nothing to be gained by changing lanes". However, most of the 
drivers reasoning thus would, by chance, be in the more crowded lane, and 
therefore most would in fact be better off changing lanes.

--Stathis Papaioannou



RE: Observation selection effects

2004-10-02 Thread "Hal Finney"
Stathis Papaioannou writes:
> Here is another example which makes this point. You arrive before two 
> adjacent closed doors, A and B. You know that behind one door is a room 
> containing 1000 people, while behind the other door is a room containing 
> only 10 people, but you don't know which door is which. You toss a coin to 
> decide which door you will open (heads=A, tails=B), and then enter into the 
> corresponding room. The room is dark, so you don't know which room you are 
> now in until you turn on the light. At the point just before the light goes 
> on, do you have any reason to think you are more likely to be in one room 
> rather than the other? By analogy with the Bostrom traffic lane example you 
> could argue that, in the absence of any empirical data, you are much more 
> likely to now be a member of the large population than the small population. 
> However, this cannot be right, because you tossed a coin, and you are thus 
> equally likely to find yourself in either room when the light goes on.

Again the problem is that you are not a typical member of the room unless
the mechanism you used to choose a room was the same as what everyone
else did.  And your description is not consistent with that.

Suppose we modify it so that you are handed a biased coin, a coin which
will come up heads or tails with 99% vs 1% probability.  You know about
the bias but you don't know which way the bias is.  You flip the coin
and walk into the room.  Now, I think you will agree that you have a
good reason to expect that when you turn on the light, you will be in
the more crowded room.  You are now a typical member of the room so the
same considerations that make one room more crowded make it more likely
that you are in that room.
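
A short Python sketch of this situation, on the reading that the
rooms are crowded *because* the earlier occupants flipped the same
biased coin (1010 people on a 99/1 coin gives roughly the 1000/10
split of the original story):

import random

TRIALS = 10000
OTHERS = 1010   # earlier occupants, all using the same biased coin

hits = 0
for _ in range(TRIALS):
    p_heads = random.choice([0.99, 0.01])  # known bias, unknown direction
    heads = sum(random.random() < p_heads for _ in range(OTHERS))
    crowded_is_heads = heads > OTHERS - heads
    i_flip_heads = random.random() < p_heads  # I flip the same coin
    if i_flip_heads == crowded_is_heads:
        hits += 1

print("P(I end up in the more crowded room):", hits / TRIALS)  # ~0.99

Because my coin and the crowd's coins share the same bias, whatever
made one room crowded also, with the same probability, sends me
there.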

This illustrates another problem with the lane-changing example, which
is that the described mechanism for choosing lanes (choose at random)
is not typical.  Most people don't flip a coin to choose the lane they
will drive in.  Instead, they have an expectation of which lane they will
start in based on their long experience of driving in various conditions.
It's pretty hard to think of yourself as a typical driver given the wide
range of personality, age and experience among drivers on the road.

Hal Finney



RE: Observation selection effects

2004-10-02 Thread Stathis Papaioannou

Eric Cavalcanti writes:
From another perspective, I have just arrived at the
road and there was no particular reason for me to
initially choose lane A or lane B, so that I could just
as well have started on the faster lane, and changing
would be undesirable. From this perspective, there
is no gain in changing lanes, on average.
Here is another example which makes this point. You arrive before two 
adjacent closed doors, A and B. You know that behind one door is a room 
containing 1000 people, while behind the other door is a room containing 
only 10 people, but you don't know which door is which. You toss a coin to 
decide which door you will open (heads=A, tails=B), and then enter into the 
corresponding room. The room is dark, so you don't know which room you are 
now in until you turn on the light. At the point just before the light goes 
on, do you have any reason to think you are more likely to be in one room 
rather than the other? By analogy with the Bostrom traffic lane example you 
could argue that, in the absence of any empirical data, you are much more 
likely to now be a member of the large population than the small population. 
However, this cannot be right, because you tossed a coin, and you are thus 
equally likely to find yourself in either room when the light goes on.

--Stathis Papaioannou



Re: Observation selection effects

2004-09-30 Thread "Hal Finney"
Eric Cavalcanti writes regarding
http://plus.maths.org/issue17/features/traffic/index.html:

> I agree with the general conclusion:
> "when we randomly select a driver and ask her
> whether she thinks the next lane is faster, more
> often than not we will have selected a driver from
> the lane which is in fact slower and more densely
> packed."
> ...
> From another perspective, I have just arrived at the
> road and there was no particular reason for me to 
> initially choose lane A or lane B, so that I could just
> as well have started on the faster lane, and changing
> would be undesirable. From this perspective, there
> is no gain in changing lanes, on average.

That's a good question.  One thing I would note is that if everyone
entering the road chose between the two lanes with equal probability,
and stayed in their lane, then neither lane would be more crowded
than the other.  So to some extent your premises are contradictory.
If everyone behaved like this, one lane wouldn't be faster than the other.
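
In expectation the two lane counts are indeed equal; a quick
Python sketch shows how large the purely chance imbalance
typically is in a single realization (N = 1000 drivers is an
arbitrary choice):

import math
import random

N = 1000       # drivers entering, each flipping a fair coin
TRIALS = 5000

total_imbalance = 0
for _ in range(TRIALS):
    lane_a = sum(random.random() < 0.5 for _ in range(N))
    total_imbalance += abs(lane_a - (N - lane_a))

print("mean |difference in lane counts|:", total_imbalance / TRIALS)  # ~25
print("theoretical sqrt(2*N/pi):        ", math.sqrt(2 * N / math.pi))

So the counts agree only on average; on any given day one lane is
almost surely somewhat more crowded just by chance, which is the
point at issue in this thread.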

> Extending the argument, suppose I drive for a couple
> of miles, and get to another point where I want to decide
> if I should change lanes. Since I had no reason to
> change lanes a couple of miles ago, I still have no reason
> to do so now. Unless, of course, I can clearly see that
> the next lane is faster, but adding that assumption changes
> the problem completely.

I think this is true as well, assuming you have not changed lanes yet.

Let's go on and suppose that you drive for a while and change lanes
occasionally based on how the traffic seems to be moving at that moment,
and that you are a typical driver in this regard.  Then your alarm clock
rings and you ask yourself: am I more likely to be in the more crowded
lane?  I think you will agree that in that case, the answer is yes.
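
One toy mechanism that produces this effect is time-weighted
sampling: a randomly timed observation (the alarm clock) is more
likely to catch you in slow traffic simply because slow stretches
take longer to get through. A hypothetical Python sketch (the
two-speed road and the 30/60 km/h values are invented for
illustration):

import random

SEGMENTS = 100000
# Equal-length road segments, each independently "slow" (30 km/h)
# or "fast" (60 km/h) with probability 1/2.
speeds = [random.choice([30.0, 60.0]) for _ in range(SEGMENTS)]

times = [1.0 / v for v in speeds]  # time to cross a unit-length segment
total_time = sum(times)
slow_time = sum(t for t, v in zip(times, speeds) if v == 30.0)

print("fraction of segments that are slow:", speeds.count(30.0) / SEGMENTS)  # ~0.50
print("fraction of *time* spent in them:  ", slow_time / total_time)         # ~0.67

Half the road is slow, but two thirds of your driving time is
spent on it, so a randomly timed alarm finds you in the slow half
two times out of three.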

Does this resolve the paradox?

Hal Finney