RE: Observation selection effects

2004-10-04 Thread Brent Meeker
>-Original Message-
>Norman Samish:
>
>>The "Flip-Flop" game described by Stathis Papaioannou strikes me as a
>>version of the old Two-Envelope Paradox.
>>
>>Assume an eccentric millionaire offers you your choice of either of two
>>sealed envelopes, A or B, both containing money.  One envelope contains
>>twice as much as the other.  After you choose an envelope you will have
>>the option of trading it for the other envelope.
>>
>>Suppose you pick envelope A.  You open it and see that it contains $100.
>>Now you have to decide if you will keep the $100, or will you trade it
>>for whatever is in envelope B?
>>
>>You might reason as follows: since one envelope has twice what the other
>>one has, envelope B either has 200 dollars or 50 dollars, with equal
>>probability.  If you switch, you stand to either win $100 or to lose $50.
>>Since you stand to win more than you stand to lose, you should switch.
>>
>>But just before you tell the eccentric millionaire that you would like to
>>switch, another thought might occur to you.  If you had picked envelope B,
>>you would have come to exactly the same conclusion.  So if the above
>>argument is valid, you should switch no matter which envelope you choose.
>>
>>Therefore the argument for always switching is NOT valid - but I am
>>unable, at the moment, to tell you why!


Of course, in the real world you have some idea of how much money
is in play, so if you see a very large amount you infer it's
probably the larger of the two.  But even without this appeal to
realism it's an interesting problem, and taken as stated there is
still no paradox.  I saw this problem several years ago, and here
is my solution.  It takes the problem as stated, but I do make one
small additional restrictive assumption:

Let:  s = the event that the envelope with the smaller amount is selected.
      l = the event that the envelope with the larger amount is selected.
      m = the amount in the selected envelope.

Since any valid resolution of the paradox would have to work for
ratios of money other than two, also define:

  r = the ratio of the larger amount to the smaller.

Now here comes the restrictive assumption, which can be thought of
as a rule about how the amounts are chosen, and which I hope to
generalize away later.  Expressed as a rule, it is this:

  The person putting in the money selects, at random (not
necessarily uniformly), the smaller amount from a range (x1, x2)
such that x2 < r*x1.  In other words, the ranges of possible
smaller and larger amounts do not overlap.  Then, for any interval
(x, x+dx) for the smaller amount with probability p, there is a
corresponding interval (r*x, r*x + r*dx) with probability p for
the larger amount.  Since the latter interval is longer by a
factor of r, the conditional probabilities satisfy

 P(l|m)/P(s|m) = r .

In other words, no matter what m is, it is r-times more likely to
fall in a large-amount interval than in a small-amount interval.

But since l and s are the only possibilities (and here's where I
need the non-overlap),

 P(l|m) + P(s|m) = 1

which implies,

 P(s|m) = 1/(1+r)  and  P(l|m) = r/(1+r) .

Then the rest is straightforward algebra. The expected values are:

  E(don't switch) = m

  E(switch) = P(s|m)rm + P(l|m)m/r
            = [1/(1+r)]rm + [r/(1+r)]m/r
            = m

 and no paradox.
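
As a quick sanity check of the "no paradox" conclusion in aggregate, here is
a minimal Monte Carlo sketch (mine, not part of the original argument) for
one instance of the rule above, with r = 2 and the smaller amount drawn
uniformly from (100, 150); these are arbitrary illustrative numbers chosen
so the two ranges don't overlap.  Averaged over many rounds, always
switching and never switching pay the same:

import random

r, x1, x2 = 2, 100, 150          # x2 < r*x1, so the two ranges don't overlap
trials = 100_000
keep_total = switch_total = 0.0

for _ in range(trials):
    small = random.uniform(x1, x2)     # smaller amount, chosen at random
    envelopes = [small, r * small]     # the pair of envelopes
    i = random.randint(0, 1)           # the envelope we happen to select
    keep_total += envelopes[i]         # payoff if we never switch
    switch_total += envelopes[1 - i]   # payoff if we always switch

print(keep_total / trials, switch_total / trials)   # both come out near 187.5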

Brent Meeker



Re: Observation selection effects

2004-10-04 Thread Jesse Mazer
Norman Samish:
The "Flip-Flop" game described by Stathis Papaioannou strikes me as a
version of the old Two-Envelope Paradox.
Assume an eccentric millionaire offers you your choice of either of two
sealed envelopes, A or B, both containing money.  One envelope contains
twice as much as the other.  After you choose an envelope you will have the
option of trading it for the other envelope.
Suppose you pick envelope A.  You open it and see that it contains $100.
Now you have to decide if you will keep the $100, or will you trade it for
whatever is in envelope B?
You might reason as follows: since one envelope has twice what the other 
one
has, envelope B either has 200 dollars or 50 dollars, with equal
probability.  If you switch, you stand to either win $100 or to lose $50.
Since you stand to win more than you stand to lose, you should switch.

But just before you tell the eccentric millionaire that you would like to
switch, another thought might occur to you.  If you had picked envelope B,
you would have come to exactly the same conclusion.  So if the above
argument is valid, you should switch no matter which envelope you choose.
Therefore the argument for always switching is NOT valid - but I am unable,
at the moment, to tell you why!
Basically, I think the resolution of this paradox is that it's impossible to 
pick a number randomly from 0 to infinity in such a way that every number is 
equally likely to come up. Such an infinite flat probability distribution 
would lead to paradoxical conclusions--for example, if you picked two 
positive integers randomly from a flat probability distribution and then
looked at the first integer, there would be a 100% chance that the second
integer would be larger, since there are only a finite number of integers 
smaller than or equal to the first one and an infinite number that are 
larger.

For any logically possible probability distribution the millionaire uses,
the chance of finding double the amount versus half the amount in the other
envelope will depend on what amount of money you find in the envelope you
open, and the two cases won't always be equally likely.  For example,
suppose the millionaire simply picks a random amount from 0 to one million
to put in the first envelope, and then flips a coin to decide whether to
put half or double that in the other envelope.  If the envelope you open
contains more than one million, there is a 100% chance that the other
envelope contains less than that.
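
One way to see this numerically (a minimal sketch, not Jesse's; the uniform
pick from 0 to one million is taken straight from the example above):
conditioning on the amount you actually observe changes the odds that the
other envelope is the larger one.

import random

def open_one_envelope():
    first = random.uniform(0, 1_000_000)    # millionaire's first amount
    other = first / 2 if random.random() < 0.5 else first * 2
    # you open one of the two envelopes at random
    return (first, other) if random.random() < 0.5 else (other, first)

seen_big = other_larger = 0
for _ in range(200_000):
    seen, other = open_one_envelope()
    if seen > 1_000_000:                    # you observe more than one million
        seen_big += 1
        other_larger += other > seen

print(other_larger, "of", seen_big)   # 0 of roughly 25000: the other envelope is never larger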

For a more detailed discussion of the two-envelope paradox, see this page:
http://jamaica.u.arizona.edu/~chalmers/papers/envelope.html
I don't think the solution to this paradox has any relation to the solution 
to the flip-flop game, though. In the case of the flip-flop game, it may 
help to assume that the players are all robots, and that each player can 
assume that whatever decision it makes about whether to switch or not, there 
is a 100% chance that all the other players will follow the same line of 
reasoning and come to an identical decision. In this case, since the money 
is awarded to the minority flip, it's clear that it's better to switch, 
since if everyone switches more of them will win. This problem actually 
reminds me more of Newcomb's paradox, described at 
http://slate.msn.com/?id=2061419 , because it depends on whether you assume 
your choice is absolutely independent of choices made by other minds or if 
you should act as though the choice you make can "cause" another mind to 
make a certain choice even if there is no actual interaction between you.
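
To make the "everyone switches" point concrete, here is a minimal sketch
(mine, not part of Jesse's post) for an arbitrary odd number of robot
players, 101 being picked purely for illustration.  When every player
switches, the winners are exactly the players whose initial flip was in the
majority, so the casino pays out more than it takes in on every round:

import random

def casino_net(n_players=101):
    flips = [random.randint(0, 1) for _ in range(n_players)]
    winning = 1 if sum(flips) < n_players / 2 else 0   # minority of initial flips
    finals = [1 - f for f in flips]                    # every robot player switches
    winners = sum(f == winning for f in finals)
    return n_players * 1 - winners * 2                 # stakes taken in minus prizes paid

print(all(casino_net() < 0 for _ in range(1_000)))     # True: the casino loses every round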

Jesse



Re: Observation selection effects

2004-10-04 Thread Norman Samish
The "Flip-Flop" game described by Stathis Papaioannou strikes me as a 
version of the old Two-Envelope Paradox.

Assume an eccentric millionaire offers you your choice of either of two 
sealed envelopes, A or B, both containing money.  One envelope contains 
twice as much as the other.  After you choose an envelope you will have the 
option of trading it for the other envelope.

Suppose you pick envelope A.  You open it and see that it contains $100. 
Now you have to decide if you will keep the $100, or will you trade it for 
whatever is in envelope B?

You might reason as follows: since one envelope has twice what the other one 
has, envelope B either has 200 dollars or 50 dollars, with equal 
probability.  If you switch, you stand to either win $100 or to lose $50. 
Since you stand to win more than you stand to lose, you should switch.
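
Spelled out, the expected-value calculation behind this reasoning is the
following (a sketch of the step that generates the paradox, not an
endorsement of it):

# Assumes, as the reasoning above does, that the two cases are equally likely.
naive_expected_value_of_B = 0.5 * 50 + 0.5 * 200
print(naive_expected_value_of_B)   # 125.0, which is more than the $100 in hand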

But just before you tell the eccentric millionaire that you would like to 
switch, another thought might occur to you.  If you had picked envelope B, 
you would have come to exactly the same conclusion.  So if the above 
argument is valid, you should switch no matter which envelope you choose.

Therefore the argument for always switching is NOT valid - but I am unable, 
at the moment, to tell you why!

Norman Samish

- Original Message - 
From: "Stathis Papaioannou" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, October 04, 2004 5:43 PM
Subject: RE: Observation selection effects


Here is another version of the paradox, where the way an individual chooses
does not change the initial probabilities:

In the new casino game Flip-Flop, an odd number of players pays $1 each to
individually flip a coin, so that no player can see what another player is
doing. The game organisers then tally up the results, and the result in the
minority is called the Winning Flip, while the majority result is called the
Losing Flip. Before the Winning Flip is announced, each player has the
opportunity to either keep their initial result, or to Switch; this is then
called the player's Final Flip. When the Winning Flip is announced, players
whose Final Flip corresponds with this are paid $2 by the casino, while the
rest are paid nothing.

The question: if you participate in this game, is there any advantage in
Switching? On the one hand, it seems clear that the Winning Flip is as
likely to be heads as tails, so if you played this game repeatedly, in the
long run you should break even, whether you Switch or not. On the other
hand, it seems equally clear that if all the players Switch, the casino will
end up every time paying out more than it collects, so Switching should be a
winning strategy, on average, for each individual player.

I'm sure there is something wrong with the above conclusion. What is it? And
I haven't really thought this through yet, but does this have any bearing on
the self sampling assumption as applied in the Doomsday Argument etc.?

Stathis Papaioannou 




RE: Observation selection effects

2004-10-04 Thread "Hal Finney"
Stathis Papaioannou writes:
> In the new casino game Flip-Flop, an odd number of players pays $1 each to 
> individually flip a coin, so that no player can see what another player is 
> doing. The game organisers then tally up the results, and the result in the 
> minority is called the Winning Flip, while the majority result is called the 
> Losing Flip. Before the Winning Flip is announced, each player has the 
> opportunity to either keep their initial result, or to Switch; this is then 
> called the player's Final Flip. When the Winning Flip is announced, players 
> whose Final Flip corresponds with this are paid $2 by the casino, while the 
> rest are paid nothing.

Think about what happens if the odd number of players is exactly one.  Your
flip is the only result, so it is automatically the majority: you are
guaranteed to hold the Losing Flip before you switch, and switching
guarantees a win.

Then think about what happens if the odd number of players is three.  You
hold the Winning Flip before you switch only if the other two players' flips
both disagree with yours, and there is only a 1/4 chance of that happening.
So before switching you have just a 1/4 chance of holding the Winning Flip,
and switching raises that to 3/4.
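
A quick check of the three-player case (a minimal simulation sketch, not
part of the original message):

import random

def holds_winning_flip(n_players=3):
    flips = [random.randint(0, 1) for _ in range(n_players)]
    winning = 1 if sum(flips) < n_players / 2 else 0   # the minority result wins
    return flips[0] == winning                         # did our player flip it?

trials = 100_000
p = sum(holds_winning_flip() for _ in range(trials)) / trials
print(p)   # about 0.25, so switching would win about 75% of the time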

Hal Finney



RE: Observation selection effects

2004-10-04 Thread Stathis Papaioannou
Here is another version of the paradox, where the way an individual chooses 
does not change the initial probabilities:

In the new casino game Flip-Flop, an odd number of players pays $1 each to 
individually flip a coin, so that no player can see what another player is 
doing. The game organisers then tally up the results, and the result in the 
minority is called the Winning Flip, while the majority result is called the 
Losing Flip. Before the Winning Flip is announced, each player has the 
opportunity to either keep their initial result, or to Switch; this is then 
called the player's Final Flip. When the Winning Flip is announced, players 
whose Final Flip corresponds with this are paid $2 by the casino, while the 
rest are paid nothing.

The question: if you participate in this game, is there any advantage in 
Switching? On the one hand, it seems clear that the Winning Flip is as 
likely to be heads as tails, so if you played this game repeatedly, in the 
long run you should break even, whether you Switch or not. On the other 
hand, it seems equally clear that if all the players Switch, the casino will 
end up every time paying out more than it collects, so Switching should be a 
winning strategy, on average, for each individual player.

I'm sure there is something wrong with the above conclusion. What is it? And 
I haven't really thought this through yet, but does this have any bearing on 
the self sampling assumption as applied in the Doomsday Argument etc.?
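
One way to probe the first question numerically (a minimal sketch, not part
of the original post; five players is an arbitrary choice) is to simulate a
single player over many rounds and compare the long-run average payoff of
always keeping with that of always switching:

import random

def play_one_round(n_players, switch):
    flips = [random.randint(0, 1) for _ in range(n_players)]
    winning = 1 if sum(flips) < n_players / 2 else 0   # minority result wins
    final = 1 - flips[0] if switch else flips[0]       # our player's Final Flip
    return (2 if final == winning else 0) - 1          # prize minus the $1 stake

rounds = 100_000
for switch in (False, True):
    avg = sum(play_one_round(5, switch) for _ in range(rounds)) / rounds
    print("Switch" if switch else "Keep", round(avg, 3))

# Keep comes out around -0.375 and Switch around +0.375 with five players.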

Stathis Papaioannou



Re: Use of Three-State Electronic Level to Express Belief

2004-10-04 Thread Bruno Marchal

At 11:59 29/09/04 -0700, George Levy wrote:

> Bruno Marchal wrote:
> > Hi George,
> > [...] perhaps you could try to motivate your "qBp == If q then p".
> > I don't see the relation with "if q is 1 then p is known, and if q is 0
> > then p is unknown".  How do you manage the "known" notion?
>
> Imagine a three-port device such as an electrically controlled switch.
> Let's say that this device has three lines connected to it: an input
> connected to p, a control connected to q, and an output that we'll call
> qBp.
>
> If the control sets the switch to OFF (i.e. q=0), the output is not
> connected to the input.  Therefore, for anyone observing the output, the
> value of p is unknown, i.e. qBp = x.  The electronic value of x can be
> any arbitrary value except 0 and 1, which are reserved for the possible
> known binary values.
>
> If the control sets the switch to ON (i.e. q=1), the output is connected
> to the input.  Therefore, for anyone observing the output, the value of p
> is known: it is either 0 or 1, depending on what the input p is.
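
For concreteness, George's three-port switch amounts to a tiny three-valued
connective.  A minimal sketch (mine, not from the post; the name UNKNOWN
just stands for the third electronic level):

UNKNOWN = "x"   # any value other than the reserved 0 and 1

def qBp(q, p):
    """The controlled switch: when the control q is ON (1) the output
    reveals p; when q is OFF (0) the value of p stays unknown."""
    return p if q == 1 else UNKNOWN

print(qBp(1, 0), qBp(1, 1), qBp(0, 1))   # prints: 0 1 x
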
Given any logic L1, it is always interesting to ask whether there is
another (perhaps better known) logic L2 such that you can interpret L1
in L2.  Now it can be shown that most modal logics cannot be easily or
directly represented by a multi-valued logic, so I doubt your proposal
can work.

You can always try, but let us be sure we agree on the "intuitive"
meaning of Bp, in the case of Smullyan's "self-referential"
interpretation of the "B".

So we have a machine M.  The machine M prints propositions from time to
time.  Bp means that the machine M prints p.  The machine could print
Bp; in that case the machine prints the proposition that she prints p.
A machine is self-referentially correct (SRC) if her use of B is
correct.

Examples:

The following machine (program) is SRC:

Begin
print "hello"
print "B hello"
End

The following machine (program) is not SRC:

Begin
print "B hello"
End

because the machine pretends that she prints hello, but will never
do it.

OK?

Of course, we will add conditions; mainly that the machine's set of
propositions will be closed under modus ponens, i.e. that if she prints
(one day, sooner or later) X, and if she prints X->Y, then she will
print Y.  Etc.

To sum up: Bp means "M asserts p".  Bp is true (resp. false) if and
only if M asserts p (resp. does not assert p).
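
As a rough illustration of the SRC condition, here is a minimal sketch
(mine, not Bruno's), treating a machine as just the finite set of
propositions it prints and reading "B p" as the claim "I print p"; the
function name is ad hoc:

def is_src(printed):
    """Self-referentially correct: every claim "B p" the machine prints
    must be matched by an actual printing of p itself."""
    printed = set(printed)
    return all(claim[2:] in printed
               for claim in printed
               if claim.startswith("B "))

print(is_src({"hello", "B hello"}))   # True:  the first example machine
print(is_src({"B hello"}))            # False: claims to print hello but never does
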
Bruno

http://iridia.ulb.ac.be/~marchal/