Re: Newcomb's Paradox

2014-12-15 Thread Telmo Menezes
Hi Jason,

Sorry for the delay!

On Thu, Dec 11, 2014 at 5:53 AM, Jason Resch jasonre...@gmail.com wrote:

 Telmo,

 Very creative solution! I think you may have been the first to out-smart
 the super-intelligence. Although would you risk $1,000,000 to gain the
 extra $1,000 on the belief that the super intelligence hasn't figured out a
 way to predict or account for collapse?  QM could always be wrong of
 course, or maybe the super intelligence knows we're in a simulation and has
 reverse engineered the state of the pseudorandom number generator used to
 give the appearance of collapse/splitting. :-)


Realistically, I would be a boring one-boxer. Why risk one million for the
extra one thousand?

If I were convinced that the AI was that good, then I might risk it, more
out of curiosity than from a desire to beat the AI. In the worst case I would
end up feeling like the K Foundation:

http://en.wikipedia.org/wiki/K_Foundation_Burn_a_Million_Quid

Telmo.




 Jason


 On Wed, Dec 10, 2014 at 10:59 AM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Wed, Dec 10, 2014 at 9:55 AM, Jason Resch jasonre...@gmail.com
 wrote:

 I started quite a lively debate at work recently by bringing up
 Newcomb's Paradox. We debated topics ranging from the prisoner's dilemma to
 the halting problem, from free will to retro causality, from first person
 indeterminacy to Godel's incompleteness.

 My colleagues were about evenly split between one-boxing and two-boxing,
 and I was curious if there would be any more consensus among the members of
 this list. If you're unfamiliar with the problem there are descriptions
 here:

 http://www.philosophyexperiments.com/newcomb/
 http://en.wikipedia.org/wiki/Newcomb%27s_paradox

 If you reach a decision, please reply with whether your strategy would
 be to take one box or two, what assumptions you make, and why you think
 your strategy is best. I don't want to bias the results so I'll provide my
 answer in a follow-up post.


 Employ a quantum noise source to generate a random decision. With it,
 generate a very slightly unbalanced coin flip. Use it to decide on one box
 vs. two boxes. Give one box a very slight advantage. The only rational
 choice for the oracle is then to bet on one box. You get 1 million with a
 probability slightly above 0.5, or the full 1.001 million with a
 probability slightly below 0.5.

 Telmo.




 Jason









Re: Newcomb's Paradox

2014-12-12 Thread Stathis Papaioannou
On Friday, December 12, 2014, meekerdb meeke...@verizon.net wrote:

 On 12/11/2014 5:49 PM, Stathis Papaioannou wrote:

 On 12 December 2014 at 12:22, Jason Resch jasonre...@gmail.com wrote:


 On Thu, Dec 11, 2014 at 3:10 PM, LizR lizj...@gmail.com wrote:

 On 11 December 2014 at 18:59, Stathis Papaioannou stath...@gmail.com
 wrote:


 On Thursday, December 11, 2014, LizR lizj...@gmail.com wrote:

 Maybe it's a delayed choice experiment and retroactively collapses the
 wave function, so your choice actually does determine the contents of
 the
 boxes.

 (Just a thought...maybe the second box has a cat in it...)

  No such trickery is required. Consider the experiment where the
 subject
 is a computer program and the clairvoyant is you, with the program's
 source
 code and inputs. You will always know exactly what the program will do
 by
 running it, including all its deliberations. If it is the sort of
 program
 that decides to choose both boxes it will lose the million dollars. The
 question of whether it *ought to* choose both boxes or one is
 meaningless if
 it is a deterministic program, and the paradox arises from failing to
 understand this.

 Not trickery, how dare you?! An attempt to give a meaningful answer
 which
 actually makes something worthwhile from what appears to be a trivial
 paradox without any real teeth.

 But OK since you are determined to belittle my efforts, let's try your
 approach.

 1 wait 10 seconds
 2 print after careful consideration, I have decided to open both boxes
 3 stop

 This is what ANY deterministic computer programme (with no added random
 inputs) would boil down to, although millions of lines of code might
 take a
 while to analyse, and the simplest way to find out the answer in
 practice
 might be to run it (but each run would give the same result, so once
 it's
 been run once we can replace it with my simpler version).

 I have to admit I can't see where the paradox is, or why there is any
 interest in discussing it.

  It's probably not a true paradox, but why it seems like one is that
 depending on which version of decision theory you use, you can be led to
 two
 opposite conclusions. About half of people think one-boxing is best, and
 the
 other half think two-boxing is best, and more often then not, people from
 either side think people on the other side are idiots. However, for
 whatever
 reason, everyone on this list seems to agree one-boxing is best, so you
 are
 missing out on the interesting discussions that can arise from seeing
 people
 justify their alternate decision.

 Often two-boxers will say: the predictor's already made his decision,
 what
 you decide now can't change the past or alter what's already been done.
 So
 you're just leaving money on the table by not taking both boxes. An
 interesting twist one two-boxer told me was: what would you do if both
 boxes
 were transparent, and how does that additional information change what
 the
 best choice is?

 If both boxes were transparent, that would screw up the oracle's
 ability to make the prediction, since there would be a feedback from
 the oracle's attempt at prediction to the subject. The oracle can
 predict if I'm going to pick head or tails, but the oracle *can't*
 predict if I'm going to pick heads or tails if he tells me his
 prediction then waits for me to make a decision.

 Why not?  If the oracle has a complete and accurate simulation of you then
 he can predict your response to what he tells you - it's just that what he
 told you may then no longer be truthful.  Suppose he tells you he predicts
 you'll pick tails, but he actually predicts that after hearing this you'll
 pick heads.  It just means he lied. It's somewhat like the unexpected
 hanging problem, if you reason from the premises given and you reach a
 contradiction then the contradiction was implicit in the premises and no
 valid conclusion follows from them.


If the oracle lies then he controls the inputs and can still make a
prediction. The difficulty arises when he tells the truth, which is
effectively what happens with transparent boxes. I think the answer for the
oracle is then indeterminate, since you can always do the opposite of what
he tells you; although a super-oracle outside the system might still be
able to predict what will actually happen.
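
A minimal sketch, in Python, of that indeterminacy: against a contrarian
subject who always does the opposite of whatever is announced, no truthful
announcement exists. The contrarian policy and the "one"/"two" labels are
illustrative assumptions, not anything from the thread.

def contrarian(announced):
    # Illustrative subject: always does the opposite of the announced prediction.
    return "two" if announced == "one" else "one"

def truthful_announcement(subject):
    # Look for a prediction the oracle could announce and still be right about.
    for guess in ("one", "two"):
        if subject(guess) == guess:
            return guess
    return None   # no fixed point: no truthful announcement exists

print(truthful_announcement(contrarian))   # None, i.e. indeterminate

This is the same diagonal move as in the halting problem mentioned earlier in
the thread: feeding the prediction back to the subject removes any consistent
fixed point.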


-- 
Stathis Papaioannou



Re: Newcomb's Paradox

2014-12-12 Thread Bruno Marchal


On 12 Dec 2014, at 02:22, Jason Resch wrote:




On Thu, Dec 11, 2014 at 3:10 PM, LizR lizj...@gmail.com wrote:
On 11 December 2014 at 18:59, Stathis Papaioannou  
stath...@gmail.com wrote:


On Thursday, December 11, 2014, LizR lizj...@gmail.com wrote:
Maybe it's a delayed choice experiment and retroactively collapses  
the wave function, so your choice actually does determine the  
contents of the boxes.


(Just a thought...maybe the second box has a cat in it...)

No such trickery is required. Consider the experiment where the  
subject is a computer program and the clairvoyant is you, with the  
program's source code and inputs. You will always know exactly what  
the program will do by running it, including all its deliberations.  
If it is the sort of program that decides to choose both boxes it  
will lose the million dollars. The question of whether it *ought to*  
choose both boxes or one is meaningless if it is a deterministic  
program, and the paradox arises from failing to understand this.


Not trickery, how dare you?! An attempt to give a meaningful answer  
which actually makes something worthwhile from what appears to be a  
trivial paradox without any real teeth.


But OK since you are determined to belittle my efforts, let's try  
your approach.


1 wait 10 seconds
2 print after careful consideration, I have decided to open both  
boxes

3 stop

This is what ANY deterministic computer programme (with no added  
random inputs) would boil down to, although millions of lines of  
code might take a while to analyse, and the simplest way to find out  
the answer in practice might be to run it (but each run would give  
the same result, so once it's been run once we can replace it with  
my simpler version).


I have to admit I can't see where the paradox is, or why there is  
any interest in discussing it.



It's probably not a true paradox, but why it seems like one is that  
depending on which version of decision theory you use, you can be  
led to two opposite conclusions.



Yes, I think that initially it was a test to see if you believe in
"free-will" (I add the quotes for a reason which I explain later).


The idea is that those who take the two boxes believe in free-will,
because they believe that if they decide to take only box B, then there is
money in both boxes, and so they leave money on the table for nothing. They
believe that somehow they can fool the predictor, just because, if it was
correct, the two boxes are full of money, so why not take them both!


But on this list I guess most people have no problem with determinism,
and can conceive of some predictor, outside of the box, capable of taking
into account the reasoning above. So we take one box.





About half of people think one-boxing is best, and the other half  
think two-boxing is best, and more often then not, people from  
either side think people on the other side are idiots.


It usually mirrors well the divide over belief in free-will, and indeed
free-will in the sense that John Clark mocks a lot, and here I agree with
him. That notion of free-will negates determinacy, and considers that our
decisions are not predictable.


That notion of free-will can be refuted. It really makes no sense. But
the weaker notion of free-will, which is that our decisions are not
predictable *by ourselves*, still makes sense. If we could find a way
to predict ourselves, we would be cured of hesitation and doubt,
but computationalism entails, by computer science, that there is no
complete cure or vaccination against doubt and hesitation. Even
inconsistency does not prevent you from doubting; only unsoundness
(insanity) can, in theory.





However, for whatever reason, everyone on this list seems to agree  
one-boxing is best, so you are missing out on the interesting  
discussions that can arise from seeing people justify their  
alternate decision.


You might try to see if what I say above is corroborated: believers in
strong (say) free-will choose two-boxing, and non-believers in that strong
free-will (like most people here, I guess) choose one-boxing.






Often two-boxers will say: the predictor's already made his  
decision, what you decide now can't change the past or alter what's  
already been done. So you're just leaving money on the table by not  
taking both boxes. An interesting twist one two-boxer told me was:  
what would you do if both boxes were transparent, and how does that  
additional information change what the best choice is?


You are right with the analogy: it is a form of the surprise
examination paradox. In this case the teacher just says: today I will
give a surprise examination.


The more you take the teacher seriously, the more you make him  
inconsistent, even insane if you push a bit.


It is a bit like: I predict that tomorrow you will act in a manner
that makes this prediction wrong.


Smullyan is correct in seeing a relationship between the examination  

Re: Newcomb's Paradox

2014-12-11 Thread Bruno Marchal


On 10 Dec 2014, at 21:10, Stathis Papaioannou wrote:




On Thursday, December 11, 2014, Terren Suydam  
terren.suy...@gmail.com wrote:
Same here, just one box. The paradox hinges on clairvoyance and how  
we could expect that to be sensible in the universe we live in. To  
my way of thinking, clairvoyance entails a sort of backwards- 
causation which I think can be made sensible in a multiverse. To  
wit, you make your choice (one box, say), and that collapses the  
possible universes you are in to the one in which the clairvoyant  
predicted you would choose one box, and so you get the money.


In other words, the justification for choosing both boxes - that the  
contents of the boxes have already been determined - fails to  
provide an account of clairvoyance that can be made sensible. Or  
rather, I just can't think of one.


Terren

Clairvoyance, as you call it, is not logically problematic. What is  
logically problematic is free will. The paradox seems to be such  
because people believe that their decisions are neither determined  
nor random, which is nonsense.


Free will is not problematic to me, but I have never believed the idea
that free will is related to some non-determinacy, which is the real
nonsense here, IMO. Free will needs only a high-level local
indeterminacy, which indeed all introspective machines have, not a
genuine nomological indeterminacy, which in my opinion is nonsensical.


Bruno





--
Stathis Papaioannou



http://iridia.ulb.ac.be/~marchal/





Re: Newcomb's Paradox

2014-12-11 Thread LizR
On 11 December 2014 at 18:59, Stathis Papaioannou stath...@gmail.com
wrote:


 On Thursday, December 11, 2014, LizR lizj...@gmail.com wrote:

 Maybe it's a delayed choice experiment and retroactively collapses the
 wave function, so your choice actually *does* determine the contents of
 the boxes.

 (Just a thought...maybe the second box has a cat in it...)

 No such trickery is required. Consider the experiment where the subject
 is a computer program and the clairvoyant is you, with the program's source
 code and inputs. You will always know exactly what the program will do by
 running it, including all its deliberations. If it is the sort of program
 that decides to choose both boxes it will lose the million dollars. The
 question of whether it *ought to* choose both boxes or one is meaningless
 if it is a deterministic program, and the paradox arises from failing to
 understand this.

 Not trickery, how dare you?! An attempt to give a meaningful answer which
actually makes something worthwhile from what appears to be a trivial
paradox without any real teeth.

But OK since you are determined to belittle my efforts, let's try your
approach.

1 wait 10 seconds
2 print after careful consideration, I have decided to open both boxes
3 stop

This is what ANY deterministic computer programme (with no added random
inputs) would boil down to, although millions of lines of code might take a
while to analyse, and the simplest way to find out the answer in practice
might be to run it (but each run would give the same result, so once it's
been run once we can replace it with my simpler version).

I have to admit I can't see where the paradox is, or why there is any
interest in discussing it.

My point, in case it wasn't clear, was that there is only a possible
paradox (or at least something worth discussing) if the oracle is
unreliable. But NP is stated in terms of a deterministic universe and an
oracle that can foresee the future 100% accurately, so the only
*possibility* of a paradox arises if it can be turned into a grandfather
paradox via time-travel, which is what I suggested.
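
A minimal sketch, in Python, of the setup Stathis describes above, where the
clairvoyant predicts simply by running the deterministic subject program
ahead of time. The subject function, the "one"/"both" labels and the dollar
amounts are illustrative assumptions.

def subject():
    # A deterministic subject with no random inputs: every run gives the
    # same answer, which is the point of the three-line programme above.
    return "both"

def oracle(program):
    # The clairvoyant predicts by running the program in advance,
    # then fills the boxes according to the prediction.
    prediction = program()
    box_a = 1_000
    box_b = 1_000_000 if prediction == "one" else 0
    return box_a, box_b

box_a, box_b = oracle(subject)
choice = subject()                     # the real run gives the same result as the simulation
payout = box_b if choice == "one" else box_a + box_b
print(choice, payout)                  # 'both', 1000: two-boxing loses the million

Whatever deterministic policy subject() encodes, the simulated run and the
real run agree, which is why a program that two-boxes always finds box B
empty.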



Re: Newcomb's Paradox

2014-12-11 Thread Stathis Papaioannou
On 12 December 2014 at 08:10, LizR lizj...@gmail.com wrote:
 On 11 December 2014 at 18:59, Stathis Papaioannou stath...@gmail.com
 wrote:


 On Thursday, December 11, 2014, LizR lizj...@gmail.com wrote:

 Maybe it's a delayed choice experiment and retroactively collapses the
 wave function, so your choice actually does determine the contents of the
 boxes.

 (Just a thought...maybe the second box has a cat in it...)

 No such trickery is required. Consider the experiment where the subject is
 a computer program and the clairvoyant is you, with the program's source
 code and inputs. You will always know exactly what the program will do by
 running it, including all its deliberations. If it is the sort of program
 that decides to choose both boxes it will lose the million dollars. The
 question of whether it *ought to* choose both boxes or one is meaningless if
 it is a deterministic program, and the paradox arises from failing to
 understand this.

 Not trickery, how dare you?! An attempt to give a meaningful answer which
 actually makes something worthwhile from what appears to be a trivial
 paradox without any real teeth.

 But OK since you are determined to belittle my efforts, let's try your
 approach.

 1 wait 10 seconds
 2 print after careful consideration, I have decided to open both boxes
 3 stop

 This is what ANY deterministic computer programme (with no added random
 inputs) would boil down to, although millions of lines of code might take a
 while to analyse, and the simplest way to find out the answer in practice
 might be to run it (but each run would give the same result, so once it's
 been run once we can replace it with my simpler version).

 I have to admit I can't see where the paradox is, or why there is any
 interest in discussing it.

 My point, in case it wasn't clear, was that there is only a possible paradox
 (or at least something worth discussing) if the oracle is unreliable. But NP
 is stated in terms of a deterministic universe and an oracle that can
 foresee the future 100% accurately, so the only possibility of a paradox
 arises if it can be turned into a grandfather paradox via time-travel,
 which is what I suggested.

I didn't mean to offend. I meant that no clever explanation is needed
because, in my view, it isn't really paradoxical - which (in the
deterministic case, which is how NP is usually stated) you seem to
agree with.


-- 
Stathis Papaioannou



Re: Newcomb's Paradox

2014-12-11 Thread LizR
On 12 December 2014 at 13:47, Stathis Papaioannou stath...@gmail.com
wrote:

 I didn't mean to offend. I meant that no clever explanation is needed
 because, in my view, it isn't really paradoxical - which (in the
 deterministic case, which is how NP is usually stated) you seem to
 agree with.


Sorry. Yes.



Re: Newcomb's Paradox

2014-12-11 Thread Jason Resch
On Thu, Dec 11, 2014 at 3:10 PM, LizR lizj...@gmail.com wrote:

 On 11 December 2014 at 18:59, Stathis Papaioannou stath...@gmail.com
 wrote:


 On Thursday, December 11, 2014, LizR lizj...@gmail.com wrote:

 Maybe it's a delayed choice experiment and retroactively collapses the
 wave function, so your choice actually *does* determine the contents of
 the boxes.

 (Just a thought...maybe the second box has a cat in it...)

 No such trickery is required. Consider the experiment where the subject
 is a computer program and the clairvoyant is you, with the program's source
 code and inputs. You will always know exactly what the program will do by
 running it, including all its deliberations. If it is the sort of program
 that decides to choose both boxes it will lose the million dollars. The
 question of whether it *ought to* choose both boxes or one is meaningless
 if it is a deterministic program, and the paradox arises from failing to
 understand this.

 Not trickery, how dare you?! An attempt to give a meaningful answer which
 actually makes something worthwhile from what appears to be a trivial
 paradox without any real teeth.

 But OK since you are determined to belittle my efforts, let's try your
 approach.

 1 wait 10 seconds
 2 print after careful consideration, I have decided to open both boxes
 3 stop

 This is what ANY deterministic computer programme (with no added random
 inputs) would boil down to, although millions of lines of code might take a
 while to analyse, and the simplest way to find out the answer in practice
 might be to run it (but each run would give the same result, so once it's
 been run once we can replace it with my simpler version).

 I have to admit I can't see where the paradox is, or why there is any
 interest in discussing it.


It's probably not a true paradox, but why it seems like one is that
depending on which version of decision theory you use, you can be led to
two opposite conclusions. About half of people think one-boxing is best,
and the other half think two-boxing is best, and more often than not,
people from either side think people on the other side are idiots. However,
for whatever reason, everyone on this list seems to agree one-boxing is
best, so you are missing out on the interesting discussions that can arise
from seeing people justify their alternate decision.

Often two-boxers will say: the predictor's already made his decision, what
you decide now can't change the past or alter what's already been done. So
you're just leaving money on the table by not taking both boxes. An
interesting twist one two-boxer told me was: what would you do if both
boxes were transparent, and how does that additional information change
what the best choice is?

Jason



Re: Newcomb's Paradox

2014-12-11 Thread LizR
If the world is deterministic and the oracle is always right then one
boxing is clearly the better choice (assuming you just want to maximise
your return), because if you take both then determinism and correctness
will mean you have to lose, by definition of those terms.

I'll have a think about the transparent boxes...

On 12 December 2014 at 14:22, Jason Resch jasonre...@gmail.com wrote:



 On Thu, Dec 11, 2014 at 3:10 PM, LizR lizj...@gmail.com wrote:

 On 11 December 2014 at 18:59, Stathis Papaioannou stath...@gmail.com
 wrote:


 On Thursday, December 11, 2014, LizR lizj...@gmail.com wrote:

 Maybe it's a delayed choice experiment and retroactively collapses the
 wave function, so your choice actually *does* determine the contents
 of the boxes.

 (Just a thought...maybe the second box has a cat in it...)

 No such trickery is required. Consider the experiment where the subject
 is a computer program and the clairvoyant is you, with the program's source
 code and inputs. You will always know exactly what the program will do by
 running it, including all its deliberations. If it is the sort of program
 that decides to choose both boxes it will lose the million dollars. The
 question of whether it *ought to* choose both boxes or one is meaningless
 if it is a deterministic program, and the paradox arises from failing to
 understand this.

 Not trickery, how dare you?! An attempt to give a meaningful answer
 which actually makes something worthwhile from what appears to be a trivial
 paradox without any real teeth.

 But OK since you are determined to belittle my efforts, let's try your
 approach.

 1 wait 10 seconds
 2 print after careful consideration, I have decided to open both boxes
 3 stop

 This is what ANY deterministic computer programme (with no added random
 inputs) would boil down to, although millions of lines of code might take a
 while to analyse, and the simplest way to find out the answer in practice
 might be to run it (but each run would give the same result, so once it's
 been run once we can replace it with my simpler version).

 I have to admit I can't see where the paradox is, or why there is any
 interest in discussing it.


 It's probably not a true paradox, but why it seems like one is that
 depending on which version of decision theory you use, you can be led to
 two opposite conclusions. About half of people think one-boxing is best,
 and the other half think two-boxing is best, and more often then not,
 people from either side think people on the other side are idiots. However,
 for whatever reason, everyone on this list seems to agree one-boxing is
 best, so you are missing out on the interesting discussions that can arise
 from seeing people justify their alternate decision.

 Often two-boxers will say: the predictor's already made his decision, what
 you decide now can't change the past or alter what's already been done. So
 you're just leaving money on the table by not taking both boxes. An
 interesting twist one two-boxer told me was: what would you do if both
 boxes were transparent, and how does that additional information change
 what the best choice is?

 Jason





Re: Newcomb's Paradox

2014-12-11 Thread Stathis Papaioannou
On 12 December 2014 at 12:22, Jason Resch jasonre...@gmail.com wrote:


 On Thu, Dec 11, 2014 at 3:10 PM, LizR lizj...@gmail.com wrote:

 On 11 December 2014 at 18:59, Stathis Papaioannou stath...@gmail.com
 wrote:


 On Thursday, December 11, 2014, LizR lizj...@gmail.com wrote:

 Maybe it's a delayed choice experiment and retroactively collapses the
 wave function, so your choice actually does determine the contents of the
 boxes.

 (Just a thought...maybe the second box has a cat in it...)

 No such trickery is required. Consider the experiment where the subject
 is a computer program and the clairvoyant is you, with the program's source
 code and inputs. You will always know exactly what the program will do by
 running it, including all its deliberations. If it is the sort of program
 that decides to choose both boxes it will lose the million dollars. The
 question of whether it *ought to* choose both boxes or one is meaningless if
 it is a deterministic program, and the paradox arises from failing to
 understand this.

 Not trickery, how dare you?! An attempt to give a meaningful answer which
 actually makes something worthwhile from what appears to be a trivial
 paradox without any real teeth.

 But OK since you are determined to belittle my efforts, let's try your
 approach.

 1 wait 10 seconds
 2 print after careful consideration, I have decided to open both boxes
 3 stop

 This is what ANY deterministic computer programme (with no added random
 inputs) would boil down to, although millions of lines of code might take a
 while to analyse, and the simplest way to find out the answer in practice
 might be to run it (but each run would give the same result, so once it's
 been run once we can replace it with my simpler version).

 I have to admit I can't see where the paradox is, or why there is any
 interest in discussing it.


 It's probably not a true paradox, but why it seems like one is that
 depending on which version of decision theory you use, you can be led to two
 opposite conclusions. About half of people think one-boxing is best, and the
 other half think two-boxing is best, and more often then not, people from
 either side think people on the other side are idiots. However, for whatever
 reason, everyone on this list seems to agree one-boxing is best, so you are
 missing out on the interesting discussions that can arise from seeing people
 justify their alternate decision.

 Often two-boxers will say: the predictor's already made his decision, what
 you decide now can't change the past or alter what's already been done. So
 you're just leaving money on the table by not taking both boxes. An
 interesting twist one two-boxer told me was: what would you do if both boxes
 were transparent, and how does that additional information change what the
 best choice is?

If both boxes were transparent, that would screw up the oracle's
ability to make the prediction, since there would be feedback from
the oracle's attempt at prediction to the subject. The oracle can
predict whether I'm going to pick heads or tails, but the oracle *can't*
predict whether I'm going to pick heads or tails if he tells me his
prediction and then waits for me to make a decision.


-- 
Stathis Papaioannou



Re: Newcomb's Paradox

2014-12-11 Thread Jason Resch
On Thu, Dec 11, 2014 at 7:49 PM, Stathis Papaioannou stath...@gmail.com
wrote:

 On 12 December 2014 at 12:22, Jason Resch jasonre...@gmail.com wrote:
 
 
  On Thu, Dec 11, 2014 at 3:10 PM, LizR lizj...@gmail.com wrote:
 
  On 11 December 2014 at 18:59, Stathis Papaioannou stath...@gmail.com
  wrote:
 
 
  On Thursday, December 11, 2014, LizR lizj...@gmail.com wrote:
 
  Maybe it's a delayed choice experiment and retroactively collapses the
  wave function, so your choice actually does determine the contents of
 the
  boxes.
 
  (Just a thought...maybe the second box has a cat in it...)
 
  No such trickery is required. Consider the experiment where the subject
  is a computer program and the clairvoyant is you, with the program's
 source
  code and inputs. You will always know exactly what the program will do
 by
  running it, including all its deliberations. If it is the sort of
 program
  that decides to choose both boxes it will lose the million dollars. The
  question of whether it *ought to* choose both boxes or one is
 meaningless if
  it is a deterministic program, and the paradox arises from failing to
  understand this.
 
  Not trickery, how dare you?! An attempt to give a meaningful answer
 which
  actually makes something worthwhile from what appears to be a trivial
  paradox without any real teeth.
 
  But OK since you are determined to belittle my efforts, let's try your
  approach.
 
  1 wait 10 seconds
  2 print after careful consideration, I have decided to open both boxes
  3 stop
 
  This is what ANY deterministic computer programme (with no added random
  inputs) would boil down to, although millions of lines of code might
 take a
  while to analyse, and the simplest way to find out the answer in
 practice
  might be to run it (but each run would give the same result, so once
 it's
  been run once we can replace it with my simpler version).
 
  I have to admit I can't see where the paradox is, or why there is any
  interest in discussing it.
 
 
  It's probably not a true paradox, but why it seems like one is that
  depending on which version of decision theory you use, you can be led to
 two
  opposite conclusions. About half of people think one-boxing is best, and
 the
  other half think two-boxing is best, and more often then not, people from
  either side think people on the other side are idiots. However, for
 whatever
  reason, everyone on this list seems to agree one-boxing is best, so you
 are
  missing out on the interesting discussions that can arise from seeing
 people
  justify their alternate decision.
 
  Often two-boxers will say: the predictor's already made his decision,
 what
  you decide now can't change the past or alter what's already been done.
 So
  you're just leaving money on the table by not taking both boxes. An
  interesting twist one two-boxer told me was: what would you do if both
 boxes
  were transparent, and how does that additional information change what
 the
  best choice is?

 If both boxes were transparent, that would screw up the oracle's
 ability to make the prediction, since there would be a feedback from
 the oracle's attempt at prediction to the subject. The oracle can
 predict if I'm going to pick head or tails, but the oracle *can't*
 predict if I'm going to pick heads or tails if he tells me his
 prediction then waits for me to make a decision.


Right that's what I told him. :-)

Jason



Re: Newcomb's Paradox

2014-12-11 Thread LizR
The two transparent boxes can have two possible outcomes. One is that the
player only takes box B, which contains $1 million (which I believe is the
agreed amount). The other is that the player sees the million dollars, takes
both boxes, and proves that the oracle is not infallible.

The problem with N's problem is that it isn't well enough defined to be a
problem!

Without stipulating the nature of the oracle, the nature of reality, and so
on, there is no way to evaluate the situation. Why is the oracle thought to
be infallible? How could it possibly know in advance what someone will do?
Are we living in a multiverse in which all decisions get made anyway? Etc.



Re: Newcomb's Paradox

2014-12-11 Thread LizR
On 12 December 2014 at 14:49, Stathis Papaioannou stath...@gmail.com
wrote:


 If both boxes were transparent, that would screw up the oracle's
 ability to make the prediction, since there would be a feedback from
 the oracle's attempt at prediction to the subject. The oracle can
 predict if I'm going to pick head or tails, but the oracle *can't*
 predict if I'm going to pick heads or tails if he tells me his
 prediction then waits for me to make a decision.


Well, yes, it's basically another version of my suggestion about
retroactive collapse of the wavefunction, which was also an attempt to feed
back information from the time when the decision is made to the point at
which the prediction was made. So I stand by my original point - this can
only be a paradox if it's turned into a Grandfather paradox - i.e. if in
some way there is a self-negating temporal loop from the making of the
decision to the making of the prediction (obviously making the boxes
transparent means there's no need for actual time travel, but it's the same
feedback principle - admittedly rather simpler to arrange!)



Re: Newcomb's Paradox

2014-12-11 Thread Jason Resch
On Thu, Dec 11, 2014 at 8:55 PM, LizR lizj...@gmail.com wrote:

 The two transparent boxes can have two possible outcomes. One is that the
 player only takes box B, which contains $1 million (I believe is the agreed
 amount). The other is that the player sees the million dollars, takes both
 boxes, and proves that the oracle is not infallible,

 The problem with N's problem is that it isn't well enough defined to be a
 problem!

 Without stipulating the nature of the oracle, the nature of reality, and
 so on, there is no way to evaluate the situation. Why is the oracle thought
 to be infallible? How could it possibly know in advance what someone will
 do? Are we living in a multiverse in which all decisions get made anyway?
 Etc.



Oddly, philosophy undergraduates lean towards one box, while philosophy
professors favor two boxes:

http://lesswrong.com/lw/hqs/why_do_theists_undergrads_and_less_wrongers_favor/

Jason



Re: Newcomb's Paradox

2014-12-11 Thread meekerdb

On 12/11/2014 5:49 PM, Stathis Papaioannou wrote:

On 12 December 2014 at 12:22, Jason Resch jasonre...@gmail.com wrote:


On Thu, Dec 11, 2014 at 3:10 PM, LizR lizj...@gmail.com wrote:

On 11 December 2014 at 18:59, Stathis Papaioannou stath...@gmail.com
wrote:


On Thursday, December 11, 2014, LizR lizj...@gmail.com wrote:

Maybe it's a delayed choice experiment and retroactively collapses the
wave function, so your choice actually does determine the contents of the
boxes.

(Just a thought...maybe the second box has a cat in it...)


No such trickery is required. Consider the experiment where the subject
is a computer program and the clairvoyant is you, with the program's source
code and inputs. You will always know exactly what the program will do by
running it, including all its deliberations. If it is the sort of program
that decides to choose both boxes it will lose the million dollars. The
question of whether it *ought to* choose both boxes or one is meaningless if
it is a deterministic program, and the paradox arises from failing to
understand this.

Not trickery, how dare you?! An attempt to give a meaningful answer which
actually makes something worthwhile from what appears to be a trivial
paradox without any real teeth.

But OK since you are determined to belittle my efforts, let's try your
approach.

1 wait 10 seconds
2 print after careful consideration, I have decided to open both boxes
3 stop

This is what ANY deterministic computer programme (with no added random
inputs) would boil down to, although millions of lines of code might take a
while to analyse, and the simplest way to find out the answer in practice
might be to run it (but each run would give the same result, so once it's
been run once we can replace it with my simpler version).

I have to admit I can't see where the paradox is, or why there is any
interest in discussing it.


It's probably not a true paradox, but why it seems like one is that
depending on which version of decision theory you use, you can be led to two
opposite conclusions. About half of people think one-boxing is best, and the
other half think two-boxing is best, and more often then not, people from
either side think people on the other side are idiots. However, for whatever
reason, everyone on this list seems to agree one-boxing is best, so you are
missing out on the interesting discussions that can arise from seeing people
justify their alternate decision.

Often two-boxers will say: the predictor's already made his decision, what
you decide now can't change the past or alter what's already been done. So
you're just leaving money on the table by not taking both boxes. An
interesting twist one two-boxer told me was: what would you do if both boxes
were transparent, and how does that additional information change what the
best choice is?

If both boxes were transparent, that would screw up the oracle's
ability to make the prediction, since there would be a feedback from
the oracle's attempt at prediction to the subject. The oracle can
predict if I'm going to pick head or tails, but the oracle *can't*
predict if I'm going to pick heads or tails if he tells me his
prediction then waits for me to make a decision.
Why not?  If the oracle has a complete and accurate simulation of you then he
can predict your response to what he tells you - it's just that what he told
you may then no longer be truthful.  Suppose he tells you he predicts you'll
pick tails, but he actually predicts that after hearing this you'll pick
heads.  It just means he lied. It's somewhat like the unexpected hanging
problem: if you reason from the premises given and you reach a contradiction,
then the contradiction was implicit in the premises and no valid conclusion
follows from them.
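
A small companion sketch, in Python, of this point: if the announcement need
not be truthful, the oracle can still privately predict and steer the
subject. The contrarian policy (a subject who always does the opposite of
what is announced) and the "one"/"two" labels are illustrative assumptions.

def contrarian(announced):
    # Illustrative subject: always does the opposite of the announced prediction.
    return "two" if announced == "one" else "one"

def lying_oracle(subject):
    # Privately simulate the response to each possible announcement,
    # then announce whatever it likes while keeping the real prediction private.
    outcomes = {announced: subject(announced) for announced in ("one", "two")}
    announced = "one"                     # e.g. announce "one"...
    private_prediction = outcomes[announced]   # ...while privately predicting "two"
    return announced, private_prediction

print(lying_oracle(contrarian))   # ('one', 'two'): the announcement is a lie, the prediction is right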


Brent



Newcomb's Paradox

2014-12-10 Thread Jason Resch
I started quite a lively debate at work recently by bringing up Newcomb's
Paradox. We debated topics ranging from the prisoner's dilemma to the
halting problem, from free will to retro causality, from first person
indeterminacy to Godel's incompleteness.

My colleagues were about evenly split between one-boxing and two-boxing,
and I was curious if there would be any more consensus among the members of
this list. If you're unfamiliar with the problem there are descriptions
here:

http://www.philosophyexperiments.com/newcomb/
http://en.wikipedia.org/wiki/Newcomb%27s_paradox

If you reach a decision, please reply with whether your strategy would be
to take one box or two, what assumptions you make, and why you think your
strategy is best. I don't want to bias the results so I'll provide my
answer in a follow-up post.

Jason



Re: Newcomb's Paradox

2014-12-10 Thread Bruno Marchal


On 10 Dec 2014, at 09:55, Jason Resch wrote:

I started quite a lively debate at work recently by bringing up  
Newcomb's Paradox. We debated topics ranging from the prisoner's  
dilemma to the halting problem, from free will to retro causality,  
from first person indeterminacy to Godel's incompleteness.


My colleagues were about evenly split between one-boxing and two- 
boxing, and I was curious if there would be any more consensus among  
the members of this list. If you're unfamiliar with the problem  
there are descriptions here:


http://www.philosophyexperiments.com/newcomb/
http://en.wikipedia.org/wiki/Newcomb%27s_paradox

If you reach a decision, please reply with whether your strategy  
would be to take one box or two, what assumptions you make, and why  
you think your strategy is best. I don't want to bias the results so  
I'll provide my answer in a follow-up post.


I take only one box. Non-randomly! I use my free-will ...
To be sure and to make things simpler, I assume the predictor is 100%
accurate.


Bruno





Jason



http://iridia.ulb.ac.be/~marchal/





Re: Newcomb's Paradox

2014-12-10 Thread Terren Suydam
Same here, just one box. The paradox hinges on clairvoyance and how we
could expect that to be sensible in the universe we live in. To my way of
thinking, clairvoyance entails a sort of backwards-causation which I think
can be made sensible in a multiverse. To wit, you make your choice (one
box, say), and that collapses the possible universes you are in to the
one in which the clairvoyant predicted you would choose one box, and so you
get the money.

In other words, the justification for choosing both boxes - that the
contents of the boxes have already been determined - fails to provide an
account of clairvoyance that can be made sensible. Or rather, I just can't
think of one.

Terren

On Wed, Dec 10, 2014 at 5:13 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 10 Dec 2014, at 09:55, Jason Resch wrote:

 I started quite a lively debate at work recently by bringing up Newcomb's
 Paradox. We debated topics ranging from the prisoner's dilemma to the
 halting problem, from free will to retro causality, from first person
 indeterminacy to Godel's incompleteness.

 My colleagues were about evenly split between one-boxing and two-boxing,
 and I was curious if there would be any more consensus among the members of
 this list. If you're unfamiliar with the problem there are descriptions
 here:

 http://www.philosophyexperiments.com/newcomb/
 http://en.wikipedia.org/wiki/Newcomb%27s_paradox

 If you reach a decision, please reply with whether your strategy would be
 to take one box or two, what assumptions you make, and why you think your
 strategy is best. I don't want to bias the results so I'll provide my
 answer in a follow-up post.


 I take only one box. Non randomly! I use my free-will ...
 To be sure and make thing simpler, I assume the predictor is 100% accurate.

 Bruno




 Jason



 http://iridia.ulb.ac.be/~marchal/







Re: Newcomb's Paradox

2014-12-10 Thread Jason Resch
What if it's not a clairvoyant but a super-intelligent alien with an
accuracy of 99.%? Does that change your answer?

What if it is a human psychologist with an accuracy of 80%?

One of my friends said if it was 100% he would one-box, but if it was even
slightly below 100% he would take two boxes.

Jason

On Wed, Dec 10, 2014 at 10:36 AM, Terren Suydam terren.suy...@gmail.com
wrote:

 Same here, just one box. The paradox hinges on clairvoyance and how we
 could expect that to be sensible in the universe we live in. To my way of
 thinking, clairvoyance entails a sort of backwards-causation which I think
 can be made sensible in a multiverse. To wit, you make your choice (one
 box, say), and that collapses the possible universes you are in to the
 one in which the clairvoyant predicted you would choose one box, and so you
 get the money.

 In other words, the justification for choosing both boxes - that the
 contents of the boxes have already been determined - fails to provide an
 account of clairvoyance that can be made sensible. Or rather, I just can't
 think of one.

 Terren

 On Wed, Dec 10, 2014 at 5:13 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 10 Dec 2014, at 09:55, Jason Resch wrote:

 I started quite a lively debate at work recently by bringing up Newcomb's
 Paradox. We debated topics ranging from the prisoner's dilemma to the
 halting problem, from free will to retro causality, from first person
 indeterminacy to Godel's incompleteness.

 My colleagues were about evenly split between one-boxing and two-boxing,
 and I was curious if there would be any more consensus among the members of
 this list. If you're unfamiliar with the problem there are descriptions
 here:

 http://www.philosophyexperiments.com/newcomb/
 http://en.wikipedia.org/wiki/Newcomb%27s_paradox

 If you reach a decision, please reply with whether your strategy would be
 to take one box or two, what assumptions you make, and why you think your
 strategy is best. I don't want to bias the results so I'll provide my
 answer in a follow-up post.


 I take only one box. Non randomly! I use my free-will ...
 To be sure and make thing simpler, I assume the predictor is 100%
 accurate.

 Bruno




 Jason



 http://iridia.ulb.ac.be/~marchal/









Re: Newcomb's Paradox

2014-12-10 Thread Telmo Menezes
On Wed, Dec 10, 2014 at 9:55 AM, Jason Resch jasonre...@gmail.com wrote:

 I started quite a lively debate at work recently by bringing up Newcomb's
 Paradox. We debated topics ranging from the prisoner's dilemma to the
 halting problem, from free will to retro causality, from first person
 indeterminacy to Godel's incompleteness.

 My colleagues were about evenly split between one-boxing and two-boxing,
 and I was curious if there would be any more consensus among the members of
 this list. If you're unfamiliar with the problem there are descriptions
 here:

 http://www.philosophyexperiments.com/newcomb/
 http://en.wikipedia.org/wiki/Newcomb%27s_paradox

 If you reach a decision, please reply with whether your strategy would be
 to take one box or two, what assumptions you make, and why you think your
 strategy is best. I don't want to bias the results so I'll provide my
 answer in a follow-up post.


Employ a quantum noise source to generate a random decision. With it,
generate a very slightly unbalanced coin flip. Use it to decide on one box
vs. two boxes. Give one box a very slight advantage. The only rational
choice for the oracle is to bet on one box. You get $1 million with a
probability slightly above 0.5, or the full $1,001,000 with a probability
slightly below 0.5.
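
A minimal sketch of the expected payoff under this kind of biased randomization
(the exact bias, the $1,000,000 / $1,000 amounts, and the predictor betting on
the likelier choice are assumptions for illustration):

# Player one-boxes with probability p slightly above 0.5, so the predictor's
# best bet is to assume one-boxing and fill the opaque box with $1,000,000.
def expected_payoff(p_one_box, opaque=1_000_000, visible=1_000):
    one_box = opaque            # take only the opaque box
    two_box = opaque + visible  # take both boxes
    return p_one_box * one_box + (1 - p_one_box) * two_box

print(expected_payoff(0.5001))  # ~1,000,499.9: the million, plus the extra $1,000 about half the time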

Telmo.




 Jason





Re: Newcomb's Paradox

2014-12-10 Thread Quentin Anciaux
I would say that if his predictions are better than 50% accurate, I would go for
one box... But I would still need proof that they actually are... and
maybe also an explanation of how the predictions are made...
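
For what it's worth, a quick expected-value check (a sketch, assuming the
standard $1,000,000 / $1,000 amounts) puts the break-even accuracy only
slightly above 50%:

# Expected value of each strategy against a predictor with accuracy p.
def ev_one_box(p, big=1_000_000):
    return p * big                # the opaque box is full only if the predictor foresaw one-boxing

def ev_two_box(p, big=1_000_000, small=1_000):
    return (1 - p) * big + small  # the opaque box is full only if the predictor was wrong

for p in (0.5, 0.5005, 0.8, 0.999999):
    better = "one box" if ev_one_box(p) > ev_two_box(p) else "two boxes"
    print(p, ev_one_box(p), ev_two_box(p), better)
# One-boxing wins as soon as p > 0.5005, i.e. barely better than a coin flip.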

Regards,
Quentin

2014-12-10 17:48 GMT+01:00 Jason Resch jasonre...@gmail.com:

 What if it's not a clairvoyant but a super-intelligent alien with an
 accuracy of 99.%? Does that change your answer?

 What if it is a human psychologist with an accuracy of 80%?

 One of my friends said if it was 100% he would one-box, but if it was even
 slightly below 100% he would take two boxes.

 Jason


 On Wed, Dec 10, 2014 at 10:36 AM, Terren Suydam terren.suy...@gmail.com
 wrote:

 Same here, just one box. The paradox hinges on clairvoyance and how we
 could expect that to be sensible in the universe we live in. To my way of
 thinking, clairvoyance entails a sort of backwards-causation which I think
 can be made sensible in a multiverse. To wit, you make your choice (one
 box, say), and that collapses the possible universes you are in to the
 one in which the clairvoyant predicted you would choose one box, and so you
 get the money.

 In other words, the justification for choosing both boxes - that the
 contents of the boxes have already been determined - fails to provide an
 account of clairvoyance that can be made sensible. Or rather, I just can't
 think of one.

 Terren

 On Wed, Dec 10, 2014 at 5:13 AM, Bruno Marchal marc...@ulb.ac.be wrote:


 On 10 Dec 2014, at 09:55, Jason Resch wrote:

 I started quite a lively debate at work recently by bringing up
 Newcomb's Paradox. We debated topics ranging from the prisoner's dilemma to
 the halting problem, from free will to retro causality, from first person
 indeterminacy to Godel's incompleteness.

 My colleagues were about evenly split between one-boxing and two-boxing,
 and I was curious if there would be any more consensus among the members of
 this list. If you're unfamiliar with the problem there are descriptions
 here:

 http://www.philosophyexperiments.com/newcomb/
 http://en.wikipedia.org/wiki/Newcomb%27s_paradox

 If you reach a decision, please reply with whether your strategy would
 be to take one box or two, what assumptions you make, and why you think
 your strategy is best. I don't want to bias the results so I'll provide my
 answer in a follow-up post.


 I take only one box. Non-randomly! I use my free will ...
 To be sure and to make things simpler, I assume the predictor is 100%
 accurate.

 Bruno




 Jason



 http://iridia.ulb.ac.be/~marchal/











-- 
All those moments will be lost in time, like tears in rain. (Roy
Batty/Rutger Hauer)



Re: Newcomb's Paradox

2014-12-10 Thread Stathis Papaioannou
On Thursday, December 11, 2014, Terren Suydam terren.suy...@gmail.com
wrote:

 Same here, just one box. The paradox hinges on clairvoyance and how we
 could expect that to be sensible in the universe we live in. To my way of
 thinking, clairvoyance entails a sort of backwards-causation which I think
 can be made sensible in a multiverse. To wit, you make your choice (one
 box, say), and that collapses the possible universes you are in to the
 one in which the clairvoyant predicted you would choose one box, and so you
 get the money.

 In other words, the justification for choosing both boxes - that the
 contents of the boxes have already been determined - fails to provide an
 account of clairvoyance that can be made sensible. Or rather, I just can't
 think of one.

 Terren


Clairvoyance, as you call it, is not logically problematic. What is
logically problematic is free will. The paradox only seems paradoxical because
people believe that their decisions are neither determined nor random,
which is nonsense.


-- 
Stathis Papaioannou



Re: Newcomb's Paradox

2014-12-10 Thread LizR
Maybe it's a delayed choice experiment and retroactively collapses the wave
function, so your choice actually *does* determine the contents of the
boxes.

(Just a thought...maybe the second box has a cat in it...)

On 11 December 2014 at 09:10, Stathis Papaioannou stath...@gmail.com
wrote:



 On Thursday, December 11, 2014, Terren Suydam terren.suy...@gmail.com
 wrote:

 Same here, just one box. The paradox hinges on clairvoyance and how we
 could expect that to be sensible in the universe we live in. To my way of
 thinking, clairvoyance entails a sort of backwards-causation which I think
 can be made sensible in a multiverse. To wit, you make your choice (one
 box, say), and that collapses the possible universes you are in to the
 one in which the clairvoyant predicted you would choose one box, and so you
 get the money.

 In other words, the justification for choosing both boxes - that the
 contents of the boxes have already been determined - fails to provide an
 account of clairvoyance that can be made sensible. Or rather, I just can't
 think of one.

 Terren


 Clairvoyance, as you call it, is not logically problematic. What is
 logically problematic is free will. The paradox seems to be such because
 people believe that their decisions are neither determined nor random,
 which is nonsense.


 --
 Stathis Papaioannou





Re: Newcomb's Paradox

2014-12-10 Thread Jason Resch
Telmo,

Very creative solution! I think you may have been the first to out-smart
the super-intelligence. Although would you risk $1,000,000 to gain the
extra $1,000 on the belief that the super intelligence hasn't figured out a
way to predict or account for collapse?  QM could always be wrong of
course, or maybe the super intelligence knows we're in a simulation and has
reverse engineered the state of the pseudorandom number generator used to
give the appearance of collapse/splitting. :-)

Jason

On Wed, Dec 10, 2014 at 10:59 AM, Telmo Menezes te...@telmomenezes.com
wrote:



 On Wed, Dec 10, 2014 at 9:55 AM, Jason Resch jasonre...@gmail.com wrote:

 I started quite a lively debate at work recently by bringing up Newcomb's
 Paradox. We debated topics ranging from the prisoner's dilemma to the
 halting problem, from free will to retro causality, from first person
 indeterminacy to Godel's incompleteness.

 My colleagues were about evenly split between one-boxing and two-boxing,
 and I was curious if there would be any more consensus among the members of
 this list. If you're unfamiliar with the problem there are descriptions
 here:

 http://www.philosophyexperiments.com/newcomb/
 http://en.wikipedia.org/wiki/Newcomb%27s_paradox

 If you reach a decision, please reply with whether your strategy would be
 to take one box or two, what assumptions you make, and why you think your
 strategy is best. I don't want to bias the results so I'll provide my
 answer in a follow-up post.


 Employ a quantum noise source to generate a random decision. With it,
 generate a very slightly unbalanced coin flip. Use it to decide on one box
 vs. two boxes. Give one box a very slight advantage. The only rational
 choice for the oracle is to bet on one box. You get $1 million with a
 probability slightly above 0.5, or the full $1,001,000 with a probability
 slightly below 0.5.

 Telmo.




 Jason







Re: Newcomb's Paradox

2014-12-10 Thread Jason Resch
How boring though, that everyone agrees with one-boxing...

Jason

On Wed, Dec 10, 2014 at 10:53 PM, Jason Resch jasonre...@gmail.com wrote:

 Telmo,

 Very creative solution! I think you may have been the first to out-smart
 the super-intelligence. Although would you risk $1,000,000 to gain the
 extra $1,000 on the belief that the super intelligence hasn't figured out a
 way to predict or account for collapse?  QM could always be wrong of
 course, or maybe the super intelligence knows we're in a simulation and has
 reverse engineered the state of the pseudorandom number generator used to
 give the appearance of collapse/splitting. :-)

 Jason


 On Wed, Dec 10, 2014 at 10:59 AM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Wed, Dec 10, 2014 at 9:55 AM, Jason Resch jasonre...@gmail.com
 wrote:

 I started quite a lively debate at work recently by bringing up
 Newcomb's Paradox. We debated topics ranging from the prisoner's dilemma to
 the halting problem, from free will to retro causality, from first person
 indeterminacy to Godel's incompleteness.

 My colleagues were about evenly split between one-boxing and two-boxing,
 and I was curious if there would be any more consensus among the members of
 this list. If you're unfamiliar with the problem there are descriptions
 here:

 http://www.philosophyexperiments.com/newcomb/
 http://en.wikipedia.org/wiki/Newcomb%27s_paradox

 If you reach a decision, please reply with whether your strategy would
 be to take one box or two, what assumptions you make, and why you think
 your strategy is best. I don't want to bias the results so I'll provide my
 answer in a follow-up post.


 Employ a quantum noise source to generate a random decision. With it,
 generate a very slightly unbalanced coin flip. Use it to decide on one box
 vs. two boxes. Give one box a very slight advantage. The only rational
 choice for the oracle is to bet on one box. You get $1 million with a
 probability slightly above 0.5, or the full $1,001,000 with a probability
 slightly below 0.5.

 Telmo.




 Jason









Re: Newcomb's Paradox

2014-12-10 Thread Jason Resch
We also developed an analogous version of Newcomb's paradox, but
couched in the form of the prisoner's dilemma:

If you were forced to play the prisoner's dilemma against yourself (in a
fully deterministic setting, such as with both of your minds uploaded to a
computer), would you defect or cooperate (assuming you're playing
completely selfishly, with no regard for your opponent)? In the classic
prisoner's dilemma, defecting is always better than cooperating, because
it's a better choice when your opponent defects, and it is a better choice
when your opponent cooperates. (Just as some decision theories say it's
always better to take two boxes, because no matter what is in the opaque
box, you get an extra $1,000 on top.) However, in this situation (playing
against a deterministic copy of yourself) your choice is correlated with
(though not physically or causally related to) the choice made by your
opponent. So those who one-box are more apt to say cooperation is better
than defecting in this case, since the Defect/Cooperate and
Cooperate/Defect outcomes are no longer possible, just as, with an
accurate predictor, getting $0 or getting $1,001,000 is not possible.
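
A toy illustration of that correlation, using conventional prisoner's-dilemma
payoffs (the specific numbers are only for illustration):

# Row player's payoff for (my move, opponent's move); standard PD ordering.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Against an independent opponent, defection dominates move by move:
for opp in ("C", "D"):
    assert PAYOFF[("D", opp)] > PAYOFF[("C", opp)]

# Against a deterministic copy run on the same input, the copy's move equals
# yours, so only the diagonal outcomes (C, C) and (D, D) are reachable:
for move in ("C", "D"):
    print(move, PAYOFF[(move, move)])   # C -> 3, D -> 1: cooperating pays more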

Is there anyone here who thinks two-boxing (or defecting in the above
choice) is the correct decision?

Jason

On Wed, Dec 10, 2014 at 10:54 PM, Jason Resch jasonre...@gmail.com wrote:

 How boring though, that everyone agrees with one-boxing...

 Jason


 On Wed, Dec 10, 2014 at 10:53 PM, Jason Resch jasonre...@gmail.com
 wrote:

 Telmo,

 Very creative solution! I think you may have been the first to out-smart
 the super-intelligence. Although would you risk $1,000,000 to gain the
 extra $1,000 on the belief that the super intelligence hasn't figured out a
 way to predict or account for collapse?  QM could always be wrong of
 course, or maybe the super intelligence knows we're in a simulation and has
 reverse engineered the state of the pseudorandom number generator used to
 give the appearance of collapse/splitting. :-)

 Jason


 On Wed, Dec 10, 2014 at 10:59 AM, Telmo Menezes te...@telmomenezes.com
 wrote:



 On Wed, Dec 10, 2014 at 9:55 AM, Jason Resch jasonre...@gmail.com
 wrote:

 I started quite a lively debate at work recently by bringing up
 Newcomb's Paradox. We debated topics ranging from the prisoner's dilemma to
 the halting problem, from free will to retro causality, from first person
 indeterminacy to Godel's incompleteness.

 My colleagues were about evenly split between one-boxing and
 two-boxing, and I was curious if there would be any more consensus among
 the members of this list. If you're unfamiliar with the problem there are
 descriptions here:

 http://www.philosophyexperiments.com/newcomb/
 http://en.wikipedia.org/wiki/Newcomb%27s_paradox

 If you reach a decision, please reply with whether your strategy would
 be to take one box or two, what assumptions you make, and why you think
 your strategy is best. I don't want to bias the results so I'll provide my
 answer in a follow-up post.


 Employ a quantum noise source to generate a random decision. With it,
 generate a very slightly unbalanced coin flip. Use it to decide on one box
 vs. two boxes. Give one box a very slight advantage. The only rational
 choice for the oracle is to bet on one box. You get $1 million with a
 probability slightly above 0.5, or the full $1,001,000 with a probability
 slightly below 0.5.

 Telmo.




 Jason










Re: Newcomb's Paradox

2014-12-10 Thread Stathis Papaioannou
On Thursday, December 11, 2014, LizR lizj...@gmail.com wrote:

 Maybe it's a delayed choice experiment and retroactively collapses the
 wave function, so your choice actually *does* determine the contents of
 the boxes.

 (Just a thought...maybe the second box has a cat in it...)

 No such trickery is required. Consider the experiment where the subject is
a computer program and the clairvoyant is you, with the program's source
code and inputs. You will always know exactly what the program will do by
running it, including all its deliberations. If it is the sort of program
that decides to choose both boxes it will lose the million dollars. The
question of whether it *ought to* choose both boxes or one is meaningless
if it is a deterministic program, and the paradox arises from failing to
understand this.
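
A minimal sketch of that setup (the agent function here is a hypothetical
stand-in for the deterministic program):

# The "clairvoyant" just runs the deterministic program in advance and fills
# the boxes according to what that dry run returned.
def agent(opaque_amount, visible_amount):
    # A hypothetical deterministic policy; this one happens to one-box.
    return ["opaque"]

def run_experiment(program):
    prediction = program(opaque_amount=None, visible_amount=1_000)  # simulate first
    opaque = 1_000_000 if prediction == ["opaque"] else 0           # fill the boxes accordingly
    choice = program(opaque_amount=opaque, visible_amount=1_000)    # the "real" run is identical
    return opaque * ("opaque" in choice) + 1_000 * ("visible" in choice)

print(run_experiment(agent))  # 1000000; a program returning ["opaque", "visible"] would get only 1000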



Re: Newcomb's Paradox

2014-12-10 Thread meekerdb

On 12/10/2014 9:02 PM, Jason Resch wrote:
We also developed an analogous version of the Newcomb's paradox, but couched in the form 
of the prisoner's dilemma:


If you were forced to play in the prisoners dilemma against yourself (in a fully 
deterministic setting such as with both of your minds uploaded to a computer), would you 
defect or cooperate (assuming you're playing completely selfishly with no regard for 
your opponent)? In classic prisoner's dilemma defecting is always better than 
cooperating, because it's a better choice when your opponent defects, and it is a better 
choice when your opponent cooperates. (Just like some decision theories say its always 
better to take two boxes, because no matter what is in the opaque box, you get an extra 
$1,000 on top). However, in this situation (when playing against a deterministic copy of 
yourself) your choice is correlated to (though not physically / causally related) to the 
choice made by your opponent. So those who one-box are more apt to say co-operation is 
better than defecting in this case, since the Defect/Cooperate, and Cooperate/Defect 
outcomes are no longer possible. -- Just as with an accurate predictor, getting $0 or 
getting $1,001,000 is not possible.


Is there anyone here who thinks two-boxing (or defecting in the above choice) is the 
correct decision?


Dunno.  I'll run my simulation and find out.

Brent



Re: Newcomb's paradox (with terrestrial predictor)

2002-07-27 Thread Bruno Marchal

At 11:35 +0200 25/07/2002, Bruno Marchal wrote:
My opinion: given the hypothesis that the predictor is good, I think
Irene makes the right choice, in both this version and the traditional one.
In real life, though, I would be doubtful that such a predictor can exist,
so I am not sure there is any pragmatic content in those stories.


So let us make the predictor more terrestrial. Let us
suppose, as a first step, that he is lazy, fallible, and naive.
(He = she/he/it...).

Lazy:
He *can* predict, but he does not want to, given the hard work involved.

Fallible:
He can be wrong!

Naive:
Because his/her strategy is just to ask you whether you are a causalist, like
Rachel, or an evidentialist, like Irene. He is naive because he believes you.
So if you say you are an evidentialist, he puts $1M in each box.
If you say you are a causalist, he puts nothing in the boxes.

Here too Irene has no problem: she says 'I am an evidentialist' and wins
$1M by taking one box. She trusts the predictor as in the infallible-predictor
case, but here the predictor trusts her too.

Rachel can say 'I'm a causalist' and then take the two boxes, and win nothing
(a bad use of honesty, I would think).
Rachel can say 'I'm a causalist' and then take one box, and win nothing either.
But perhaps she enjoys that, because it looks as if she wanted to deceive the predictor!
Rachel can say 'I'm an evidentialist' and then take the two boxes. She wins
a lot! But she definitely loses the predictor's trust. Next time he will do
the hard predictive work, or maybe just buy a lie-detector machine
(he's very lazy!).
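
A compact enumeration of those four cases (my reading of the payoffs described
above, with the naive predictor simply believing the declaration):

# Naive predictor: an "evidentialist" declaration gets $1M placed in each box,
# a "causalist" declaration gets nothing placed in either box.
def payoff(declared, boxes_taken):
    per_box = 1_000_000 if declared == "evidentialist" else 0
    return per_box * boxes_taken

print(payoff("evidentialist", 1))  # Irene, honest, one box: $1M, and the trust survives
print(payoff("causalist", 2))      # Rachel, honest, two boxes: $0
print(payoff("causalist", 1))      # Rachel, honest, one box: $0
print(payoff("evidentialist", 2))  # Rachel, lying, two boxes: $2M, but the trust is gone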

Isn't Irene a long way from a cooperative game?

?

Bruno











Re: Newcomb's paradox

2002-07-25 Thread Bruno Marchal

At 15:49 -0700 23/07/2002, Hal Finney wrote:


I took the liberty of copying a few paragraphs from James Joyce's
book describing the causalist argument in Newcomb's Paradox.  This is
the best statement of the argument for taking both boxes that I have
seen.  I also included a short response of my own, which describes an
alternate way of viewing the paradox based on multiverse models.
It is at http://www.finney.org/~hal/Newcomb.html.




Why so much fuss over just getting $1,000 more than $1,000,000?
I would take box A, leaving the $1,000 in box B as a tip for
the predictor :-)

A more symmetric version of the paradox prevents this joke.
The predictor gives you the choice between taking the two boxes or just
one, whichever you like. The predictor again says that he can predict what
you will do: if you are going to take the two boxes, he will put nothing in
them, and if you take just one box, he puts $1M in each box.

This version makes it possible to reason in terms of Einstein's elements of
reality (which he introduced in the famous EPR paper).

Einstein defines an element of reality by the rule: if I can predict
with certainty the outcome of an experiment I could do (but will not do),
then the outcome I would have found corresponds to an element of reality.

The rational causalist Rachel reasons as follows: the predictor is
very good (by hypothesis), so by taking one box I will gain $1M.
Now this does not depend on which box I choose. So I can predict with
certainty that if I take only box A, I will find $1M inside, and
similarly I can predict that if I take only box B, I will find $1M inside
too. As in EPR, the causalist Rachel assumes some form of locality
(no action at a distance, no action in the past, ...). This gives a
proof that the element of reality 'this box contains $1M' holds for each box.
So she decides to take the two boxes. And because the predictor is
indeed good, she wins $0!

The irrational (?) evidentialist Irene reasons as follows: the predictor
is very good (by hypothesis), so by taking just one box I will gain $1M,
whichever box I choose! So let me choose just one box.
And she goes away with her box, and she wins $1M. Rachel comes by and asks
her whether she (Irene) realizes that she has just abandoned $1M. 'No,' said
Irene, 'the predictor would have predicted that I would take the two boxes,
and he would have put nothing in them. And I would have won $0. Why not take
seriously the hypothesis that the predictor is good?' 'But still, you know
there is $1M left in the other box,' said Rachel. 'Yes,' Irene said, 'but only
because the predictor knew I would take only one box. Now nothing prevents you
from taking the other box.' So thanks to irrational Irene, both of them
win $1M. And then they marry and live a very happy life! The kind of
life you can get when you manage to handle both cause and evidence, a subtle
mixture of reason and madness perhaps!
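
For reference, a small sketch of the payoffs in this symmetric version, under
the stated hypothesis that the predictor is perfect:

# Planning to take one box means both boxes were filled with $1M;
# planning to take both means both boxes are empty.
def payoff(takes_both_boxes):
    per_box = 0 if takes_both_boxes else 1_000_000
    boxes = 2 if takes_both_boxes else 1
    return per_box * boxes

print(payoff(True))   # Rachel: $0
print(payoff(False))  # Irene: $1,000,000 (with another full box left over for Rachel)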

My opinion: given the hypothesis that the predictor is good, I think
Irene makes the right choice, in both this version and the traditional one.
In real life, though, I would be doubtful that such a predictor can exist,
so I am not sure there is any pragmatic content in those stories.
I also think Hal makes an interesting point, giving perhaps the first (to
my knowledge) rational argument for a role of consciousness/free will in
collapsing the wave packet (*), or for consciousness deciding the outcome of a
quantum experiment, or, in this version, for consciousness making up
elements of reality (all of which perhaps deserves to be more deeply
scrutinized).
The weakness of the argument comes from the predictor hypothesis itself.

Bruno

(*) Still in the Everett sense, not in the Copenhagen sense.




Re: Newcomb's paradox

2002-07-24 Thread Saibal Mitra

The very act of predicting what you will choose is equivalent to generating
you virtually and observing what box you will choose. So, when you stand in
front of the two boxes, you don't know if you are in the real world or in
the virtual world. The causal argument is thus invalid.

The only way to beat the (imperfect) experimenter is to try to guess whether
you are in the real world or in the virtual world, choosing A if you think
you are in the virtual world and both A and B if you think you are in the
real world. If the probability that your guess is right is p, then this
strategy yields on average p*(10^3 + 10^6) dollars. So you need to be
more than 99.9% sure about your whereabouts.
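
As a sanity check on that 99.9% figure (comparing against simply taking box A
every time, which, given a good predictor, is worth $1,000,000):

# The guessing strategy yields, on average, p * (10**6 + 10**3) dollars.
def guessing_ev(p):
    return p * (1_000_000 + 1_000)

for p in (0.99, 0.999, 0.9991):
    verdict = "beats" if guessing_ev(p) > 1_000_000 else "does not beat"
    print(p, guessing_ev(p), verdict, "always taking box A")
# It pays off only once p > 10**6 / (10**6 + 10**3), i.e. roughly 0.999.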

This suggests that a perfect simulation of someone's brain generates a
virtual reality with a relative measure of 50%.

Saibal



- Original Message -
From: Hal Finney [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, 23 July 2002 23:49
Subject: Newcomb's paradox


 I took the liberty of copying a few paragraphs from James Joyce's
 book describing the causalist argument in Newcomb's Paradox.  This is
 the best statement of the argument for taking both boxes that I have
 seen.  I also included a short response of my own, which describes an
 alternate way of viewing the paradox based on multiverse models.
 It is at http://www.finney.org/~hal/Newcomb.html.

 Hal Finney







Newcomb's paradox

2002-07-23 Thread Hal Finney

I took the liberty of copying a few paragraphs from James Joyce's
book describing the causalist argument in Newcomb's Paradox.  This is
the best statement of the argument for taking both boxes that I have
seen.  I also included a short response of my own, which describes an
alternate way of viewing the paradox based on multiverse models.
It is at http://www.finney.org/~hal/Newcomb.html.

Hal Finney




Re: Newcomb's paradox

2002-07-23 Thread George Levy





Hal Finney wrote:
 I took the liberty of copying a few paragraphs from James Joyce's
 book describing the causalist argument in Newcomb's Paradox.  This is
 the best statement of the argument for taking both boxes that I have
 seen.  I also included a short response of my own, which describes an
 alternate way of viewing the paradox based on multiverse models.
 It is at http://www.finney.org/~hal/Newcomb.html.

 Hal Finney
  
In my opinion, both the evidentialist argument and the causalist argument are faulty.
  
  
First let me say that there is no paradox from the experimenter's point of
view. He is so smart and knows your mind so well that he can make an accurate,
deterministic prediction of your decision in the test. One could compare
the experimenter to a very smart programmer and the subject to an AI system
that the programmer has written. Clearly, if the programmer knows
every line of code, has performed several a priori simulations, and has had
many debugging sessions with the system, he knows exactly
how the AI system will behave in the experimental situation. He can therefore
confidently tell the system that he knows how it
will react in the Newcomb experiment.
  
Therefore, the only apparent paradox is from the point of view of the subject
of the experiment (or from the point of view of the program). The paradox
illustrates several things:
1) Causality is an illusion that depends on the state of mind of the observer.
The Newcomb experimenter does not perceive any violation of the causal order.
His world, including the subject of the experiment, is purely deterministic.
Yet the Newcomb subject is faced with an apparent violation of the temporal
causal order. The behavior of the experimenter is inconsistent from the
subject's perspective, according to the set of axioms and rules governing
the subject's mind.
2) Free will also depends on the frame of mind of the observer. In the
programmer/program analogy, it is clear that the program has no free will.
Its operation is purely deterministic.
3) Even consciousness is questionable. Is the AI program conscious? According
to whom? To the AI program itself? Yes! To the programmer? No!
  
What would I do if I were the subject of the experiment? The answer is that
I wouldn't really care one way or the other about picking one box or two,
because I would then know that the world is inhabited by a super-being and
that he is the one who really calls the shots. I could actually refuse to
play, just to prove that I have free will; that is an outcome the experimenter
would not have predicted. Having free will would definitely be more important
to me than a million dollars. If I were the program, I would make sure that
the programmer still has a lot of debugging to do on me! :-)
  
George