On 12 Dec 2014, at 02:22, Jason Resch wrote:



On Thu, Dec 11, 2014 at 3:10 PM, LizR <[email protected]> wrote:
On 11 December 2014 at 18:59, Stathis Papaioannou <[email protected]> wrote:

On Thursday, December 11, 2014, LizR <[email protected]> wrote:
Maybe it's a delayed choice experiment and retroactively collapses the wave function, so your choice actually does determine the contents of the boxes.

(Just a thought...maybe the second box has a cat in it...)

No such trickery is required. Consider the experiment where the subject is a computer program and the clairvoyant is you, with the program's source code and inputs. You will always know exactly what the program will do by running it, including all its deliberations. If it is the sort of program that decides to choose both boxes it will lose the million dollars. The question of whether it *ought to* choose both boxes or one is meaningless if it is a deterministic program, and the paradox arises from failing to understand this.

Not trickery, how dare you?! An attempt to give a meaningful answer which actually makes something worthwhile from what appears to be a trivial "paradox" without any real teeth.

But OK since you are determined to belittle my efforts, let's try your approach.

    import time

    time.sleep(10)  # wait 10 seconds
    print("after careful consideration, I have decided to open both boxes")

This is what ANY deterministic computer programme (with no added random inputs) would boil down to, although millions of lines of code might take a while to analyse, and the simplest way to find out the answer in practice might be to run it (but each run would give the same result, so once it's been run once we can replace it with my simpler version).
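The point can be sketched in Python (the function name and the hundred-run check are purely illustrative):

```python
def deliberating_chooser():
    # Stands in for millions of lines of deterministic deliberation,
    # with no random or external inputs.
    options = ["one box only", "both boxes"]
    return options[1]

# The clairvoyant simply runs the program once; since it is deterministic,
# every later run must give the same answer, so the whole program can be
# replaced by a constant.
first_run = deliberating_chooser()
assert all(deliberating_chooser() == first_run for _ in range(100))
print("after careful consideration, I have decided to open " + first_run)
```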

I have to admit I can't see where the paradox is, or why there is any interest in discussing it.


It's probably not a true paradox, but the reason it seems like one is that different versions of decision theory lead you to two opposite conclusions.


Yes, I think that initially it was a test to see if you believe in "free-will". Here I add the quotes for a reason I will explain later.

The idea is that those who take the two boxes believe in "free-will", because they believe that if they decide to take only box B, then there is money in both boxes, and they would be leaving money on the table for nothing. They believe that somehow they can fool the predictor: if it was correct, then the two boxes are full of money, so why not take them both!

But on this list I guess most people have no problem with determinism, and conceive of a predictor, outside the box, capable of taking into account the reasoning above. So we take one box.
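Under determinism with a perfect predictor, the payoffs work out as follows (a sketch using the usual hypothetical amounts: $1,000 visible in box A, $1,000,000 possibly in opaque box B):

```python
def payoff(strategy):
    # The predictor is assumed perfect: it fills the opaque box B
    # exactly when it predicts the chooser will take only box B.
    box_a = 1_000
    box_b = 1_000_000 if strategy == "one-box" else 0
    return box_b if strategy == "one-box" else box_a + box_b

assert payoff("one-box") == 1_000_000
assert payoff("two-box") == 1_000
```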




About half of people think one-boxing is best, and the other half think two-boxing is best, and more often than not, people from either side think people on the other side are idiots.

It usually mirrors well belief in "free-will", indeed in the sense that John Clark mocks a lot, and here I agree with him. That notion of free-will negates determinacy, and holds that our decisions are not predictable.

That notion of free-will can be refuted. It really makes no sense. But the weaker notion of free-will, which is that our decisions are not predictable *by ourselves*, still makes sense. If we could find a way to predict ourselves, we would be cured of hesitation and doubt, but computationalism entails, by computer science, that there is no complete cure or vaccination against doubt and hesitation. Even inconsistency does not prevent you from doubting; only unsoundness (insanity) can, in theory.
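The impossibility of complete self-prediction has the familiar diagonal shape; a toy sketch (the `predict` interface is hypothetical, the argument is the classical diagonal one):

```python
def contrarian(predict):
    # Consults a would-be predictor of its own behaviour,
    # then does the opposite of the forecast.
    forecast = predict(contrarian)
    return "two-box" if forecast == "one-box" else "one-box"

def any_fixed_predictor(program):
    # Whatever fixed forecast a predictor commits to...
    return "one-box"

# ...the contrarian program falsifies it.
assert contrarian(any_fixed_predictor) != any_fixed_predictor(contrarian)
```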




However, for whatever reason, everyone on this list seems to agree one-boxing is best, so you are missing out on the interesting discussions that can arise from seeing people justify their alternate decision.

You might try to see if what I say above is corroborated. Believers in strong (say) free-will choose two-boxing, and non-believers in that strong free-will (like most people here, I guess) choose one-boxing.




Often two-boxers will say: the predictor's already made his decision, what you decide now can't change the past or alter what's already been done. So you're just leaving money on the table by not taking both boxes. An interesting twist one two-boxer told me was: what would you do if both boxes were transparent, and how does that additional information change what the best choice is?

You are right with the analogy that it is a form of the surprise examination paradox. In this case the teacher says just "today I will do a surprise examination".

The more you take the teacher seriously, the more you make him inconsistent, even insane if you push a bit.

It is a bit like "I predict that tomorrow you will act in a manner that makes this prediction wrong".

Smullyan is correct in seeing a relationship between the examination paradox and self-referential sentences of the type

p <-> ~[]p  (equivalent to <>t = ~[]f, i.e. consistency),
p <-> []~p  (equivalent to []f, i.e. inconsistency),
p <-> []p   (equivalent to t, truth),
p <-> <>p   (equivalent to f, falsity).
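These equivalences can be sanity-checked on a tiny finite Kripke frame (a hypothetical two-world chain 0 -> 1, transitive and irreflexive; an illustration, not a proof about GL):

```python
worlds = {0, 1}
succ = {0: {1}, 1: set()}   # accessibility: 0 -> 1, world 1 is terminal

def box(s):
    # []s holds where every accessible world satisfies s
    return {w for w in worlds if succ[w] <= s}

def dia(s):
    # <>s holds where some accessible world satisfies s
    return {w for w in worlds if succ[w] & s}

top, bot = worlds, set()

# <>t = ~[]f ("consistency") solves p <-> ~[]p:
p = dia(top)
assert p == worlds - box(bot)
assert p == worlds - box(p)

# []f ("inconsistency") solves p <-> []~p:
q = box(bot)
assert q == box(worlds - q)
```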

If the box (the machine's provability) were knowledge (with []p -> p), this would lead to falsity, frank contradictions. But with provability (a belief predicate of ideal rational machines) you get those basic fixed points. You can read them as follows:

A machine which asserts the non justifiability of its own beliefs is equivalent to a consistent machine. A machine which asserts the refutability of its own beliefs is equivalent to an inconsistent machine.

A machine which asserts the provability of its own beliefs is equivalent to a correct machine. A machine which asserts the consistency of its own beliefs is equivalent to an incorrect machine.

That introduces the intensional nuances which make Theaetetus' definition of knowledge, and other variants, work again, and in a way which prevents any machine from confusing her soul/experience with a body/description. Incompleteness refutes Socrates' refutation of Theaetetus. From the first-person point of view of a machine, she is NOT a machine. It is coherent with the idea that free-will is not indeterminacy, but self-indeterminacy. The machine can indeed refute all attempts to make her into a machine, or to predict her behavior in a communicable and communicated way. Emil Post saw this already in 1923. Now, the machine can bet that she is a machine at some description level, and study the consequences. It is a trip *near* inconsistency.

You can see this approach as rough and naive, but it already generates many counter-examples in philosophy, and delineates many possible confusions between the modalities, like truth, belief, knowable, observable, etc.

In practice, we have a non-monotonic layer, so we can eliminate old beliefs and revise or update them in the face of new observations. Here the theory above is local. It remains correct as far as we are correct, but the extensional content of [] and <> changes all the time.

Bruno



Jason

--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

http://iridia.ulb.ac.be/~marchal/


