In my current way of thinking, the disagreement between Alan Grayson and 
John K. Clark is about two subtly different concepts under the same name, 
"probability". For example, when I read "80% chance of rain today", I may 
think that in some possible futures it will not rain (so probability is 
meaningless), yet I feel an instinctive urge for protection from bad 
weather, so I take my umbrella. We are programmed to act this way by 
Darwinian selection -- but it is a different matter to claim that QM 
(without collapse) assigns a probability to each possible outcome, so that 
we are then rationally obliged to apply Maximisation of Expected Utility. I 
grant the former but not the latter.

Part of the trouble is that serious philosophical issues about probability 
are still debated, so that there are traps for anyone who deals with these 
things. Here is an example.

> [...] until Alan Grayson sees the end of the race, or somebody tells Alan 
Grayson about it, Alan Grayson can't be certain what world Alan Grayson is 
in. Alan Grayson could be in a world where horse X won or Alan Grayson 
could be in a world where horse Y won, until Alan Grayson receives more 
information Alan Grayson would have to say the odds are 50-50.

If you mean that on sheer ignorance the odds are 50-50, some clarification 
is needed. Strictly speaking, zero information implies "undefined 
probability", or "imprecise probability between 0 and 1". The reason it is 
commonly mistaken for 50-50 is an implied strategy: flipping a coin in case 
of ignorance. But then the odds are those of the coin, not of the object of 
the bet. (This strategy works only if the agent is free to choose which 
side of the bet she underwrites.)
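To make that point concrete, here is a small sketch of my own (a hypothetical 
illustration, not part of the argument): an agent who is completely ignorant 
of the true chance p of horse X winning, and who flips a fair coin to choose 
which side of the bet to take, wins about half the time whatever p actually 
is. The 50-50 belongs to the coin, not to the race.

```python
import random

def coin_flip_bettor(p_x, n=100_000, seed=0):
    """An agent ignorant of p_x flips a fair coin to pick a side of the
    bet. Returns her winning frequency, which is ~0.5 regardless of p_x."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        bet_on_x = rng.random() < 0.5   # fair coin chooses the side
        x_wins = rng.random() < p_x     # the unknown true chance
        wins += (bet_on_x == x_wins)
    return wins / n

# Whatever the true chance of X, the bettor wins about half the time.
for p in (0.1, 0.5, 0.9):
    print(p, round(coin_flip_bettor(p), 3))
```

Note that this only works because the agent is free to pick either side; it 
says nothing about the odds of the race itself.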

For the instrumentalists among us (glad to have you, BTW): the question of 
interest to me is not about which way is best to derive probability from QM 
-- that would be a pointless discussion, I agree! The question is whether 
all of them beg the question, so that we have to think of a rational 
decision theory without probability.

Although Everett's argument (for which I have proposed an improvement) 
grants that in the long run (that is, for large samples) the Born Rule is 
practically certain to hold, this is not technically the same as a 
probability for each single outcome -- though I admit that it works the 
same way, triggering an instinctive impulse. But for a RATIONAL decision 
theory this probability is not granted, IMO.
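To illustrate the long-run statement only (my own toy simulation, not 
Everett's derivation -- and note it assumes sampling with Born weights, 
which is exactly the single-case step in dispute): if outcomes are drawn 
with weights |a_i|^2, the relative frequencies are practically certain to 
approach those weights in large samples, even though nothing here fixes 
what any single next outcome will be.

```python
import math
import random

def born_frequencies(amplitudes, n=200_000, seed=1):
    """Draw n outcomes with Born weights |a_i|^2 (illustrative only)
    and return the observed relative frequencies."""
    rng = random.Random(seed)
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    probs = [w / total for w in weights]
    counts = [0] * len(probs)
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                counts[i] += 1
                break
    return [c / n for c in counts]

# Amplitudes 1 and 1j*sqrt(3) give Born weights 0.25 and 0.75; the
# large-sample frequencies come out near those weights, yet the next
# single outcome of any run remains undetermined.
print(born_frequencies([1, 1j * math.sqrt(3)]))
```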

I can give examples of a decision theory without probability, but they 
would dilute the focus of this message.

George K.

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/b10325e2-03ae-4e2f-bc4b-9e144ef989d7n%40googlegroups.com.
