Rolf Haenni wrote:
> To summarize, if nothing is known about Q (not even whether an independent
> prior probability exists)
What in the world does it mean to ask whether "an independent prior probability
exists"?
Please explain.
> KEVIN S. VAN HORN wrote:
> >...regardless of the value of P(Q), we know from 0 <= P(Q) <= 1 that 0.8 <=
> >P(R) <= 0.9.
> [...]
> >Again, Haenni's theory is losing information by giving unnecessarily
> >loose bounds.
>
> ==> or should we say, YOU are ADDING information??? :-)
I gave a proof. If you disagree with the conclusions, then please either point
out the error in the proof or identify which of its assumptions you disagree
with. Please also note that I phrased the proof in terms that should
be acceptable even to a frequentist who believes that assigning probabilities to
non-repeatable events (such as the existence of God) is the gravest of
heresies. Anticipating your objection to using a prior over Q, I gave one proof
using no prior over Q (assuming that Q is non-repeatable), and a separate proof
using a prior over Q (assuming that it is repeatable).
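To make the arithmetic concrete, here is a minimal sketch of the kind of
computation I mean. The conditional probabilities below are stand-ins chosen
only so that they reproduce the quoted bounds; the structure of the argument is
what matters, not the particular numbers.

# Bounds on P(R) from the law of total probability,
#   P(R) = P(R|Q)*P(Q) + P(R|~Q)*(1 - P(Q)),
# when nothing is known about P(Q) beyond 0 <= P(Q) <= 1.
# The conditionals are hypothetical values consistent with the
# quoted bounds 0.8 <= P(R) <= 0.9.

p_r_given_q = 0.9      # hypothetical P(R | Q)
p_r_given_not_q = 0.8  # hypothetical P(R | ~Q)

def p_r(p_q):
    """P(R) as a function of the unknown P(Q)."""
    return p_r_given_q * p_q + p_r_given_not_q * (1.0 - p_q)

# P(R) is linear (hence monotone) in P(Q), so its extremes over
# 0 <= P(Q) <= 1 occur at the endpoints P(Q) = 0 and P(Q) = 1.
lower = min(p_r(0.0), p_r(1.0))
upper = max(p_r(0.0), p_r(1.0))
print(lower, upper)    # -> 0.8 0.9

Whatever value P(Q) takes, P(R) stays inside [0.8, 0.9]. Nothing about Q is
being added; only the constraint 0 <= P(Q) <= 1 is being used.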
The real problem is that I am computing bounds on P(R), and you are computing
bounds on the probability that R is provable. We are addressing two entirely
different questions. However, I would argue that the question I am addressing
is much more relevant to solving real-world problems. It certainly fits in
better with the way people naturally think when using common-sense reasoning.
When deciding whether to go out for a picnic, do people ask themselves how
likely it is that it can be *proven* that it will rain or not rain? No, they
just make some judgment as to how likely it is to rain. Unless I'm trying to
guess the outcome of a trial, what practical use is there in knowing the
respective probabilities that R and (not R) can be *proven* true, when that
need isn't better served by simply evaluating the probability of R itself, or
the best bounds on that probability I can obtain?
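To illustrate the difference with a toy contrast of my own (this is my
construction, not Haenni's formalism, and the numbers are hypothetical):
suppose the only assumption from which R can be derived is A, which entails R
and holds with probability 0.6, and there is no assumption from which (not R)
can be derived.

# Toy contrast between "probability that R is provable" and P(R).
# My construction with hypothetical numbers, not Haenni's formalism.

p_a = 0.6              # probability of assumption A, which entails R
p_r_given_not_a = 0.3  # P(R | ~A): R may still hold even when A fails

p_r_provable = p_a     # R is derivable only when A holds
p_r_actual = p_a * 1.0 + (1.0 - p_a) * p_r_given_not_a

print(p_r_provable, p_r_actual)   # -> 0.6 and about 0.72

Under these assumptions the provability number 0.6 is only a lower bound on
P(R). The quantity I actually care about when deciding what to do is P(R)
itself, or the tightest bounds on it I can get.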