Ronald Parr recently made some comments about self-reinforcing hypotheses.
We can prove mathematically that there is no such thing as a
self-reinforcing hypothesis, if by this we mean a hypothesis whose
posterior probability can only increase, and never decrease, regardless
of the evidence. To see this, suppose that finding that D is true
increases the probability that H is true:
P(H | D, X) > P(H | X).
From Bayes' Rule (and the fact that P(D | X) is a weighted average of
P(D | H, X) and P(D | ~H, X)) we then derive that
P(D | H, X) > P(D | ~H, X),
and so
P(~D | H, X) = 1 - P(D | H, X) < 1 - P(D | ~H, X) = P(~D | ~H, X),
which when plugged back into Bayes' Rule gives us
P(H | ~D, X) < P(H | X).
That is, finding out that D is not true *decreases* the probability that
H is true. So there always exists some conceivable evidence that would
decrease the probability of your hypothesis.
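The argument is easy to check numerically. Here is a small sketch (the
prior and likelihoods are made-up numbers; any values with
P(D | H, X) > P(D | ~H, X) would do):

```python
# Check that evidence which can raise a posterior must also be able
# to lower it.  All numbers are arbitrary illustrative choices.
p_H = 0.3             # prior P(H | X)
p_D_given_H = 0.9     # likelihood P(D | H, X)
p_D_given_notH = 0.4  # likelihood P(D | ~H, X)

# Total probability of D, then Bayes' Rule for P(H | D, X)
p_D = p_D_given_H * p_H + p_D_given_notH * (1 - p_H)
p_H_given_D = p_D_given_H * p_H / p_D

# Same computation for the complementary datum ~D
p_notD = 1 - p_D
p_H_given_notD = (1 - p_D_given_H) * p_H / p_notD

assert p_H_given_D > p_H      # observing D raises the posterior...
assert p_H_given_notD < p_H   # ...so observing ~D must lower it
```

Swapping in any other numbers with P(D | H, X) > P(D | ~H, X) preserves
both assertions, which is exactly what the proof above says.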
Now let's look at Ronald Parr's example of a self-reinforcing
hypothesis.
> [Example of a self-reinforcing hypothesis] For example, I might have the
> prior assumption that if the tooth fairy existed, she would predispose
> people to ask questions about the tooth fairy (more than they would be
> predisposed to do otherwise). Now I entertain the hypothesis that the
> tooth fairy exists. The fact that I have asked the question is evidence
> that I am predisposed to ask the question. This increases my posterior
> on the existence of the tooth fairy. Indeed, every time I ask the
> question my posterior increases. This is clearly bogus since I can use
> this line of reasoning to justify anything. The hypothesis, along with
> the other assumptions I have made form a vicious circle.
Let's call the hypothetical "I" of the above problem Joe. (I don't want
my comments to be misconstrued as disparaging Parr.) One problem with
this example is that Joe's questions about the tooth fairy, combined
with the data on the rest of the populace (who very rarely ask questions
about the tooth fairy), provide very little evidence for a *general*
predisposition to ask questions about the tooth fairy. Let's modify the
example to take care of this problem, and suppose that the tooth-fairy
hypothesis T assumes that the tooth fairy has chosen Joe as her prophet
and therefore only predisposes *him* to ask such questions. Do his
questions then increase the probability of T? That depends on what
other hypotheses are considered. If we factor in the obsession
hypothesis O (that Joe has a psychological disorder causing him to
obsess about the tooth fairy), then his questions no longer give much if
any evidence for T. And the amount of evidence provided by each new
question decreases as more questions are asked: after the first few dozen
times Joe asks questions about the tooth fairy, it's pretty well-established
that he has a predisposition for this.
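This saturation effect can be illustrated with a toy model (all the
numbers below, including the hypothesis priors and per-question
probabilities, are invented for illustration):

```python
# Toy model: three hypotheses for why Joe keeps asking about the
# tooth fairy.  Priors and likelihoods are made-up numbers.
#   T: the tooth fairy chose Joe as her prophet
#   O: Joe has an obsession disorder
#   N: neither; questions are rare coincidences
priors = {"T": 0.001, "O": 0.099, "N": 0.9}
p_ask  = {"T": 0.9,   "O": 0.9,   "N": 0.01}  # per-question probability

post = dict(priors)
for _ in range(30):  # Joe asks 30 questions in a row
    # Bayesian update on one more observed question
    unnorm = {h: post[h] * p_ask[h] for h in post}
    z = sum(unnorm.values())
    post = {h: unnorm[h] / z for h in unnorm}

print(post)
```

The questions quickly rule out N ("no predisposition"), but because T
and O explain each question equally well, their posterior ratio never
moves from the prior ratio: once N is effectively eliminated, further
questions provide essentially no additional evidence for T.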
But the interesting thing about this example is that it's a rigged game:
Joe gets to choose which data values he obtains, and can thus avoid any
datum that might decrease the probability of T, neatly sidestepping my
proof. But our model doesn't include this fact! If we were careful to
include *all* the information we had about the problem, we would have to
include a cheating hypothesis (that Joe is asking questions in an
attempt to push up the probability of the T hypothesis).
So it seems that the only way you can get something that might be
described as a self-reinforcing hypothesis is to cheat -- that is, to
leave out relevant information and choose the data values yourself.