Bill Jefferys, <[EMAIL PROTECTED]>, writes:
> >> Ockham's razor is a consequence of probability theory, if you look at
> >> things from a Bayesian POV, as I do.
> This is well known in Bayesian circles as the Bayesian Ockham's
> Razor. A simple discussion is found in the paper that Jim Berger and
> I wrote:
This is an interesting paper; however, it uses a slightly unusual
interpretation of Ockham's Razor. Usually the Razor is stated as saying
that the simpler theory is to be preferred, or, as your paper says, "an
explanation of the facts should be no more complicated than is
necessary." However, the bulk of your paper seems to use a different
definition: the simpler theory is the one that is more easily falsified
and that makes sharper predictions.
I think most people have an intuitive sense of what "simpler" means, and
while being more easily falsified frequently means being simpler, they
aren't exactly the same. It is true that a theory with many parameters
is both more complex and often less easily falsified, because it has more
knobs to tweak to try to match the facts. So the two concepts often do
go together.
But not always. You give the example of a strongly biased coin being
a simpler hypothesis than a fair coin. I don't think that is what
most people mean by "simpler". If anything, the fair coin seems like
a simpler hypothesis (by the common meaning) since a biased coin has a
parameter to tweak, the degree of bias.
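The coin example can be made concrete with a toy Bayes-factor
calculation (my own sketch, not taken from your paper): compare a sharp
"fair coin" hypothesis, which has no free parameters, against a vague
"unknown bias" hypothesis with a uniform prior on the bias. The sharp
hypothesis gets the boost when the data land where it predicted, and
loses when they don't.

```python
from math import comb, gamma

def marginal_fair(k, n):
    """Marginal likelihood of k heads in n flips under the sharp
    hypothesis p = 1/2 (no free parameters, no knobs to tweak)."""
    return comb(n, k) * 0.5 ** n

def marginal_uniform(k, n):
    """Marginal likelihood under a free bias p with a uniform prior:
    integral of C(n,k) p^k (1-p)^(n-k) dp = C(n,k) * B(k+1, n-k+1),
    which works out to 1/(n+1) for every k -- the vague hypothesis
    spreads its probability evenly over all possible outcomes."""
    return comb(n, k) * gamma(k + 1) * gamma(n - k + 1) / gamma(n + 2)

# Data that matches the sharp prediction: 5 heads in 10 flips.
# The sharp, more easily falsified hypothesis wins.
print(marginal_fair(5, 10))     # 0.24609375
print(marginal_uniform(5, 10))  # 0.0909... (= 1/11)

# Data far from the sharp prediction: 9 heads in 10 flips.
# Now the vague hypothesis wins -- the sharp one was falsified.
print(marginal_fair(9, 10))     # 0.009765625
print(marginal_uniform(9, 10))  # still 1/11
```

The point is that the hypothesis with the extra knob pays for its
flexibility by diluting its predictions, which is exactly the boost
mechanism discussed below.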
By equating "simpler" with "more easily falsified" you are able to tie it
into the Bayesian paradigm, which essentially deals with falsifiability.
A more easily falsified theory gets a Bayesian boost when it happens to
be correct, because the observed outcome was a priori unlikely. But I
don't think you
can legitimately say that this is a Bayesian version of Ockham's Razor,
because you have to use this rather specialized definition of simple,
which is more restricted than what people usually mean when they are
talking about simplicity.