Dear Henry,

I am what one might call an empirical Bayesian. We don't need some
individual's priors. We can simply use the flat prior, which one might say
represents the beliefs of an agent who has no knowledge of the domain and
will therefore learn only from the data. Given this, as far as I can tell,
the Bayesian can do everything the Fisherian can. For example, the
Bayesian's probability interval is much like the Fisherian's confidence
interval, with an advantage: we don't usually need the normal
approximation, since we can use the posterior Dirichlet distribution
directly. Bayesian results are similarly comparable for the other things
the Fisherian can do (hypothesis testing, etc.). 
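To make the probability-interval point concrete, here is a minimal sketch (my illustration, not from the letter) for the two-category case, where the posterior Dirichlet reduces to a Beta. With the flat Beta(1, 1) prior, observing some successes gives a Beta posterior, and a central 95% probability interval can be read off by Monte Carlo rather than by a normal approximation. The function name and the 7-of-10 data are hypothetical:

```python
import random

# Sketch: a Bayesian probability interval for a binomial proportion,
# using the flat Beta(1, 1) prior.  After `heads` successes in `n`
# trials, the posterior is Beta(heads + 1, n - heads + 1); we
# approximate its central 95% interval by Monte Carlo sampling
# instead of invoking a normal approximation.

def credible_interval(heads, n, level=0.95, draws=100_000, seed=0):
    rng = random.Random(seed)
    a, b = heads + 1, n - heads + 1          # flat-prior posterior
    samples = sorted(rng.betavariate(a, b) for _ in range(draws))
    lo = samples[int(draws * (1 - level) / 2)]
    hi = samples[int(draws * (1 + level) / 2)]
    return lo, hi

lo, hi = credible_interval(7, 10)
print(f"95% probability interval for p: ({lo:.3f}, {hi:.3f})")
```

With enough draws this converges to the exact Beta quantiles; no appeal to asymptotic normality is needed.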

The main advantage I see of Bayesian statistics is that it apparently can
do things the Fisherian cannot. For example, if I throw 10 straight heads
with a thumbtack, Bayesian methods can compute the probability the next 5
will be heads. I see no comparable Fisherian technique. If I use Bayes'
rule to estimate a relative frequency, Bayesian methods can determine the
variance (and thereby a probability interval) for the inferred relative
frequency from the variances in the ones used in the calculation. I see no
way the Fisherian can compute a confidence interval.
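Rich's thumbtack calculation can be carried out exactly. Under the flat Beta(1, 1) prior, 10 straight heads give a Beta(11, 1) posterior, and Laplace's rule of succession, applied toss by toss, gives the probability of 5 further heads as a telescoping product. This is a sketch of that arithmetic (the function name is my own):

```python
from fractions import Fraction

# After `heads` heads in `n` tosses under a flat Beta(1, 1) prior,
# the chance the next toss is heads is (heads + 1) / (n + 2)
# (Laplace's rule of succession).  Chaining this over k further
# tosses, each assumed to come up heads, gives a telescoping product.

def prob_next_k_heads(heads, n, k):
    p = Fraction(1)
    for i in range(k):
        p *= Fraction(heads + 1 + i, n + 2 + i)
    return p

print(prob_next_k_heads(10, 10, 5))  # 11/16
```

The product (11/12)(12/13)(13/14)(14/15)(15/16) collapses to 11/16, i.e. 0.6875 — a definite answer where, as Rich says, no comparable Fisherian computation presents itself.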

Sincerely,

Rich



At 10:27 AM 7/8/99 -0400, Henry Kyburg wrote:
>Hume was perfectly correct in his argument, but what his argument
>demonstrated was that INduction was NOT DEduction.  The mystery is why, for
>250 years, so many philosophers have nevertheless insisted that induction
>should
>satisfy deductive standards.  Popper is not an exception: for him "logic" is
>deductive logic, and so the logic of induction is no more than the deductive
>logic of refutation.
>
>The latest primrose path is "Bayesianism".  The idea is that though we
>cannot draw categorical conclusions from observational evidence, we CAN
>attribute appropriate probabilities to those conclusions.  The idea rests on
>a confusion.  What is the "conclusion"?  The conclusion is either "Sentence
>S is probable, relative to the evidence (and some other stuff)" or simply
>"S".  If the conclusion is "Sentence S is probable..." then that is a
>DEductive consequence of the evidence and prior probabilities, assuming that
>the probability calculus is just part of mathematics, i.e., part of logic. 
>Bayesianism itself gives us no grounds for accepting a categorical
>conclusion like "S".  (Note that classical statistics does give us such
>grounds (relative to a tolerated level of error): to accept or reject a
>hypothesis, even if it is a statistical hypothesis, is to adopt a
>categorical conclusion S, not to assign a probability to S.)  
>
>What is wrong with Bayesianism?  It depends on prior probabilities; when we
>have justified prior probabilities and we want a posterior probability,
>Bayesianism is just right.  But sometimes we don't have a prior probability,
>and sometimes we want a conclusion concerning frequencies or distributions
>or facts.  The Bayesian is also dependent on a sharp distinction between
>data and hypothesis: the former can, and the latter cannot, be asserted
>without a probability modifier.
>
>Now it could be claimed that all we ever need are posterior probabilities,
>and that probabilities are subjective, so that this is all we CAN do, and to
>do this requires "assumptions" or "prior probabilities".  But there is
>an alternative point of view according to which probabilities are based on
>(not "identified with") frequencies, and according to which induction is
>perfectly possible.
>
>Example: It is a mathematical (set-theoretical) fact that given any property
>P, almost all subsets of a population embody nearly the same proportion of P
>as does the original population.  Given a sample of the population, we will
>rarely go wrong in supposing that the population is similar to the sample in
>its relative frequency of P.  Put more explicitly: unless we have some
>REASON to be skeptical of the sample, we should take it as representative. 
>If we find 20% P's in our sample, we do not conclude "Probably the
>population contains about 20% P's" (though that is true) but categorically,
>"The population contains about 20% P's."  Of course we must be prepared to
>abandon this claim in the face of new evidence; but nobody (sensible) ever
>claimed that induction was incorrigible.
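Kyburg's set-theoretic claim above — that almost all samples embody nearly the population's proportion of P — can be checked with a quick simulation. The population size (1000), sample size (100), tolerance (±10 percentage points), and seed here are illustrative choices of mine, not figures from the letter:

```python
import random

# Simulation of the claim: for a population with 20% P's, almost all
# random samples of moderate size show a proportion of P's close to 20%.

rng = random.Random(42)
population = [1] * 200 + [0] * 800      # 20% have property P
n, trials, tol = 100, 10_000, 0.10

close = sum(
    abs(sum(rng.sample(population, n)) / n - 0.20) <= tol
    for _ in range(trials)
)
print(f"{close / trials:.1%} of samples fall within ±10% of the true 20%")
```

Nearly all sampled proportions land within the tolerance, which is the mathematical fact behind taking an unsuspicious sample as representative.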
>
>The issue depends on whether an inductive logic can be developed, and on
>whether induction is a more efficient procedure for organizing and
>representing knowledge than manipulating probabilities.  I'd say the answer
>to both questions is 'yes', but that's a matter of conjecture, not
>inference.
>
>Some references: Mine: Probability and the Logic of Rational Belief (1961);
>The Logical Foundations of Statistical Inference (1974); A survey, about
>1970: Probability and Inductive Logic; related: Epistemology and Inference,
>1983.  For related material, see Isaac Levi, Gambling with Truth, and also
>his Enterprise of Knowledge.
>
>Cheers,
>
>Henry 
>
>
