In article <013201bfa09d$6c2be970$[EMAIL PROTECTED]>,
Robert Dawson <[EMAIL PROTECTED]> wrote:
>> let's say that today ... we as the statistical community decided, by
>> democratic vote, that the concept of 'hypothesis testing' ... which has
>> essentially dominated statistical work for as long as i can remember
>> (which, .... er um ... is a LOT of years!) ... is relegated to the
>> 'we USED to do this stuff' category
>> just THINK about this ....
>> what would the vast majority of folks who either do inferential work
>> and/or teach it ... DO????
>> what analyses would they be doing? what would they be teaching?
> Everybody would use Bayesian inference. Of course:
>* students would be told in their compulsory intro stats that
> "a posterior probability of 95% or greater is called
> "statistically significant", and we say 'we believe
> the hypothesis'. Anything less than that is called
> "not statistically significant", and we say 'we disbelieve
> the hypothesis'".
Why? What should be done is to use the risk of the procedure,
not the posterior probability. The term "statistically significant"
needs abandoning; what matters is whether the effect is important
enough that it pays to take it into account.
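The decision-theoretic point can be made concrete with a minimal sketch (my illustration, not a formulation from the post): with a loss for acting when there is no effect and a loss for ignoring a real one, the Bayes rule acts when the posterior probability of the effect exceeds a threshold set by the losses, not by a fixed 95%. The function name and loss values below are invented for the example.

```python
def bayes_action(p, loss_false_alarm, loss_miss):
    """Choose the action with smaller posterior expected loss.

    p               -- posterior probability that the effect is real
    loss_false_alarm -- loss of acting when the effect is absent
    loss_miss        -- loss of ignoring the effect when it is present
    """
    expected_loss_act = (1 - p) * loss_false_alarm
    expected_loss_ignore = p * loss_miss
    return "act" if expected_loss_act < expected_loss_ignore else "ignore"

# The same 80% posterior leads to opposite decisions under different losses:
print(bayes_action(0.80, loss_false_alarm=1, loss_miss=10))   # prints "act"
print(bayes_action(0.80, loss_false_alarm=10, loss_miss=1))   # prints "ignore"
```

The implied threshold is loss_false_alarm / (loss_false_alarm + loss_miss), which is why no single cutoff such as 95% can serve every problem.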
>* most elementary textbooks would start out by saying, for a chapter or so
>"Use a prior probability of 50%." Then the better ones would point out that
>one can use other priors, and set exercises running roughly as follows:
It should be pointed out from the beginning that ONLY the
loss-prior combination matters. One aspect of looking at
the problem this way is that the prior probability of the
"null hypothesis" turns out to be unimportant; do not dwell
on it, or even bring it in. If one is testing that the
mean of a normal distribution is 0 against the alternative
that it is not, and the mean has variance 1, the behavior
of the prior beyond 10 (or even beyond 3) has little effect
on the procedure obtained.
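The claim about the prior's tail can be checked numerically. A minimal sketch, under my own reading of the setup (one observation with variance 1, H0: mu = 0 against H1: mu drawn from a standard normal prior, possibly truncated): the Bayes factor barely changes whether the prior is cut off at 3, at 10, or not at all. The function names and grid choices are mine.

```python
import numpy as np

def normal_pdf(x, mean=0.0):
    # density of a normal with standard deviation 1
    return np.exp(-0.5 * (x - mean) ** 2) / np.sqrt(2.0 * np.pi)

def bayes_factor_h0(xbar, cutoff=12.0):
    """Bayes factor for H0: mu = 0 versus H1: mu ~ N(0,1) truncated
    to [-cutoff, cutoff], with a single observation xbar ~ N(mu, 1)."""
    c = min(cutoff, 12.0)            # N(0,1) mass beyond 12 is negligible
    mu = np.linspace(-c, c, 40001)
    w = normal_pdf(mu)
    w /= w.sum()                     # renormalised truncated-prior weights
    marginal = np.sum(normal_pdf(xbar, mu) * w)   # marginal under H1
    return normal_pdf(xbar) / marginal

for xbar in (1.0, 2.0, 3.0):
    print(f"xbar={xbar}: BF(cutoff=3)={bayes_factor_h0(xbar, 3.0):.4f}  "
          f"BF(cutoff=10)={bayes_factor_h0(xbar, 10.0):.4f}")
```

For moderate observations the two columns agree to within a fraction of a percent, in line with the claim that the prior beyond 3 (let alone beyond 10) has little effect on the procedure.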
> "A researcher believes that the prior probability of
> a certain food causing cancer is 61%. She feeds the
> food to a group of 10 mice, ..."
>Then they would go back to using 50% in the next chapter.
I suggest that "equally likely" be almost eliminated from
introductory probability.
>* Editors would similarly use a Bayesian posterior probability of 95% as
>a criterion for publication. 94% doesn't cut it - see above.
Editors will continue to be stupid.
>* Statisticians trying to tell colleagues in other departments that "you
>can't just make up a prior to give the right result" would be told "that's
>how we've always done it in Necrophiliac Studies, it's a long-standing
>convention".
--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN 47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054 FAX: (765)494-0558
===========================================================================
This list is open to everyone. Occasionally, less thoughtful
people send inappropriate messages. Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.
For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===========================================================================