In article <[EMAIL PROTECTED]>,
Jerry Dallal  <[EMAIL PROTECTED]> wrote:
>Herman Rubin wrote:

>> If you believe in fixed level testing, you are following
>> what is essentially a religious superstition.  

>I've heard a lot of people say this. There's religion and then
>there's fundamentalism.  It's one thing to say that significance
>tests have to be used properly.  It's another to say they should be
>abandoned.

Where have I ever said they have to be abandoned?  The
real question is whether one should act as if the null
hypothesis is close enough to the truth.

>It seems they've served us quite well for nearly a
>century.

Have they?  This is highly questionable.

>So did alchemy, I suppose.  Still, the way they are used
>in NEJM or JAMA doesn't give me too many sleepless nights.  In fact,
>if I had sleepless nights, I might look at NEJM or JAMA and use
>significance tests to *help* me choose a treatment.

I suggest you look a little more carefully.  You would
probably use more than the significance test.  

>> As it is
>> rarely possible to be sure of anything important from
>> data, what needs to be done is to balance the various
>> consequences of errors.  

>In theory, but what about in practice?  Where are the scores of
>decision theoretic analyses that have exposed the harm done by
>significance tests?

Many have appeared.

>I have the sense that just about any school of
>thought--whether frequentist, Bayesian, empirical Bayesian,
>likelihood, decision theoretic--if practiced properly will lead most
>people to the same place.

In many cases, this is so.  I also consider the reckless use
of convenient priors to be very dangerous, and using a
"non-informative" prior is analogous to using p values.
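
To make that analogy concrete, here is a minimal numerical sketch.
The setup (normal mean with known variance, one-sided null, flat
prior) and all the numbers are illustrative assumptions, not part
of the exchange; in this standard case the one-sided p value and
the flat-prior posterior probability of the null coincide.

    # Sketch of the flat-prior / p-value correspondence (assumed setup,
    # not from the original post): X_1..X_n ~ N(theta, sigma^2) with
    # sigma known, testing H0: theta <= 0 against H1: theta > 0.
    import numpy as np
    from scipy.stats import norm

    sigma, n = 1.0, 25
    xbar = 0.4                       # hypothetical observed sample mean
    z = xbar * np.sqrt(n) / sigma    # standardized test statistic

    # Classical one-sided p value.
    p_value = 1 - norm.cdf(z)

    # Posterior P(theta <= 0 | data) under a flat (improper) prior:
    # theta | data ~ N(xbar, sigma^2 / n), so the two quantities agree.
    posterior_null = norm.cdf((0 - xbar) / (sigma / np.sqrt(n)))

    print(p_value, posterior_null)   # both about 0.0228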

>A slavish devotion to significance tests
>seems no different from the simplistic choice of prior distributions
>or a decision-theoretic analysis that makes faulty assessments of
>important courses of action and/or their consequences.

See the above.  As for decision-theoretic analyses, it
is not the statistician's values which should be used.
The statistician should point out which aspects of the
assumptions are important, and one which is NOT of much
importance at all is the prior probability that the 
null, whether point or not, is correct.

The biggest problem with using a decision approach is that
the consumer keeps interjecting ideas like "statistical
significance" or "p values" or "confidence intervals",
which are totally opposed to the idea of reasonable
decision making under uncertainty.  Two-action decision
problems are testing problems, and interval estimation
may well be needed.  
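
As a minimal sketch of what balancing consequences means in a
two-action problem (the losses, the conjugate normal model, and the
numbers below are hypothetical, chosen only for illustration): the
Bayes rule compares posterior expected losses, so the effective
cutoff comes from the relative costs of the two errors, not from a
conventional 5% level.

    # Two-action decision sketch (illustrative assumptions, not from the
    # post): normal mean theta with known sigma, flat prior, and simple
    # losses for acting as if the wrong hypothesis were true.
    import numpy as np
    from scipy.stats import norm

    sigma, n = 1.0, 25
    xbar = 0.3                                        # hypothetical sample mean
    post = norm(loc=xbar, scale=sigma / np.sqrt(n))   # posterior for theta

    L_act_as_null = 10.0  # hypothetical loss: act as if theta <= 0 when theta > 0
    L_act_as_alt  = 1.0   # hypothetical loss: act as if theta > 0 when theta <= 0

    p_alt = 1 - post.cdf(0)           # posterior probability that theta > 0

    # Posterior expected loss of each action; the Bayes rule takes the smaller.
    risk_null = L_act_as_null * p_alt
    risk_alt  = L_act_as_alt * (1 - p_alt)

    action = "act as if theta > 0" if risk_alt < risk_null else "act as if theta <= 0"
    print(p_alt, risk_null, risk_alt, action)
    # The implied cutoff on p_alt is L_act_as_alt / (L_act_as_null + L_act_as_alt),
    # i.e., it is set by the consequences of the errors, not by a fixed level.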

>I use P values.


-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558