In article <[EMAIL PROTECTED]>,
Thom Baguley  <[EMAIL PROTECTED]> wrote:
>Robert J. MacG. Dawson wrote:

>> [EMAIL PROTECTED] wrote:

>> > In article <[EMAIL PROTECTED]>,
>> >   Jerry Dallal <[EMAIL PROTECTED]> wrote:

>> > > (1) statistical significance usually is unrelated to practical
>> > > importance.

>> > I don't think so. I can think of many examples in which statistical
>> > inference plays an invaluable role in practical applications and
>> > instrumentation, or indeed in any "practical" application of a theory.
>> > Not just in science, but in engineering, e.g. aircraft design, studying
>> > the brain, electrical engineering. Certainly there are examples of
>> > statistical nonsense, e.g. polls, but I wouldn't go so far as to say it
>> > is usually like this.

>>         Chris: That's not what Jerry means. What he's saying is that if your
>> sample size is large enough, a difference may be statistically
>> significant (a term which has a very precise meaning, especially to the
>> Apostles of the Holy 5%) but not large enough to be practically
>> important. [A hypothetical very large sample might show, let us say,
>> that a very expensive diet supplement reduced one's chances of a heart
>> attack by 1/10 of 1%.]  Alternatively, in an imperfectly-controlled
>> study, it may show an effect that - whether large enough to be of
>> interest or not - is too small to ascribe a cause to. [A moderately
>> large study might show that some ethnic group has a 1% higher rate of
>> heart attacks, with a margin of error of +/- 0.2%. But we might have, for
>> an effect of this size, no way of telling whether it's due to genes,
>> diet, socioeconomic factors, recreational drugs, or whatever.]
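
To put numbers on Robert's first bracketed example, here is a minimal
sketch in Python.  The sample sizes and rates are invented, not from
any real trial; the point is only that a tenth of a percentage point
becomes "significant" once n is large enough.

import math

# Hypothetical numbers (nothing from a real study): a huge trial in
# which a supplement lowers heart-attack risk from 10.0% to 9.9%.
n1, n2 = 2_000_000, 2_000_000     # assumed group sizes
p1, p2 = 0.100, 0.099             # observed rates, invented

# Ordinary two-proportion z-test.
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value

print(f"effect = {p1 - p2:.3%}, z = {z:.2f}, p = {p_value:.1g}")
# Prints a p-value well below 0.05: "significant", yet the effect is
# a tenth of a percentage point, which may matter to nobody.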

>I'd add that I think Jerry meant "unrelated" in the sense of independent rather
>than irrelevant (Jerry can correct me if I'm wrong). You can get important
>significant effects, unimportant significant effects, important non-significant
>effects and unimportant non-significant effects.

>For what it's worth, practical importance also depends on many factors other
>than effect size. These include mutability, generalizability, cost, and so on.

This is another reason not to do something as flawed as
significance tests.

It has been argued that there may be many possible situations
calling for action based on observations, and that the observations
need to be summarized so that subsequent investigators can
incorporate the studies into their own work.  But the significance
level, or the p-value, does not provide this summary; the
likelihood function does.  Apart from the rather ridiculous
statement of exactly what a p-value accomplishes, of what use
is it, except as religion?
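
To make that concrete, here is a minimal sketch in Python with
made-up binomial data from two hypothetical studies.  The
(log-)likelihood functions of independent studies simply add, which
is the sense in which the likelihood, unlike a p-value, lets a later
investigator incorporate earlier results; there is no comparable
rule for combining p-values.

import math

# Invented data: (successes, trials) from two hypothetical
# independent studies of the same parameter theta.
studies = [(53, 100), (110, 200)]

def loglik(theta, x, n):
    """Binomial log-likelihood at theta, constant term dropped."""
    return x * math.log(theta) + (n - x) * math.log(1 - theta)

grid = [i / 100 for i in range(1, 100)]   # theta = 0.01 .. 0.99

# Log-likelihoods of independent studies add, so combining evidence
# across studies is just a sum over the same grid.
combined = [sum(loglik(t, x, n) for x, n in studies) for t in grid]

best = grid[max(range(len(grid)), key=lambda i: combined[i])]
print(f"combined maximum-likelihood estimate of theta: {best:.2f}")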





-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN 47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558

