In article <AC09DC4F4DFCD211A83C00805FE6138D369274@NHQJPK1EX2>,
Magill, Brett <[EMAIL PROTECTED]> wrote:
>Seems to me that hypothesis testing remains an essential step. Take for
>instance the following data that I made up just for the purpose of
>illustration and the correlation matrix it produces:

>VAR1  VAR2
>2.00   2.00
>3.00   2.00
>5.00   6.00
>4.00   2.00
>3.00   1.00

>Correlations
>                            VAR1     VAR2
>VAR1  Pearson Correlation  1.000     .765
>      Sig. (2-tailed)        .       .132
>      N                      5        5
>VAR2  Pearson Correlation   .765    1.000
>      Sig. (2-tailed)       .132      .
>      N                      5        5


>Now, .77 is probably a respectable correlation (depending of course on the
>application).  However, the question here is how much faith we have in this
>estimate.  Adopting the traditional alpha level of .05 (since this is not
>real data, there is no reason not to), the observed p of .132 exceeds the
>risk of a Type I error we are willing to accept, so we fail to reject the
>null.  This is not to say that the correlation is zero, but for practical
>purposes with this sample we must treat it as no effect (and here probably
>take our power into consideration).  Effect size is useless without
>significance.  Significance is meaningless without information on effect
>size.
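
For anyone who wants to check the quoted SPSS output, the correlation and
two-tailed p-value can be reproduced from the five data pairs above.  This is
an independent sketch using SciPy's pearsonr (the original poster used SPSS):

```python
# Recompute the Pearson correlation and two-tailed p-value
# from the five (VAR1, VAR2) pairs in the post.
from scipy.stats import pearsonr

var1 = [2.0, 3.0, 5.0, 4.0, 3.0]
var2 = [2.0, 2.0, 6.0, 2.0, 1.0]

r, p = pearsonr(var1, var2)
print(f"r = {r:.3f}, two-tailed p = {p:.3f}")  # r = 0.765, two-tailed p = 0.132
```

The values agree with the SPSS table: r = .765, Sig. (2-tailed) = .132.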


One can make a case for hypothesis testing in SOME
situations.  However, the example above shows some of what
is wrong with it.

Even classical statisticians, faced with the rudiments of
decision theory, will agree that the significance level used
should generally decrease as the sample size increases.  The
converse also holds: while small samples have their own
problems, a larger significance level is appropriate when the
sample is small.  Choosing a significance level without
considering the consequences of incorrectly accepting the
null when it is false fails to address the real problem.
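
Rubin's point can be illustrated with a small simulation: at a fixed alpha of
.05, the probability of "incorrect acceptance" when the null is false depends
heavily on n.  The true rho (set to the observed .765), the sample sizes, and
the simulation settings below are my own assumptions for illustration, not
from the post:

```python
# Estimate the power of the two-tailed Pearson test at alpha = .05
# when the true correlation is as large as the observed .765.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
rho, alpha, sims = 0.765, 0.05, 2000
cov = [[1.0, rho], [rho, 1.0]]  # bivariate normal with correlation rho

power = {}
for n in (5, 15, 30):
    rejections = 0
    for _ in range(sims):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        if pearsonr(x, y)[1] < alpha:  # two-tailed p-value below alpha
            rejections += 1
    power[n] = rejections / sims
    print(f"n = {n:2d}: estimated power at alpha = .05 is {power[n]:.2f}")
```

At n = 5 the test usually fails to reject even though the true correlation is
large, while at n = 30 it almost always rejects; a fixed .05 threshold means
very different error trade-offs at different sample sizes.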

I have seen a paper claim that an effect was not important
because it came out at the .052 level.  This is bad
statistics, and bad science.
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558

