At 7:34 PM +0000 12/3/01, Jerry Dallal wrote:
>Don't do one-tailed tests.
If you are going to do any tests at all, it makes more sense to do
one-tailed tests. The resulting p value actually means something that
folks can understand: it's the probability that the true value of the
effect is opposite in sign to the one you have observed.
Example: you observe an effect of +5.3 units, one-tailed p = 0.04.
Therefore there is a probability of 0.04 that the true value is less
than zero.
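As a sketch of the arithmetic behind that example (the standard error here is a hypothetical value back-solved so that the one-tailed p comes out at 0.04, assuming a normally distributed estimate):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

effect = 5.3  # observed effect, as in the example above
# Back-solve the standard error that makes the one-tailed p equal 0.04:
se = effect / z.inv_cdf(1 - 0.04)

# One-tailed p: the chance of an estimate at least this far from zero,
# in this direction, if the true effect were exactly zero...
one_tailed_p = 1 - z.cdf(effect / se)

# ...which, with a flat (no prior knowledge) prior, equals the posterior
# probability that the true effect is opposite in sign to the observed one:
prob_true_negative = z.cdf((0 - effect) / se)

print(one_tailed_p, prob_true_negative)  # both come out at about 0.04
```

The two numbers agreeing is exactly the flat-prior equivalence the Bayesian point below turns on.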
There was a discussion of this notion a month or so ago. A Bayesian
on this list made the point that the one-tailed p has this meaning
only if you have absolutely no prior knowledge of the true value.
Sure, no problem.
But why test at all? Just show the 95% confidence limits for your
effects, and interpret them: "The effect could be as big as <upper
confidence limit>, which would mean.... Or it could be <lower
confidence limit>, which would represent... Therefore... " Doing it
in this way automatically addresses the question of the power of your
study, which reviewers are starting to ask about. If your study turns
out to be underpowered, you can really impress the reviewers by
estimating the sample size you would (probably) need to get a
clear-cut effect. I can explain, if anyone is listening...
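Here is a sketch of that confidence-limits approach with made-up numbers (it assumes a normally distributed estimate with known standard error, and that the interval width shrinks with the square root of the sample size):

```python
from statistics import NormalDist

z95 = NormalDist().inv_cdf(0.975)  # ~1.96 for a 95% interval

# Hypothetical study result: observed effect, its standard error, sample size
effect, se, n = 5.3, 3.0, 20

lower = effect - z95 * se
upper = effect + z95 * se
print(f"effect could be as big as {upper:.1f} or as small as {lower:.1f}")

# The interval spans zero, so the study is underpowered for a clear-cut
# answer.  The half-width scales as 1/sqrt(n), so to shrink it to a target
# (say, the observed effect itself, so the interval would exclude zero)
# you would need roughly:
target_halfwidth = effect
n_needed = n * (z95 * se / target_halfwidth) ** 2
print(f"roughly n = {n_needed:.0f} instead of n = {n}")
```

Interpreting the two limits in words, as suggested above, is then just reading `upper` and `lower` back out: "the effect could be as big as 11.2, or as small as -0.6, which would represent..."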
Will
--
Will G Hopkins, PhD FACSM
University of Otago, Dunedin NZ
Sportscience: http://sportsci.org
A New View of Statistics: http://newstats.org
Sportscience Mail List: http://sportsci.org/forum
ACSM Stats Mail List: http://sportsci.org/acsmstats
----------------------------------------------------
Be creative: break rules.