Donald Burrill wrote:
> 
> On Mon, 17 Jul 2000, Simon, Steve, PhD wrote in part:
> 
> > I have a bad joke about statistical software.  I mention a certain
> > software package and say that it is so wonderful.  The best part is
> > that it allows you to run ten different tests of the same hypothesis
> > and then you can pick the test with the smallest p-value.
> 
> Sounds reasonable to me.  Do you have a problem with that?
> 
> After all, for any particular test, the relevant distributional theorem
> generally asserts something to the effect that the probability of a Type
> I error is less than (or less than or equal to) a value, say p_1.  If a
> corresponding theorem applies to each of several applicable tests, with
> values p_1, p_2, ..., p_k, presumably the smallest of these values is
> nearest the true probability.
> 
> The same argument is to be found in many of the standard texts on
> analysis of variance.  In conducting post hoc tests with an experiment-
> wise error rate, one is advised to use the Tukey method for pairwise
> comparisons and the Scheffe' method for more complex comparisons, because
> the confidence intervals are smaller for pairwise comparisons using
> Tukey, and smaller for complex comparisons using Scheffe'.

Except that in the case of contingency tables, one test does not
necessarily dominate another.  If, for example, you were to
choose the smaller P value from the Pearson chi-square and the
likelihood-ratio tests, your true level would be greater
than the nominal 0.05 (unless there's been some recent research
I'm unaware of, which is a possibility; last time I checked, many
years ago, the two tests were "unsurpassed").


=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================