Rob Smith writes:

>I have a question concerning power analysis.  I want to estimate the
>power of a test to detect a difference of a given size.  The problem is
>that I am using a nonparametric test, a Kruskal-Wallis test.  I have no
>idea how to calculate power for such a test; is it even possible (I found
>nothing about it in the textbooks I checked)?

>When I run an ANOVA on these same data, the p-value is nearly the same
>as for the Kruskal-Wallis.  The two differ by less than .03.  So I am
>wondering if it would be reasonable to use the power analysis from the
>parametric test as an estimate of the power of the nonparametric test.
>This idea is attractive to me because I can find information on the
>power of ANOVA in my textbooks.  What would you do?

Hollander and Wolfe have a nice section about power for the
Mann-Whitney-Wilcoxon test starting on page 119. A clever person could adapt
this to the Kruskal-Wallis test; for example, you could show that you have
adequate power for a Bonferroni-adjusted nonparametric multiple comparison
procedure.
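
If you don't want to work through the algebra, a crude alternative is to
estimate power for the Kruskal-Wallis test by simulation. Here is a minimal
Python sketch (this is not the Hollander and Wolfe approach); the group
sizes, the normal distributions, and the size of the shift are assumptions I
made up for illustration, so substitute values that reflect your own
clinically relevant difference.

# Monte Carlo power estimate for the Kruskal-Wallis test.
# Assumptions (not from the original post): three equal-sized groups,
# normally distributed data, and a hypothetical clinically relevant shift.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
n_per_group = 20   # hypothetical sample size per group
shift = 0.8        # hypothetical clinically relevant difference (in SD units)
alpha = 0.05
n_sim = 5000

rejections = 0
for _ in range(n_sim):
    g1 = rng.normal(0.0, 1.0, n_per_group)
    g2 = rng.normal(0.0, 1.0, n_per_group)
    g3 = rng.normal(shift, 1.0, n_per_group)
    _, p = kruskal(g1, g2, g3)   # Kruskal-Wallis test on the three groups
    if p < alpha:
        rejections += 1

print(f"Estimated power: {rejections / n_sim:.3f}")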

There's another issue that you didn't ask about, but I'll tell you anyway
(that's the price you pay for asking for free advice). Calculating power
after the data have been collected is a controversial area. Most post hoc
power calculations are uninformative, because observed power is just an
inverse function of the p-value and tells you nothing new. You can do okay
if you remember to base the calculation on a clinically relevant difference
rather than on the observed difference.
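
For the parametric side, here is a minimal Python sketch of that idea using
statsmodels; the Cohen's f of 0.25, the total sample size of 60, and the
three groups are hypothetical values for illustration, not numbers from your
study.

# One-way ANOVA power at a clinically relevant effect size,
# not at the observed difference.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
power = analysis.power(effect_size=0.25,  # clinically relevant Cohen's f (assumed)
                       nobs=60,           # total sample size, all groups (assumed)
                       alpha=0.05,
                       k_groups=3)        # number of groups (assumed)
print(f"Power at the clinically relevant effect size: {power:.3f}")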

You may be better off using confidence intervals to address the issue of
sample size. If your confidence intervals are so narrow that they lie
entirely inside (or entirely outside) a range of clinical indifference, then
your sample size is more than adequate. If they don't, and the intervals
are wide enough to drive a truck through, then your sample size was
inadequate.
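
Here is a rough Python sketch of that comparison for two groups; the data,
the 95% confidence level, and the limits of the indifference zone are all
made up for illustration.

# Compare a confidence interval for a difference in group means to a
# pre-specified range of clinical indifference (hypothetical numbers).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(10.0, 2.0, 25)   # hypothetical group A measurements
b = rng.normal(11.0, 2.0, 25)   # hypothetical group B measurements

# Pooled-variance 95% confidence interval for the difference in means.
diff = b.mean() - a.mean()
df = len(a) + len(b) - 2
sp2 = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / df
se = np.sqrt(sp2 * (1 / len(a) + 1 / len(b)))
t_crit = stats.t.ppf(0.975, df)
lower, upper = diff - t_crit * se, diff + t_crit * se

indifference = (-0.5, 0.5)      # hypothetical zone of clinical indifference
print(f"95% CI for the difference: ({lower:.2f}, {upper:.2f})")
if upper < indifference[0] or lower > indifference[1]:
    print("CI entirely outside the indifference zone: clearly relevant difference.")
elif indifference[0] <= lower and upper <= indifference[1]:
    print("CI entirely inside the indifference zone: clearly negligible difference.")
else:
    print("CI straddles an indifference limit: the sample size may be inadequate.")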

Of course, it's a whole lot easier to get confidence intervals for the ANOVA
model, so I might as well reinforce the reply of Don Burrill. If the
parametric and nonparametric tests agree, and the parametric test offers so
much more flexibility, why would you not use it?

Steve Simon, [EMAIL PROTECTED], Standard Disclaimer.
STATS - Steve's Attempt to Teach Statistics: http://www.cmh.edu/stats


