[EMAIL PROTECTED] (Nerapa) wrote in message 
news:<[EMAIL PROTECTED]>...
> In what situations do we use nonparametric tests like the Mann-Whitney U test, Durbin-Watson
> test, or Kruskal-Wallis test rather than the F-test and z-test?

The Durbin-Watson is nonparametric? 

The choice between say Kruskal-Wallis vs one-way ANOVA (F test)
or Mann-Whitney vs t-test depends on a variety of considerations
(not all of them even statistical, realistically).

Where there are common parametric and nonparametric choices in
a given situation, the one which probably matters least is the 
Mann-Whitney/t choice.

Consider: if the distribution is close to normal, there is a tiny
(you might say trivial) difference in power. If the distribution 
is somewhat further from normal, the t-test usually isn't terribly 
badly affected (especially with nearly symmetric distributions) - 
most of the effect is on the significance level, so if you can bear 
not having a type-I error rate close to the nominal one, there's
still not much to choose between them.
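To see how little there is between them near normality, here's a minimal sketch using SciPy (the sample sizes, seed, and shift are illustrative assumptions, not anything from a real analysis):

```python
# Sketch: on near-normal data, the t-test and Mann-Whitney usually give
# very similar answers. Sizes, seed, and effect size are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=30)
y = rng.normal(loc=0.5, scale=1.0, size=30)  # modest shift in location

t_res = stats.ttest_ind(x, y)
u_res = stats.mannwhitneyu(x, y, alternative="two-sided")

print(f"t-test p = {t_res.pvalue:.3f}, Mann-Whitney p = {u_res.pvalue:.3f}")
```

With data like these the two p-values typically land close together, which is the point: when the normality assumption roughly holds, the choice barely matters.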

If the distribution is heavily skewed, there's a stronger argument
for using the Mann-Whitney (or, in some circumstances, applying a
transformation to reduce the skewness of the distribution, if a
meaningful transformation exists, and then applying the t-test -
but then you have to be careful about what your hypothesis actually is!).
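A quick sketch of that transformation caveat, with illustrative lognormal data (the parameters and seed are assumptions for the demo): a log transform makes the t-test assumptions far more reasonable, but the null hypothesis then concerns means of the logs (i.e. geometric means), not the raw means.

```python
# Sketch: log-transforming skewed data before a t-test changes the
# hypothesis being tested (means of logs, i.e. geometric means).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.lognormal(mean=0.0, sigma=1.0, size=40)  # heavily right-skewed
b = rng.lognormal(mean=0.3, sigma=1.0, size=40)

raw_t = stats.ttest_ind(a, b)                    # compares raw means; skew hurts it
log_t = stats.ttest_ind(np.log(a), np.log(b))    # compares means of the logs
mw = stats.mannwhitneyu(a, b, alternative="two-sided")

print(f"skew before: {stats.skew(a):.2f}, after log: {stats.skew(np.log(a)):.2f}")
print(f"raw t p = {raw_t.pvalue:.3f}, log t p = {log_t.pvalue:.3f}, "
      f"Mann-Whitney p = {mw.pvalue:.3f}")
```

Note the skewness drops dramatically after the transform, but the log-scale t-test is no longer answering a question about the difference in raw means.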

If there are likely to be a few really extreme outliers, the power
of the t-test can be quite badly affected.

If the values are not independent (e.g. serially correlated), 
the Mann-Whitney is generally more affected than the t.

In the case of one-way ANOVA vs Kruskal-Wallis, the points are 
quite similar to the ones I made above. 
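The mechanics for three or more groups look much the same; here's a minimal sketch with SciPy (group means, sizes, and seed are illustrative assumptions):

```python
# Sketch: one-way ANOVA (F test) vs Kruskal-Wallis for three groups.
# The same parametric/nonparametric trade-offs apply as in the two-sample case.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
g1 = rng.normal(0.0, 1.0, 25)
g2 = rng.normal(0.4, 1.0, 25)
g3 = rng.normal(0.8, 1.0, 25)

f_stat, f_p = stats.f_oneway(g1, g2, g3)
h_stat, h_p = stats.kruskal(g1, g2, g3)
print(f"ANOVA F p = {f_p:.3f}, Kruskal-Wallis p = {h_p:.3f}")
```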

For other parametric/nonparametric pairs, the power effects (both
the loss of power for the nonparametric test when the parametric
assumption holds, and for the parametric test when it doesn't),
the effect on the significance level (of the parametric test when
far from its assumptions), and the effect on both tests of things
like dependence can all differ somewhat from my comments above.

In practice I have a think about the assumptions (or ask 
someone more familiar with the area about them), particularly 
independence and tendency for large outliers,
as well as about what things the intended audience will 
understand and appreciate.

Sometimes I will choose one or the other, sometimes I'll 
calculate both**, sometimes I'll look at a permutation test, 
sometimes at a robustified parametric test, sometimes
(especially with serial dependence) at a more sophisticated
model for the data. In other circumstances I do still other
things.

** if both give similar p-values there is no issue. If they
don't (e.g. one has p=.3 and the other p=.01), it may be an 
indication that an assumption is being violated for one of 
them - in which case you need to be cautious about interpreting 
the results until you understand why they differ so much.
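Here's a sketch of that "calculate both" check, with a single wild outlier planted in otherwise-shifted data (the values, seed, and the 0.1 disagreement threshold are illustrative assumptions):

```python
# Sketch: running both tests as a cross-check. A single extreme outlier
# can pull the two p-values well apart, flagging an assumption problem.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 20)
y = rng.normal(1.0, 1.0, 20)
y[0] = -50.0  # one wild outlier; inflates the t-test's variance estimate

t_p = stats.ttest_ind(x, y).pvalue
u_p = stats.mannwhitneyu(x, y, alternative="two-sided").pvalue
print(f"t-test p = {t_p:.3f}, Mann-Whitney p = {u_p:.3f}")

if abs(t_p - u_p) > 0.1:  # illustrative threshold, not a formal rule
    print("Large disagreement - investigate the assumptions before "
          "interpreting either result.")
```

The rank-based Mann-Whitney barely notices the outlier, while the t-test's pooled variance is blown up by it; the resulting gap in p-values is exactly the warning sign described above.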

Glen
=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
.                  http://jse.stat.ncsu.edu/                    .
=================================================================