Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-10 Thread Brian Cade

Yes, I think we need to be more careful about what is being discussed.  If
you are using "nonparametric" to refer only to tests based on ranks like M-W,
that is a very different definition than if you are referring to tests based
on permutation of test statistics under the null hypothesis.  While M-W and
other rank tests can be evaluated within, and indeed were developed from, a
permutation framework, a far greater range of tests is possible in that
framework.

We can do permutation tests for conditional means in linear models, where the
permutation version of the F-test will differ little from the normal-theory
version, but clearly we are testing estimates of parameters.  We can do
similar procedures where we're estimating conditional medians (or some other
quantile) in a linear model, and the estimates and test statistics will have
very different statistical performance than estimates and permutation tests
for conditional means.  But these are still tests of estimates of parameters,
just not the mean.  We can also perform omnibus distributional tests such as
MRPP, where no specific parameter is being tested.

The real advantage of thinking about "nonparametric" or "distribution-free"
approaches is that, by judicious use of certain test statistics or estimates
evaluated via permutation theory, it is possible to detect important,
relevant effects that are not detected well with tests and estimates of means
(whether evaluated by permutation theory or normal theory).

Brian Cade (USGS)
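
As a concrete illustration of the simplest case described above, a permutation
test for a regression slope can be sketched in a few lines of Python (the
simulated data, helper names, and permutation count are illustrative
assumptions, not anything from the original post):

import numpy as np

rng = np.random.default_rng(0)

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    x = x - x.mean()
    return np.dot(x, y) / np.dot(x, x)

def permutation_pvalue(x, y, n_perm=9999):
    """Two-sided permutation p-value for H0: slope = 0 (permute the response)."""
    observed = abs(slope(x, y))
    count = 0
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        if abs(slope(x, y_perm)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Example with simulated data
x = rng.normal(size=30)
y = 0.5 * x + rng.normal(size=30)
print(permutation_pvalue(x, y))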

Rich Ulrich wrote:

>  - I have a comment on an offhand remark of Glen's, at the start of
> his interesting posting -
>
> On Tue, 07 Dec 1999 15:58:11 +1100, Glen Barnett
> <[EMAIL PROTECTED]> wrote:
>
> > Alex Yu wrote:
> > >
> > > Disadvantages of non-parametric tests:
> > >
> > > Losing precision: Edgington (1995) asserted that when more precise
> > > measurements are available, it is unwise to degrade the precision by
> > > transforming the measurements into ranked data.
> >
> > So this is an argument against rank-based nonparametric tests
> > rather than nonparametric tests in general. In fact, I think
> > you'll find Edgington highly supportive of randomization procedures,
> > which are nonparametric.
> >
>  - In my vocabulary, these days, "nonparametric"  starts out with data
> being ranked, or otherwise being placed into categories -- it is the
> infinite parameters involved in that sort of non-reversible re-scoring
> which earns the label, nonparametric.  (I am still trying to get my
> definition to be complete and concise.)
>
> I know that when *nonparametric*  and  *distribution-free*  were the
> two alternatives to ANOVAs, either of the two labels was slapped onto
> people's pet procedures, fairly  indiscriminately;  and a lack of
> discrimination seems to have widened to encompass  *robust*,  later
> on.  Okay, I see that exact evaluation by randomization of a fixed
> sample does not use a t or F distribution for its p-levels.   Okay, I
> see that it is not ANOVA.   But, I'm sorry,  I don't regard a test as
> nonparametric which *does*  preserve and use the original metric and
> means.  Comparison of means is parametric, and that contrasts to
> nonparametric.
>
> Similarly, bootstrapping is a method of "robust variance estimation"
> but it does not change the metric like a power transformation does, or
> abandon the metric like a rank-order transformation does.  If it were
> proper  terminology to say randomization is nonparametric, you would
> probably want to say bootstrapping is nonparametric, too.  (I think
> some people have done so; but it is not widespread.)
>
> --
> Rich Ulrich, [EMAIL PROTECTED]
> http://www.pitt.edu/~wpilib/index.html





Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-09 Thread Robert Dawson

Rich Strauss wrote:
> In my fields of interest (ecology and evolutionary biology), it is becoming
> increasingly common to refer to two "kinds" of bootstrapping: nonparametric
> bootstrapping, in which replicate samples are drawn randomly with
> replacement from the original sample; and parametric bootstrapping, in
> which samples are drawn randomly from a (usually normal) distribution
> having the same mean and variance as the original sample.

I suppose the justification for the latter is that it avoids certain
dissimilarities between the true and empirical distributions (e.g.,
granularity).  Presumably smoothing the empirical distribution would have
the same effect, with less violence to the true shape of the distribution.
A kernel smoother would be particularly easy, as it would correspond to
adding a small random perturbation to each element of the bootstrap sample.
Does anybody know a good source on this?  In particular, how does one decide
on the shape and size of the perturbation?  Or are there good reasons not to
do this?
-Robert Dawson
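
One way to carry out the kernel-smoothed bootstrap described above, as a
minimal Python sketch (the Gaussian kernel, the Silverman-type bandwidth,
and the example data are assumptions, not a recommendation from the thread):

import numpy as np

rng = np.random.default_rng(1)

def smoothed_bootstrap(data, n_boot=2000, statistic=np.median):
    """Resample with replacement, then add a small Gaussian perturbation
    to each resampled value (bandwidth from Silverman's rule of thumb)."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    h = 0.9 * min(data.std(ddof=1),
                  (np.percentile(data, 75) - np.percentile(data, 25)) / 1.34) * n ** (-1 / 5)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(data, size=n, replace=True)
        stats[b] = statistic(resample + rng.normal(scale=h, size=n))
    return stats

sample = rng.exponential(size=40)
boot = smoothed_bootstrap(sample)
print(np.percentile(boot, [2.5, 97.5]))   # rough percentile interval for the median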



Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-08 Thread Glen Barnett

Rich Ulrich wrote:
>  - In my vocabulary, these days, "nonparametric"  starts out with data
> being ranked, or otherwise being placed into categories -- it is the
> infinite parameters involved in that sort of non-reversible re-scoring
> which earns the label, nonparametric.  (I am still trying to get my
> definition to be complete and concise.)

Well, I am happy for you to use this definition of nonparametric now 
that you've said what you want it to mean, but it isn't exactly
what most statisticians - including those of us that distinguish
between the terms "distribution-free" and "nonparametric" - mean 
by "nonparametric", so you'll have to excuse my earlier ignorance 
of your definition.

If my recollection is correct, a parametric procedure is one where the
entire distribution is specified up to a finite number of parameters,
whereas a nonparametric procedure is one where the distribution
can't be (or isn't) specified up to a finite number of parameters.
This typically includes the usual distribution-free
procedures, including many rank-based procedures, but it also 
includes many other things - including some that don't transform 
the data in any way, and even some based on means.

So, for example, ordinary simple linear regression is parametric,
because the distribution of y|x is specified, up to the value of 
the parameters specifying the intercept and slope of the line, and
the variance about the line.

Nonparametric regression (as the term is typically  
used in the literature), by contrast, is effectively
infinite-parametric, because the distribution of y|x
doesn't depend only on a finite number of parameters 
(often the distribution *about* E[y|x] is parametric 
- typically gaussian - but E[y|x] itself is where the 
infinite-parametric part comes from).
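
To make the contrast concrete, a minimal sketch (the Nadaraya-Watson kernel
form, the bandwidth, and the simulated data are purely illustrative
assumptions):

import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, size=100))
y = np.sin(x) + rng.normal(scale=0.3, size=100)

# Parametric: y|x ~ Normal(a + b*x, sigma^2), a finite number of parameters.
b, a = np.polyfit(x, y, 1)

# Nonparametric: kernel (Nadaraya-Watson) estimate of E[y|x], which is not
# restricted to any finite-parameter family of mean functions.
def kernel_regression(x0, x, y, bandwidth=0.5):
    w = np.exp(-0.5 * ((x0 - x) / bandwidth) ** 2)
    return np.sum(w * y) / np.sum(w)

grid = np.linspace(0, 10, 5)
print("linear fit: ", a + b * grid)
print("kernel fit: ", [round(kernel_regression(g, x, y), 2) for g in grid])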

Nonparametric regression would not seem to fit your definition 
of "nonparametric", since your usage seems to require some
loss of information through ranking or categorisation. 

Once we start using the same terminology, we tend to find the
disagreements die down a bit. 

Glen



Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-08 Thread Glen Barnett

Frank E Harrell Jr wrote:
> 
> > > Alex Yu wrote:
> > > >
> > > > Disadvantages of non-parametric tests:
> > > >
> > > > Losing precision: Edgington (1995) asserted that when more precise
> > > > measurements are available, it is unwise to degrade the precision by
> > > > transforming the measurements into ranked data.
> 
> Edgington's comment is off the mark in most cases.  The efficiency of the
> Wilcoxon-Mann-Whitney test is 3/pi (about 0.955) with respect to the t-test
> IF THE DATA ARE NORMAL.  If they are non-normal, the relative
> efficiency of the Wilcoxon test can be arbitrarily better than that of the t-test.
> Likewise, Spearman's correlation test is quite efficient (I think the
> efficiency is 9/pi^2) relative to the Pearson r test if the data are
> bivariate normal.
> 
> Where you lose efficiency with nonparametric methods is with estimation
> of absolute quantities, not with comparing groups or testing correlations.
> The sample median has efficiency of only 2/pi against the sample mean
> if the data are from a normal distribution.

Yes, the median is inefficient at the normal. This is the
location estimator corresponding to the sign test in the one-sample
case. But if you use the location estimator corresponding to the 
signed-rank test (say) instead, the efficiency improves substantially.

Glen
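
A quick Monte Carlo check of these efficiency figures, as a sketch (the
sample size and replication count are arbitrary choices):

import numpy as np

rng = np.random.default_rng(3)

def hodges_lehmann(x):
    """Hodges-Lehmann estimator: median of all pairwise Walsh averages,
    the location estimator corresponding to the signed-rank test."""
    i, j = np.triu_indices(len(x))        # includes i == j
    return np.median((x[i] + x[j]) / 2)

n, reps = 30, 4000
means, medians, hls = [], [], []
for _ in range(reps):
    x = rng.normal(size=n)
    means.append(x.mean())
    medians.append(np.median(x))
    hls.append(hodges_lehmann(x))

v_mean = np.var(means)
print("efficiency of median vs mean:", v_mean / np.var(medians))  # about 2/pi = 0.64
print("efficiency of H-L vs mean:   ", v_mean / np.var(hls))      # close to 0.95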



Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-08 Thread Glen Barnett

Robert Dawson wrote:

[a long description of an intransitivity problem with WMW]

This is very interesting!

I'm interested to know what happens in these cases with 
Kruskal-Wallis - presumably it will reject.

It does make the point (which I always try to make clear
to people) that unless you have a shift-alternative (*or*
what would be a shift-alternative after a monotonic 
transformation), you probably need to think about the
question of interest more carefully. (i.e. what is it
you're really interested in?) It often turns out in those
cases that any difference in distribution is of interest, 
but good power against location shift is desired. This
can be done without pounding WMW's square peg into that
particular round hole.

Glen



Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-08 Thread Jan de Leeuw

Parametric/nonparametric bootstrap is standard terminology, used in
the books by Efron/Tibshirani, Davison/Hinkley, Chernick, Shao/Tu,
and so on.  It's not new; it's by now 20 years old.  The parametric
bootstrap is already in Efron (1979), and it is just as traditional as
the nonparametric one.  Both are forms of Monte Carlo simulation (or
both are not).


At 8:12 PM -0600 12/8/99, Rich Strauss wrote:
>At 12:04 PM 12/8/99 -0500, Rich Ulrich wrote:
>
>-- snip --
>  >Similarly, bootstrapping is a method of "robust variance estimation"
>  >but it does not change the metric like a power transformation does, or
>  >abandon the metric like a rank-order transformation does.  If it were
>  >proper  terminology to say randomization is nonparametric, you would
>  >probably want to say bootstrapping is nonparametric, too.  (I think
>  >some people have done so; but it is not widespread.)
>
>In my fields of interest (ecology and evolutionary biology), it is becoming
>increasingly common to refer to two "kinds" of bootstrapping: nonparametric
>bootstrapping, in which replicate samples are drawn randomly with
>replacement from the original sample; and parametric bootstrapping, in
>which samples are drawn randomly from a (usually normal) distribution
>having the same mean and variance as the original sample.  The former is
>bootstrapping in the traditional sense, of course, while the latter is a
>form of Monte Carlo simulation.  Unfortunately, the new terminology seems
>to be spreading rapidly.
>
>Rich Strauss

===
Jan de Leeuw; Professor and Chair, UCLA Department of Statistics;
US mail: 8142 Math Sciences Bldg, Box 951554, Los Angeles, CA 90095-1554
phone (310)-825-9550;  fax (310)-206-5658;  email: [EMAIL PROTECTED]
http://www.stat.ucla.edu/~deleeuw and http://home1.gte.net/datamine/

  No matter where you go, there you are. --- Buckaroo Banzai




Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-08 Thread Rich Strauss

At 12:04 PM 12/8/99 -0500, Rich Ulrich wrote:

-- snip -- 
>Similarly, bootstrapping is a method of "robust variance estimation"
>but it does not change the metric like a power transformation does, or
>abandon the metric like a rank-order transformation does.  If it were
>proper  terminology to say randomization is nonparametric, you would
>probably want to say bootstrapping is nonparametric, too.  (I think
>some people have done so; but it is not widespread.)

In my fields of interest (ecology and evolutionary biology), it is becoming
increasingly common to refer to two "kinds" of bootstrapping: nonparametric
bootstrapping, in which replicate samples are drawn randomly with
replacement from the original sample; and parametric bootstrapping, in
which samples are drawn randomly from a (usually normal) distribution
having the same mean and variance as the original sample.  The former is
bootstrapping in the traditional sense, of course, while the latter is a
form of Monte Carlo simulation.  Unfortunately, the new terminology seems
to be spreading rapidly.
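
The two "kinds" of bootstrapping described above, side by side in a minimal
sketch (the statistic, the gamma-distributed example data, and the normal
model in the parametric version are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(4)
data = rng.gamma(shape=2.0, scale=1.5, size=50)   # some observed sample
n = len(data)

def bootstrap_se(statistic, sampler, n_boot=2000):
    """Standard error of a statistic over replicate samples from `sampler`."""
    return np.std([statistic(sampler()) for _ in range(n_boot)], ddof=1)

# Nonparametric: replicate samples drawn with replacement from the data.
nonparam = bootstrap_se(np.mean, lambda: rng.choice(data, size=n, replace=True))

# Parametric: replicate samples drawn from N(mean, sd) fitted to the data.
mu, sigma = data.mean(), data.std(ddof=1)
param = bootstrap_se(np.mean, lambda: rng.normal(mu, sigma, size=n))

print("nonparametric bootstrap SE of the mean:", nonparam)
print("parametric bootstrap SE of the mean:   ", param)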

Rich Strauss






Dr Richard E Strauss
Biological Sciences  
Texas Tech University   
Lubbock TX 79409-3131

Email: [EMAIL PROTECTED]
Phone: 806-742-2719
Fax: 806-742-2963 




Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-08 Thread Frank E Harrell Jr

> > Alex Yu wrote:
> > >
> > > Disadvantages of non-parametric tests:
> > >
> > > Losing precision: Edgington (1995) asserted that when more precise
> > > measurements are available, it is unwise to degrade the precision by
> > > transforming the measurements into ranked data.

Edgington's comment is off the mark in most cases.  The efficiency of the
Wilcoxon-Mann-Whitney test is 3/pi (about 0.955) with respect to the t-test
IF THE DATA ARE NORMAL.  If they are non-normal, the relative
efficiency of the Wilcoxon test can be arbitrarily better than that of the t-test.
Likewise, Spearman's correlation test is quite efficient (I think the
efficiency is 9/pi^2) relative to the Pearson r test if the data are
bivariate normal.

Where you lose efficiency with nonparametric methods is with estimation
of absolute quantities, not with comparing groups or testing correlations.
The sample median has efficiency of only 2/pi against the sample mean
if the data are from a normal distribution.
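
A small power simulation along these lines, as a sketch (the shift, the
sample sizes, and the choice of a t distribution with 3 df for the
non-normal case are arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def power(draw, shift=0.7, n=25, reps=2000, alpha=0.05):
    """Estimated power of the two-sample t-test and WMW test for a shift."""
    t_rej = w_rej = 0
    for _ in range(reps):
        x, y = draw(n), draw(n) + shift
        t_rej += stats.ttest_ind(x, y).pvalue < alpha
        w_rej += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
    return t_rej / reps, w_rej / reps

# Nearly equal power at the normal; WMW does better for heavy-tailed data.
print("normal data (t, WMW):", power(lambda n: rng.normal(size=n)))
print("t_3 data    (t, WMW):", power(lambda n: rng.standard_t(3, size=n)))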
--
Frank E Harrell Jr
Professor of Biostatistics and Statistics
Division of Biostatistics and Epidemiology
Department of Health Evaluation Sciences
University of Virginia School of Medicine
http://hesweb1.med.virginia.edu/biostat




Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-08 Thread Rich Ulrich

 - I have a comment on an offhand remark of Glen's, at the start of
his interesting posting -

On Tue, 07 Dec 1999 15:58:11 +1100, Glen Barnett
<[EMAIL PROTECTED]> wrote:

> Alex Yu wrote:
> > 
> > Disadvantages of non-parametric tests:
> > 
> > Losing precision: Edgington (1995) asserted that when more precise
> > measurements are available, it is unwise to degrade the precision by
> > transforming the measurements into ranked data.
> 
> So this is an argument against rank-based nonparametric tests
> rather than nonparametric tests in general. In fact, I think
> you'll find Edgington highly supportive of randomization procedures,
> which are nonparametric.
> 
 - In my vocabulary, these days, "nonparametric"  starts out with data
being ranked, or otherwise being placed into categories -- it is the
infinite parameters involved in that sort of non-reversible re-scoring
which earns the label, nonparametric.  (I am still trying to get my
definition to be complete and concise.)

I know that when *nonparametric*  and  *distribution-free*  were the
two alternatives to ANOVAs, either of the two labels was slapped onto
people's pet procedures, fairly  indiscriminately;  and a lack of
discrimination seems to have widened to encompass  *robust*,  later
on.  Okay, I see that exact evaluation by randomization of a fixed
sample does not use a t or F distribution for its p-levels.   Okay, I
see that it is not ANOVA.   But, I'm sorry,  I don't regard a test as
nonparametric which *does*  preserve and use the original metric and
means.  Comparison of means is parametric, and that contrasts to
nonparametric.

Similarly, bootstrapping is a method of "robust variance estimation"
but it does not change the metric like a power transformation does, or
abandon the metric like a rank-order transformation does.  If it were
proper  terminology to say randomization is nonparametric, you would
probably want to say bootstrapping is nonparametric, too.  (I think
some people have done so; but it is not widespread.)

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html



Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-07 Thread Robert Dawson

Glen Barnett wrote:
>
> But since WMW is completely insensitive to a change in spread without
> a change in location, if either were possible, a rejection would
> imply that there was indeed a location difference of some kind. This
> objection strikes me as strange indeed. Does Johnson not understand
> what WMW is doing? Why on earth does he think that a t-test suffers
> any less from these problems than WMW?
>
> Similarly, a change in shape sufficient to get a rejection of a WMW
> test would imply a change in location (in the sense that the "middle"
> had moved, though the term 'location' becomes somewhat harder to pin
> down precisely in this case).  e.g. (use a monospaced font to see this):
>
> :. .:
> ::.   =>  .::
> ...   ...
> a b   a b
>
> would imply a different 'location' in some sense, which WMW will
> pick up. I don't understand the problem - a t-test will also reject
> in this case; it suffers from this drawback as well (i.e. they are
> *both* tests that are sensitive to location differences, insensitive
> to spread differences without a corresponding location change, and
> both pick up a shape change that moves the "middle" of the data).


In fact, it can be shown (I can send details - and a preprint- to
anybody interested) that a weakness - at least in principle - of the WMW
test is that it *fails* to be a test of location, in that it may exhibit
cyclicity between three sets of data, or even consistently cyclic behaviour
between three populations as sample size -> infinity.

(A test is "cyclic" if it can imply A > B > C > A, rejecting the null
hypothesis in each case. This is stronger than "intransitivity" in which the
test implies A>B>C but fails to reject A=C.  Student's t test (with pooled
variance)  can exhibit the latter behaviour  (suppose n1 = n3 = 2, n2 = 100;
xbar1 = -2, xbar2 = 0, xbar3 = 2; and s1 =s2 =s3 = 1). but not the former,
as it can never imply mu1 > mu2 if xbar1 <= xbar2.)
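
Those summary statistics can be checked directly with the usual pooled
two-sample t formula; a short sketch (nothing here beyond the numbers given
above):

from math import sqrt
from scipy import stats

def pooled_t_p(m1, s1, n1, m2, s2, n2):
    """Two-sided p-value for a pooled-variance t test from summary statistics."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
    t = (m2 - m1) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return 2 * stats.t.sf(abs(t), df)

print(pooled_t_p(-2, 1, 2, 0, 1, 100))   # 1 vs 2: p about 0.006, reject
print(pooled_t_p(0, 1, 100, 2, 1, 2))    # 2 vs 3: p about 0.006, reject
print(pooled_t_p(-2, 1, 2, 2, 1, 2))     # 1 vs 3: p about 0.06, not rejected (only 2 df)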

The simplest example of cyclic behaviour for the WMW test uses made-up
(or large) data sets based on Efron's intransitive dice, labelled
{1,1,5,5,5,5},{3,3,3,4,4,4} and {2,2,2,2,6,6}. Details are left to the
reader.  This is Barnett's "change of shape".

Potthoff (1963) showed that WMW is a test for the median/mean between any
two symmetric distributions; and it is clear that it is a test for the
median/mean within any shifted family.

However (Dawson, 1997, unpublished), for a Behrens-Fisher family of
asymmetric distributions, cyclic behaviour is typically exhibited, so a
change of shape is *not* necessary.  In particular, if f_X(x) is analytic
with all moments existing, the WMW test is a test of location for the
Behrens-Fisher family generated by f_X(x) if and only if f_X(x) is
symmetric.  For more general distributions, a necessary and sufficient
condition is that if we let f_X(x) = f1(x) + f2(x) where f1 is nonzero only
below the median (WLOG 0) and f2 only above, and gi(x) = e^x fi(e^x), then
g1 and g2 have the same autocorrelation. (Don't ask me why, I just did the
calculus & that's what it said...)

Notwithstanding all of the above, the cyclicity phenomenon is never very
strong. Using a result of Steinhaus and Trybula (1959), we can show that
even three made-up data sets cannot exhibit cyclicity for two-tailed WMW
tests at the 5% significance level unless each sample size is at least 50.
For example:

            Sample 1   Sample 2   Sample 3
   X = 1       19          0          0
   X = 2        0          0         31
   X = 3        0         50          0
   X = 4       31          0          0
   X = 5        0          0         19

but no smaller sample size will work.  Using random samples from populations
divided in these proportions, we would of course need sample sizes larger than
50 to have this happen with any great frequency.
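
Written out as data and run through a standard implementation of the
two-sided WMW test, the three samples in the table behave as claimed; a
sketch (scipy's mannwhitneyu is used purely for convenience):

import numpy as np
from scipy.stats import mannwhitneyu

# The three samples of size 50 from the table above.
s1 = np.repeat([1, 4], [19, 31])
s2 = np.repeat([3], [50])
s3 = np.repeat([2, 5], [31, 19])

# Each pairwise test rejects at the 5% level, and the implied directions
# are cyclic: 1 above 2, 2 above 3, 3 above 1.
for name, (a, b) in {"1 vs 2": (s1, s2),
                     "2 vs 3": (s2, s3),
                     "3 vs 1": (s3, s1)}.items():
    res = mannwhitneyu(a, b, alternative="two-sided")
    print(name, " U =", res.statistic, " p =", round(res.pvalue, 4))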

As a final example, consider the shifted exponential distributions as a
fairly realistic model of a Behrens-Fisher family. It can be shown that, for
random samples from three member distributions f_a, f_b, f_c chosen so that
the expected values of the pairwise WMW test statistics imply A>B>C>A for
hypothetical "locations" A,B,C,  at least one test will have a power of less
than 50% (for two-sided 5% significance level tests) unless the sample sizes
are greater than about 800. (As n -> infinity, the power of all three tests
goes to 1, of course; but it takes its time doing so!)

Thus, while the phenomenon is in one sense very widespread, it would
seem that there are few naturally occurring triples of independent data sets
for which the WMW is cyclic; and examples for which the Behrens-Fisher model
is plausible may be very few and far between.

-Robert Dawson




Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-07 Thread Glen Barnett

Alex Yu wrote:
> 
> Disadvantages of non-parametric tests:
> 
> Losing precision: Edgington (1995) asserted that when more precise
> measurements are available, it is unwise to degrade the precision by
> transforming the measurements into ranked data.

So this is an argument against rank-based nonparametric tests
rather than nonparametric tests in general. In fact, I think
you'll find Edgington highly supportive of randomization procedures,
which are nonparametric.

In fact, surprising as it may seem, a lot of the location 
information in a two sample problem is in the ranks. Where
you really start to lose information is in ignoring ordering
when it is present.
 
> Low power: Generally speaking, the statistical power of non-parametric
> tests is lower than that of their parametric counterparts, except on a few
> occasions (Hodges & Lehmann, 1956; Tanizaki, 1997).

When the parametric assumptions hold, yes. e.g. if you assume normality
and the data really *are* normal. When the parametric assumptions are
violated, it isn't hard to beat the standard parametric techniques.

However, frequently that loss is remarkably small when the parametric
assumption holds exactly. In cases where they both do badly, the
parametric may outperform the nonparametric by a more substantial
margin (that is, when you should use something else anyway - for
example, a t-test outperforms a WMW when the distributions are
uniform).

> Inaccuracy in multiple violations: Non-parametric tests tend to produce
> biased results when multiple assumptions are violated (Glass, 1996;
> Zimmerman, 1998).

Sometimes you only need one violation:
Some nonparametric procedures are even more badly affected by
some forms of non-independence than their parametric equivalents.
 
> Testing distributions only: Further, non-parametric tests are criticized
> for being incapable of answering the focused question. For example, the
> WMW procedure tests whether the two distributions are different in some
> way but does not show how they differ in mean, variance, or shape. Based
> on this limitation, Johnson (1995) preferred robust procedures and data
> transformation to non-parametric tests.

But since WMW is completely insensitive to a change in spread without
a change in location, if either were possible, a rejection would 
imply that there was indeed a location difference of some kind. This
objection strikes me as strange indeed. Does Johnson not understand
what WMW is doing? Why on earth does he think that a t-test suffers
any less from these problems than WMW?
 
Similarly, a change in shape sufficient to get a rejection of a WMW
test would imply a change in location (in the sense that the "middle"
had moved, though the term 'location' becomes somewhat harder to pin
down precisely in this case).  e.g. (use a monospaced font to see this):

:. .:
::.   =>  .::
...   ...
a b   a b
 
would imply a different 'location' in some sense, which WMW will
pick up. I don't understand the problem - a t-test will also reject
in this case; it suffers from this drawback as well (i.e. they are
*both* tests that are sensitive to location differences, insensitive
to spread differences without a corresponding location change, and
both pick up a shape change that moves the "middle" of the data).

However, if such a change in shape were anticipated, simply testing
for a location difference (whether by t-test or not) would be silly. 

Nonparametric (notably rank-based) tests do have some problems,
but making progress on understanding just what they are is 
difficult when such seemingly spurious objections are thrown in.

His preference for robust procedures makes some sense, but the
preference for (presumably monotonic) transformation I would
see as an argument for a rank-based procedure. e.g. let's say
we are in a two-sample situation, and we decide to use a t-test
after taking logs, because the data are then reasonably normal...
in that situation, the WMW procedure gives the same p-value as
for the untransformed data. However, let's assume that the
log transform wasn't quite right... maybe not strong enough. When
you finally find the "right" transformation to normality, you
gain roughly an extra 5% efficiency over the WMW you started
with. Except of course, you never know you have the right
transformation - and if the distribution the data come from is
still skewed/heavy-tailed after transformation (maybe they were
log-gamma to begin with, or something), then you may still be
better off using WMW.
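
That invariance is easy to verify numerically; a sketch (the lognormal
example is just one convenient monotone-transform case):

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.lognormal(mean=0.0, sigma=1.0, size=30)
y = rng.lognormal(mean=0.8, sigma=1.0, size=30)

# WMW depends on the data only through the ranks, so its p-value is
# unchanged by the (monotone) log transform; the t-test p-value changes.
for label, (a, b) in {"raw data": (x, y),
                      "log data": (np.log(x), np.log(y))}.items():
    w = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
    t = stats.ttest_ind(a, b).pvalue
    print(label, " WMW p =", round(w, 4), " t-test p =", round(t, 4))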

Do you have a full reference for Johnson? I'd like to read what
the reference actually says.

Glen



Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-06 Thread Jerry Dallal

[EMAIL PROTECTED] (Robert Dawson) writes:
> Jerry Dallal wrote:
> 
>> Here's one.
>> Lack of readily available software to produce confidence intervals.
>> In some simple situations, confidence intervals for some population
>> quantities are available through the order statistics, but I don't
>> know of any readily available software that produces them.
> 
> MINITAB has commands WINT (signed-rank interval), SINT (sign interval), and
> automatically produces an interval for difference of medians with the MANN
> (-Whitney-Wilcoxon) command, as well as the hypothesis test.

Great!  Maybe it'll force other vendors to do likewise.
Thanks for the heads-up. 



Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-06 Thread dennis roberts

At 10:33 AM 12/6/99 -0700, Alex Yu wrote:

>Disadvantages of non-parametric tests:


Seems to me that before one lists out "dis"advantages ... or, for that
matter, "ad"vantages ... one needs to be very clear on what one wants to
know about the target population.

Now, in some cases there might be several approximately equal alternative
parameters, or pieces of population information, that would be sufficient
for your purposes; in that case, using some technique with better power,
etc., might be helpful.  But if we really are interested in some element of
the population, AND the technique for "inferencing" it happens to be
non-parametric, AND it is that and ONLY that which we are interested in ...
then we might have to give up some power.
--
208 Cedar Bldg., University Park, PA 16802
AC 814-863-2401Email mailto:[EMAIL PROTECTED]
WWW: http://roberts.ed.psu.edu/users/droberts/drober~1.htm
FAX: AC 814-863-1002



Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-06 Thread Alex Yu


Disadvantages of non-parametric tests:

Losing precision: Edgington (1995) asserted that when more precise 
measurements are available, it is unwise to degrade the precision by 
transforming the measurements into ranked data.

Low power: Generally speaking, the statistical power of non-parametric 
tests is lower than that of their parametric counterparts, except on a few 
occasions (Hodges & Lehmann, 1956; Tanizaki, 1997). 

Inaccuracy in multiple violations: Non-parametric tests tend to produce 
biased results when multiple assumptions are violated (Glass, 1996; 
Zimmerman, 1998). 

Testing distributions only: Further, non-parametric tests are criticized 
for being incapable of answering the focused question. For example, the 
WMW procedure tests whether the two distributions are different in some 
way but does not show how they differ in mean, variance, or shape. Based 
on this limitation, Johnson (1995) preferred robust procedures and data 
transformation to non-parametric tests. 

Hope it helps.


Chong-ho (Alex) Yu, Ph.D., CNE, MCSE
Instruction and Research Support
Information Technology
Arizona State University
Tempe AZ 85287-0101
Voice: (602)965-7402
Fax: (602)965-6317
Email: [EMAIL PROTECTED]
URL:http://seamonkey.ed.asu.edu/~alex/
   
  



Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-06 Thread Robert Dawson

Jerry Dallal wrote:

> Here's one.
> Lack of readily available software to produce confidence intervals.
> In some simple situations, confidence intervals for some population
> quantities are available through the order statistics, but I don't
> know of any readily available software that produces them.


MINITAB has commands WINT (signed-rank interval), SINT (sign interval), and
automatically produces an interval for difference of medians with the MANN
(-Whitney-Wilcoxon) command, as well as the hypothesis test.

-Robert Dawson



Re: Disadvantage of Non-parametric vs. Parametric Test

1999-12-06 Thread Jerry Dallal

"boonlert" <[EMAIL PROTECTED]> writes:
> Dear All
> Could anyone kindly tell me a major disadvantage of using a
> non-parametric test compared to a parametric test?
> Your response will be appreciated.
> 

Here's one.
Lack of readily available software to produce confidence intervals.
In some simple situations, confidence intervals for some population 
quantities are available through the order statistics, but I don't
know of any readily available software that produces them.
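
For reference, the order-statistic interval for the median mentioned above
takes only a few lines to compute; a sketch (the binomial argument is the
standard sign-test interval, nothing software-specific):

import numpy as np
from scipy.stats import binom

def median_ci(data, coverage=0.95):
    """Distribution-free CI for the median from order statistics:
    [X_(k), X_(n-k+1)] covers the median with probability
    P(k <= B <= n-k), B ~ Binomial(n, 1/2)."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    for k in range(n // 2, 0, -1):   # largest k gives the shortest valid interval
        if binom.cdf(n - k, n, 0.5) - binom.cdf(k - 1, n, 0.5) >= coverage:
            return x[k - 1], x[n - k], k
    return x[0], x[-1], 1

rng = np.random.default_rng(7)
sample = rng.exponential(size=25)
lo, hi, k = median_ci(sample)
print("95%+ CI for the median:", (round(lo, 3), round(hi, 3)),
      "from order statistics", k, "and", len(sample) - k + 1)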