- I have a comment on an offhand remark of Glen's, at the start of
his interesting posting -

On Tue, 07 Dec 1999 15:58:11 +1100, Glen Barnett

> Alex Yu wrote:
> > 
> > Disadvantages of non-parametric tests:
> > 
> > Losing precision: Edgington (1995) asserted that when more precise
> > measurements are available, it is unwise to degrade the precision by
> > transforming the measurements into ranked data.
> So this is an argument against rank-based nonparametric tests
> rather than nonparametric tests in general. In fact, I think
> you'll find Edgington highly supportive of randomization procedures,
> which are nonparametric.
 - In my vocabulary, these days, "nonparametric" starts out with data
being ranked, or otherwise placed into categories -- it is the
infinite parameters involved in that sort of non-reversible re-scoring
which earn the label, nonparametric.  (I am still trying to get my
definition to be complete and concise.)
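To illustrate the "non-reversible re-scoring" point, here is a minimal sketch (hypothetical data, standard library only) showing that ranking is a many-to-one transformation: two samples with very different spacings collapse onto identical ranks, and the original metric cannot be recovered.

```python
def ranks(xs):
    # Rank each value 1..n by its position in sorted order (assumes no ties).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

# Evenly spaced data and data with one extreme value get the same ranks.
print(ranks([1.0, 2.0, 3.0]))    # [1, 2, 3]
print(ranks([1.0, 1.1, 100.0]))  # [1, 2, 3] -- the metric information is gone
```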

I know that when *nonparametric* and *distribution-free* were the
two alternatives to ANOVAs, either of the two labels was slapped onto
people's pet procedures fairly indiscriminately; and that lack of
discrimination seems to have widened to encompass *robust*, later
on.  Okay, I see that exact evaluation by randomization of a fixed
sample does not use a t or F distribution for its p-levels.  Okay, I
see that it is not ANOVA.  But, I'm sorry, I don't regard a test as
nonparametric which *does* preserve and use the original metric and
means.  Comparison of means is parametric, and that contrasts with
comparison of ranks.
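A minimal sketch of the kind of procedure at issue (hypothetical data, standard library only): an exact randomization test whose statistic is the difference of *means* in the original metric, with the p-level obtained by enumerating all relabellings rather than from a t distribution.

```python
import itertools
import statistics

# Hypothetical two small groups; nothing below is ranked or re-scored.
group_a = [3.1, 4.5, 2.8]
group_b = [5.9, 6.2, 4.8]

pooled = group_a + group_b
n_a = len(group_a)
observed = statistics.mean(group_a) - statistics.mean(group_b)

# Enumerate every way of splitting the pooled observations into two groups
# of the original sizes, and count how often the relabelled difference of
# means is at least as extreme as the observed one (two-sided).
count = 0
total = 0
for combo in itertools.combinations(range(len(pooled)), n_a):
    total += 1
    a = [pooled[i] for i in combo]
    b = [pooled[i] for i in range(len(pooled)) if i not in combo]
    if abs(statistics.mean(a) - statistics.mean(b)) >= abs(observed):
        count += 1

p_value = count / total  # exact randomization p-level
print(p_value)           # prints 0.1: 2 of the 20 splits are as extreme
```

The point of the sketch is that the original measurements and their means are preserved throughout; only the reference distribution comes from the relabellings.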

Similarly, bootstrapping is a method of "robust variance estimation"
but it does not change the metric like a power transformation does, or
abandon the metric like a rank-order transformation does.  If it were
proper  terminology to say randomization is nonparametric, you would
probably want to say bootstrapping is nonparametric, too.  (I think
some people have done so; but it is not widespread.)
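In the same spirit, a minimal bootstrap sketch (hypothetical data, standard library only) estimating the standard error of a mean by resampling: every resampled observation keeps its original value, so the metric is neither changed nor abandoned.

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible
data = [3.1, 4.5, 2.8, 5.9, 6.2, 4.8]  # hypothetical sample

# Draw resamples of the same size, with replacement, from the original data,
# and take the spread of the resampled means as the variance estimate.
boot_means = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]
    boot_means.append(statistics.mean(resample))

se_boot = statistics.stdev(boot_means)  # bootstrap standard error of the mean
print(round(se_boot, 3))
```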
