[EMAIL PROTECTED] (Radford Neal) wrote:

>[ snip, baseball game; etc. ] 
>> In this context, all that matters is that there is a difference.  As
>> explained in many previous posts by myself and others, it is NOT
>> appropriate in this context to do a significance test, and ignore the
>> difference if you can't reject the null hypothesis of no difference in
>> the populations from which these people were drawn (whatever one might
>> think those populations are).

Rich Ulrich  <[EMAIL PROTECTED]> wrote:

>So far as I remember, you are the only person who imagined that
>procedure,  "do a test and ignore ... if you can't reject...."  Oh,
>maybe Jim, too.

None of you said it explicitly, because none of you gave a coherent
exposition of what should be done.  I had to infer a procedure that
would make sense of the argument that a significance test should have
been done.

NOW, however, you proceed to explicitly say exactly what you claim not
to be saying:

>I know that I was explicit in saying otherwise.  I said something
>like,  If your data aren't good enough so you can quantify this mean
>difference with a t-test, you probably should not be offering means as
>evidence. 

In other words, if you can't reject the null hypothesis that the
performance of male and female faculty is the same in whatever
population the actual faculty were supposedly drawn from, then you
should ignore the difference in performance seen with the actual
faculty, even though this difference would, by standard statistical
methodology explained in any elementary statistics book, result in a
higher standard error for the estimate of the gender effect, possibly
undermining the claim of discrimination.
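
As a minimal sketch of that textbook mechanism, here is a small
simulation (all numbers simulated and hypothetical, not the actual
faculty data): salary depends only on performance, yet a correlation
between gender and performance inflates the standard error of the
estimated gender effect.

  import numpy as np

  rng = np.random.default_rng(0)
  n = 200

  def gender_se(shift):
      # 'shift' moves one group's mean performance, inducing a
      # correlation between gender and performance; salary depends
      # only on performance, so there is no true gender effect
      gender = rng.integers(0, 2, n)
      perf = rng.normal(0.0, 1.0, n) + shift * gender
      salary = perf + rng.normal(0.0, 1.0, n)
      X = np.column_stack([np.ones(n), gender, perf])
      beta = np.linalg.lstsq(X, salary, rcond=None)[0]
      resid = salary - X @ beta
      s2 = resid @ resid / (n - X.shape[1])
      cov = s2 * np.linalg.inv(X.T @ X)
      return np.sqrt(cov[1, 1])   # SE of the gender coefficient

  print(gender_se(0.0))   # predictors uncorrelated
  print(gender_se(2.0))   # correlated: SE roughly 40% larger

The bigger the performance difference between the groups, the harder
it is to separate a gender effect from a performance effect.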

> And,  Many of us statisticians find tests to be useful,
>even when they are not wholly valid.  

It is NOT standard statistical methodology to test the significance of
correlations between predictors in a regression setting, and to then
pretend that these correlations are zero if you can't reject the null.
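
The textbook quantity behind this is the variance inflation factor:
with two predictors whose sample correlation is r, the sampling
variance of each coefficient estimate is multiplied by

  VIF = 1 / (1 - r^2)

relative to the uncorrelated case.  Even a modest r = 0.5, which a
small faculty sample may well fail to declare significant, inflates
the standard error by sqrt(1/0.75), about 1.15, and it does so
whether or not a test rejects r = 0.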

>As evidence, I pointed to the
>(over-) acceptance of observational studies in epidemiology.  I think
>I made those arguments at least two or three times, each.

Whatever sins of epidemiologists you may have in mind here are
irrelevant to the question at hand.

>As it turns out, the big gap in the "scores" makes those averages
>dubious, even though a t-test *is*  nominally significant.  
>(That's so when computed on X or on log(X), but not so on 1/X.)

So the bigger the performance differences, the less attention should
be paid to them?  Strange...
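
For concreteness, here is a toy example of that phenomenon (the
numbers are invented for illustration, not taken from the report):

  import numpy as np
  from scipy import stats

  x_f = np.array([1.0, 1.1, 1.3, 1.5, 1.6, 1.8, 1.9, 2.0])  # tightly clustered
  x_m = np.array([0.7, 0.8, 6.0, 7.0, 7.5, 8.0, 8.5, 9.0])  # big gap in scores

  for name, f in [("X", lambda v: v),
                  ("log(X)", np.log),
                  ("1/X", lambda v: 1.0 / v)]:
      t, p = stats.ttest_ind(f(x_m), f(x_f))   # pooled two-sample t-test
      print(f"{name:7s} t = {t:6.2f}  p = {p:.3f}")

With these made-up scores the test is nominally significant on X and
on log(X) but not on 1/X, because the reciprocal compresses the high
scores and stretches the low ones.  That the verdict flips with the
transformation says something about the fragility of the test, not
about the difference itself.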

>And then, as I later discovered, the arguments and the 
>style of the original report make Jim's criticism tenuous.  
>Even if you were to illustrate how all the males have 
>out-achieved all the females, by one criterion or by several 
>criteria, you would not discredit the decision of the dean ---
>Wasn't the report talking more about 
>'what all our faculty deserve'  instead of what's earned by
>individuals?  You guys have skipped that half.

Well of course.  If the dean thinks that all faculty should be treated
equally, regardless of performance, then one certainly cannot argue
against this position on any sort of statistical grounds.  But why
would such a decision be characterized as having anything to do with
gender discrimination, if it wasn't based on the belief that gender
discrimination exists?

   Radford Neal

----------------------------------------------------------------------------
Radford M. Neal                                       [EMAIL PROTECTED]
Dept. of Statistics and Dept. of Computer Science [EMAIL PROTECTED]
University of Toronto                     http://www.cs.utoronto.ca/~radford
----------------------------------------------------------------------------

