I wrote, suggesting that for those with a little learning the Z test is a
dangerous thing,

and Rich Ulrich responded:

> Mainly, I disagree.
> 
> I had read 3 or 4 statistics books and used several stat programs
> before I enrolled in graduate courses.  One of the *big surprises*  to
> me was to learn that some statistics were approximations,
> through-and-through, whereas others might be 'exact' in some sense.
> 
> Using z as the large sample test, in place of t, is approximate.
> Using z as the test-statistic on a dichotomy or ranks is exact, since
> the variances are known from the marginal Ns.
> Using z for *huge* N is a desirable simplification, now and then.
> 
> Is the 1-df chi-squared equally worthless, in your opinion?
> A lot of those exist, fundamentally, as the square of a z that
> *could*  be used instead (for example, McNemar's test).

        Sorry, Rich - you may have walked in on the middle 
of this one.  I was (and this made sense in the context) referring
solely to the Z-test-for-N>30 (which is an _unnecessary_ approximation
in this century) and the Z-test-for-known-sigma (which is a delusion in
most disciplines). I am of course not suggesting that we should use the
t test on dichotomous data - I'm hurt that you could think such a thing
<grin>.
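        For concreteness, here is a quick sketch of what the N>30
shortcut buys you (Python with scipy assumed; the sample is invented
purely for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.3, scale=1.0, size=35)   # hypothetical sample, N = 35

    n = len(x)
    t_stat = (x.mean() - 0.0) / (x.std(ddof=1) / np.sqrt(n))

    p_t = 2 * stats.t.sf(abs(t_stat), df=n - 1)   # exact under normality
    p_z = 2 * stats.norm.sf(abs(t_stat))          # the "N > 30, use z" shortcut

    print(f"t = {t_stat:.3f}  t p-value = {p_t:.4f}  z approx = {p_z:.4f}")

The package hands you the t p-value just as cheaply as the normal one, so
the approximation saves nothing.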
        By the way, except for certain abstruse theoretical purposes that most
of our students will never get within a country mile of, I cannot agree
that "if N is huge then use z otherwise use t" is a simplification. "Use
t" is simpler, if you are using appropriate technology - like, say, a
good set of tables.
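        And with software rather than tables, "use t" really is one
call; a sketch, again assuming Python with scipy and invented numbers:

    from scipy import stats

    x = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7]   # hypothetical measurements
    t_stat, p = stats.ttest_1samp(x, popmean=5.0)  # exact t test, any N
    print(f"t = {t_stat:.3f}, p = {p:.4f}")

No case split on N anywhere in sight.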

        -Robert Dawson

