On 19 Apr 2001, Paul Swank wrote:

> I agree. I normally start inference by using the binomial and then
> the normal approximation to the binomial for large n. It might be
> best to begin all graduate students with nonparametric statistics
> followed by linear models. Then we could get them to where they can do
> something interesting without taking four courses.
> 
> 
> 
> At 01:28 PM 4/19/01 -0500, you wrote:
> 
>>Why not introduce hypothesis testing in a binomial setting where there are
>>no nuisance parameters and p-values, power, alpha, beta,... may be obtained
>>easily and exactly from the Binomial distribution?
>>
>>Jon Cryer


I concur with Jon and Paul.  (I'll refrain from making a crack about
Ringo.)  When I was an undergrad, the approach was z-test, t-test, ANOVA,
simple linear regression, and if there was time, a bit on tests for
categorical data (chi-squares) and rank-based tests.  I got great marks,
but came away with very little understanding of the logic of hypothesis
testing.
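
Jon's binomial suggestion is also easy to make concrete.  With no
nuisance parameters, an exact p-value is just a tail sum of binomial
probabilities that students can compute for themselves.  Here is a
minimal sketch in plain Python--my own throwaway illustration, not
anything from a course or a text, and the numbers are invented:

    from math import factorial

    def binom_pmf(k, n, p):
        # P(X = k) for X ~ Binomial(n, p)
        nck = factorial(n) // (factorial(k) * factorial(n - k))
        return nck * p**k * (1 - p)**(n - k)

    n, p0 = 20, 0.5      # 20 trials, null hypothesis p = 0.5
    observed = 15        # successes actually observed

    # Exact one-sided p-value: P(X >= 15) when the null is true
    p_value = sum(binom_pmf(k, n, p0) for k in range(observed, n + 1))
    print(p_value)       # about 0.0207

No approximations and no table look-up; the whole calculation is in
plain view.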

The stats class in 1st-year grad school (psychology again) was different,
and it was there that I first started to feel that I was achieving some
understanding.  The first major chunk of the course was all about simple
rules of probability, and how we could use them to generate discrete
distributions, like the binomial.  Then, with a good understanding of
where the numbers came from, and with some grasp of conditional
probability and so on, we went on to hypothesis testing in that context.

One thing I found particularly beneficial was that we started with the
case where the sampling distribution could be specified under both the
null and alternative hypotheses.  That allowed us to calculate the
likelihood ratio, and to use a decision rule that minimized the overall
probability of error.  We could also talk about alpha, beta, and power
in this simple context.  Then we moved on to the more common case, where
the distribution cannot be specified under the alternative hypothesis,
and arrived at a different decision rule--one that controls alpha at a
chosen level.
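
To give a flavour of that first setup, here is a toy version in plain
Python (the numbers--20 trials, p = .5 under the null and p = .8 under
the alternative--are my own invention, not the course's):

    from math import factorial

    def binom_pmf(k, n, p):
        # P(X = k) for X ~ Binomial(n, p)
        nck = factorial(n) // (factorial(k) * factorial(n - k))
        return nck * p**k * (1 - p)**(n - k)

    n = 20
    p_null, p_alt = 0.5, 0.8   # both hypotheses fully specified

    # Reject H0 wherever the likelihood ratio f1(x)/f0(x) exceeds 1;
    # with equal weight on the two hypotheses, that rule minimizes the
    # overall probability of error.
    reject = [x for x in range(n + 1)
              if binom_pmf(x, n, p_alt) > binom_pmf(x, n, p_null)]

    alpha = sum(binom_pmf(x, n, p_null) for x in reject)  # about .058
    power = sum(binom_pmf(x, n, p_alt) for x in reject)   # about .913
    beta = 1 - power                                       # about .087
    print(min(reject), alpha, beta, power)   # reject when x >= 14

The rejection region, alpha, beta, and power all fall out of the two
fully specified distributions; nothing has to be taken on faith.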

The other thing I found useful was that all of this was done without
reference to any of the standard statistical tests--although when we did
get to our first test with a proper name, the sign test, it turned out
to be exactly what we had been doing all along.  We followed that with
the Wilcoxon signed-ranks test and the Mann-Whitney U before ever
getting to z- and t-tests.  By the time we got to those, we already had
a good grasp of the logic: calculate a statistic, and see where it lies
in its sampling distribution under a true null hypothesis.
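
That logic in miniature, using the sign test (plain Python again, with
made-up paired data, just to show how little machinery is involved):

    from math import factorial

    def binom_pmf(k, n, p):
        # P(X = k) for X ~ Binomial(n, p)
        nck = factorial(n) // (factorial(k) * factorial(n - k))
        return nck * p**k * (1 - p)**(n - k)

    # Hypothetical paired differences (post - pre); zeros would be dropped
    diffs = [1.2, 0.4, -0.3, 2.1, 0.8, 1.5, -0.6, 0.9, 1.1, 0.2]
    n = len([d for d in diffs if d != 0])
    n_plus = len([d for d in diffs if d > 0])  # statistic: 8 of 10 positive

    # Under a true null each sign is a coin flip, so n_plus ~ Binomial(n, .5).
    # Two-sided p-value: double the smaller tail.
    upper = sum(binom_pmf(k, n, 0.5) for k in range(n_plus, n + 1))
    lower = sum(binom_pmf(k, n, 0.5) for k in range(0, n_plus + 1))
    print(min(1.0, 2 * min(upper, lower)))     # about 0.109

The statistic is just a count, and its reference distribution is the
same binomial the students have already built up from the basic rules
of probability.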

An undergrad text that takes a similar approach is Understanding
Statistics in the Behavioral Sciences, by Robert R. Pagano.  Not only is
the ordering of topics good, but the explanations
are generally quite clear.  I would certainly use Pagano's book again (and
supplement certain sections with my own notes) for a psych-stats class.

-- 
Bruce Weaver
New e-mail: [EMAIL PROTECTED] (formerly [EMAIL PROTECTED]) 
Homepage:   http://www.angelfire.com/wv/bwhomedir/



