> I just received a review which stated that statistics should not be taught
> by the use of rules. For example, a rule might be: "if you wish to infer
> about the central tendency of a non-normal but continuous population using
> a small random sample, then use nonparametric methods."
>
> I see why rules might not be appropriate in mathematical statistics
> classes where everything is developed by theory and proof. However I teach
> statistical methods classes to business students.
>
> It is my belief that if faculty do not give rules in methods classes, then
> students will infer the rules from the presentation. These
> student-developed rules may or may not be valid.

    Exactly...  An example: we've been using Devore & Peck, which
unfortunately introduces the Z test for the mean, supposedly for pedagogical
reasons but without anywhere near a strong enough indication of this. A lot
of students infer a rule "if n>30 use z rather than t" despite my repeated
statements that Z is NEVER a better test for the mean under circumstances
they are likely to encounter [in psychology]. Of course, if they are cutting
lectures that day they won't hear the warning...
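
    For the skeptics, a quick Python sketch (mine, not Devore & Peck's;
ordinary normal samples with sigma unknown) shows what "n>30, use z"
actually buys you. With sigma estimated from the data, the t interval
attains its nominal 95% coverage exactly, even at n = 31, while the z
interval undercovers:

    import numpy as np
    from scipy import stats

    # Coverage of nominal-95% z and t intervals for a normal mean,
    # sigma unknown and estimated by s; n = 31, i.e. just past "n > 30".
    rng = np.random.default_rng(0)
    n, reps, mu = 31, 100_000, 0.0
    z_crit = stats.norm.ppf(0.975)         # 1.96
    t_crit = stats.t.ppf(0.975, df=n - 1)  # about 2.04

    x = rng.normal(mu, 1.0, size=(reps, n))
    xbar = x.mean(axis=1)
    s = x.std(axis=1, ddof=1)
    err = np.abs(xbar - mu) * np.sqrt(n) / s   # |t statistic| under H0
    print("z coverage:", np.mean(err <= z_crit))   # about 0.94
    print("t coverage:", np.mean(err <= t_crit))   # about 0.95

    The gap shrinks as n grows, but it never changes sign: z is never the
better interval.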

    First off, users of statistics can't *avoid* rules unless we select
methods stochastically, or insist that our intuition is more reliable than
our reason. (*That* way madness lies...)  If we predictably do the same
thing in equivalent situations, then we are following rules.  We may use
rules that are too complicated to teach to first-year students, or that
involve trying several things and comparing outcomes - but they are still rules.
The question is, what rules do we teach, and how do we teach them?

      Under ideal circumstances students in 'methods' courses can *verify*
rules, or be led to "reinvent" them.  The same goes, rather more so, for
'theory' courses. But when you look at the number of damfool^H^H^H^H^H^H^H
less-than-optimal things that even respectable statisticians have believed
in the past ["F before T" - test the variances first, then pick your t
test - anybody?], it would be less than responsible to encourage each new
generation of researchers to shoot themselves in the foot with the same old
blunderbuss, and go off into the self-styled real world doing so over and
over again till they retire.

    The reasons for some rules are deep.  If we think one method is
preferable to another because we (or somebody whose paper we once read a
summary of) determined by a complicated simulation that the first method is
more robust under certain breaches of homoscedasticity, what do we do with a
class who can't do simulations & can't spell homoskedasticity?
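
    (A minimal sketch of the sort of simulation I mean - Type I error of
the pooled two-sample t versus Welch's t when the smaller group has the
larger variance; the sample sizes and variance ratio are illustrative
choices of mine, not anybody's published study:)

    import numpy as np
    from scipy import stats

    # Equal means, unequal variances: how often does each test reject a
    # true null at the nominal 5% level?  Pooled t assumes equal variances.
    rng = np.random.default_rng(1)
    n1, n2, reps = 10, 40, 20_000
    rej_pooled = rej_welch = 0
    for _ in range(reps):
        a = rng.normal(0.0, 3.0, n1)   # small group, large variance
        b = rng.normal(0.0, 1.0, n2)   # large group, small variance
        rej_pooled += stats.ttest_ind(a, b, equal_var=True).pvalue < 0.05
        rej_welch  += stats.ttest_ind(a, b, equal_var=False).pvalue < 0.05
    print("pooled t rejects true H0:", rej_pooled / reps)  # well above 0.05
    print("Welch  t rejects true H0:", rej_welch / reps)   # close to 0.05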

    Do we assume that they don't *deserve* to know the preferred method?
("We call it Student's T because that's the only one we let students use;
the others are reserved for faculty only.")

    Do we assume that they can't be *expected* to know the preferred method?
("Awww, cut the kid some slack, she's only a <insert your favorite
discipline> major.")

    Do we assume that there *isn't* a right method? (Editor to referee:
"Yes, I know he did a Student's T test to compare two proportions, but it
is the policy of this journal that everybody has their own special and
uniquely valid way of expressing their feelings.")
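
    (For the record - my sketch, not the referee's report - the test the
hypothetical author should have used, the pooled two-proportion z test,
takes half a dozen lines; the counts are made up for illustration:)

    import math
    from scipy import stats

    # Two-sided z test of H0: p1 == p2, using the pooled proportion.
    def two_proportion_z(x1, n1, x2, n2):
        p1, p2 = x1 / n1, x2 / n2
        p_pool = (x1 + x2) / (n1 + n2)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return z, 2 * stats.norm.sf(abs(z))  # two-sided p-value

    z, p = two_proportion_z(45, 100, 30, 100)  # e.g. 45/100 vs 30/100
    print(f"z = {z:.3f}, p = {p:.4f}")         # z = 2.191, p = 0.0285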

    Or do we teach the best techniques we can within the limitations of the
course?

    -Robert Dawson
