Stephen Black wrote:
> I've never taught statistics, another thing I'm thankful for
> (see my previous comment on pregnancy).
>
> But there are three specific, widely neglected topics that I wish
> people who do teach statistics would give some attention to.
> Especially when their treatment supports my own views on these
> topics, as follows:
>
> 1) Planned vs post-hoc "shotgun" approaches to pairwise
> comparisons.
>
> We should be encouraging students to consider beforehand
> the set of comparisons which are necessary and sufficient to
> extract all needed information from an experiment, and then to
> test only those comparisons (the planned as opposed to the
> post-hoc "shotgun" approach).
Oh gosh - where to begin? I don't think these topics are totally neglected.
Rather than trust my own words, let me quote from an old paper I have in my
files. It is Daryl Bem "Writing the Empirical Journal Article in Psychology:
A Checklist of Accumulated Wisdom" (Practicum in article writing, Spring
1976, Stanford University). He writes:
"J. Analyze your results. Simple enough, but most beginners do not really
analyze them. Look at them from every angle. Yes, go on a fishing
expedition. Try different combinations, different classifications, try
dropping some subjects you don't like, look only at subjects you ran
personally, etc. Immoral? Nonsense! The rules of inferential statistics,
those guardians against Type I errors, tell you what you can conclude in the
final written product. But you spent time, money, blood, sweat, and tears to
gather those data; they are yours to play with, to live with, to take apart
and put back together again. Learn from them in any way you can. DO NOT
CONFUSE THE CONTEXT OF DISCOVERY - FOR WHICH THERE ARE NO RULES - WITH THE
CONTEXT OF JUSTIFICATION - which is what all those rules you have learned
for 23 years have been about. Anybody with your IQ can learn to be a
devastating critic, an obsessive, analytic perfectionist about methodology
and statistical inference. And we, the faculty, know how to teach those
skills too -- with a vengeance. But how to smell (or even have) a good idea
about human behavior? Alas, only heuristics. One of these: Live and play
with your data!" [EMPHASIS MINE, not Bem's].
I agree that focused comparisons are better (for the planning of the study
and the final product). In fact, Rosenthal, Rosnow and Rubin have just
published a new book on focused and planned comparisons (Cambridge U Press,
2000 - I forgot the title even though I'm reading it). Planned comparisons
force you to _think_ about what you are doing. They clarify your hypotheses.
You have a greater chance of getting significant results if you tailor your
statistical procedures to your situation. Where I disagree with Stephen is the
implication that students should do _only_ those tests they planned. After
the "proper" tests are done, play with the data! Don't let the possibility
of a Type I error stifle your creative juices. The best guardian against
error is replication. If the probability of incorrectly rejecting the null is
1/10, the probability of _two_ independent studies both rejecting it
incorrectly is (1/10)^2 = 1/100. I agree we should teach people how to do
focused and planned comparisons, but this should be the beginning of data
analysis, not the end.
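The two probabilities in play here can be checked by simulation. Below is a
minimal Monte Carlo sketch (not from the original post; alpha = 0.10 and
k = 10 comparisons are illustrative numbers I chose) showing both the
replication point above and the "shotgun" worry from Stephen's point 1:

```python
# Monte Carlo sketch of two probability claims in this thread.
# ALPHA and K are illustrative assumptions, not figures from the post.
import random

random.seed(42)
ALPHA = 0.10      # per-test false-positive (Type I) rate under a true null
TRIALS = 200_000  # number of simulated repetitions

# 1) Replication as a guardian: two independent studies must BOTH falsely
#    reject the null, which happens with probability ALPHA**2 = 0.01.
both = 0
for _ in range(TRIALS):
    study1 = random.random() < ALPHA   # study 1 falsely rejects
    study2 = random.random() < ALPHA   # study 2 falsely rejects
    if study1 and study2:
        both += 1
print(f"both studies wrong: {both / TRIALS:.4f} (theory {ALPHA**2:.4f})")

# 2) The shotgun worry: run K unplanned tests on null data and the chance
#    of at least ONE false positive inflates to 1 - (1 - ALPHA)**K.
K = 10
at_least_one = 0
for _ in range(TRIALS):
    if any(random.random() < ALPHA for _ in range(K)):
        at_least_one += 1
print(f"any of {K} tests wrong: {at_least_one / TRIALS:.4f} "
      f"(theory {1 - (1 - ALPHA)**K:.4f})")
```

With alpha = 0.10, a single shotgun pass of ten tests yields a false positive
about 65% of the time, while requiring an independent replication drives the
joint error rate down to 1%. That asymmetry is the whole argument in one
picture.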
(I haven't yet got to points 2 and 3 .. maybe later .. have to run).
--
---------------------------------------------------------------
John W. Kulig [EMAIL PROTECTED]
Department of Psychology http://oz.plymouth.edu/~kulig
Plymouth State College tel: (603) 535-2468
Plymouth NH USA 03264 fax: (603) 535-2412
---------------------------------------------------------------
"What a man often sees he does not wonder at, although he knows
not why it happens; if something occurs which he has not seen before,
he thinks it is a marvel" - Cicero.