I think that the distinction between planned and unplanned
comparisons is silly.  What is to stop one from planning to compare
each mean with each other mean, and each combination of means with the
remaining means, and so on?  I don't think that the Type I boogeyman
under the bed gives a damn whether or not you planned your comparisons.

        That said, I think that the use of procedures that reduce the
per-comparison alpha in order to cap the familywise error rate at an
unreasonably low value (like .05) has caused more harm than good for
research in psychology.  These procedures can drastically increase the
probability of a Type II error.  The marginal probability of a Type II
error is already enormously greater than the marginal probability of a
Type I error (which many argue is zero, at least with continuous
variables).  Does it really make sense to increase the probability of
the more likely error to guard against the error that is highly unlikely
to start with?
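[Editor's note: the trade-off described above can be sketched numerically. The short simulation below is an editorial illustration, not part of the original post; the number of comparisons (k = 10), effect size (d = 0.5), and group size (n = 30) are all assumed values chosen only to show how a Bonferroni-adjusted alpha lowers power, i.e., raises the Type II error rate.]

```python
from statistics import NormalDist

def power(alpha, effect_d, n_per_group):
    """Approximate power of a two-sided two-sample z-test when the
    true standardized effect is effect_d with n_per_group per group."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # Noncentrality of the test statistic under the alternative.
    noncentrality = effect_d * (n_per_group / 2) ** 0.5
    # For a positive effect, rejections in the lower tail are negligible.
    return 1 - NormalDist().cdf(z_crit - noncentrality)

k = 10   # number of planned comparisons (assumed for illustration)
d = 0.5  # true standardized effect (assumed)
n = 30   # subjects per group (assumed)

unadjusted = power(0.05, d, n)
bonferroni = power(0.05 / k, d, n)
print(f"power at per-comparison alpha .05 : {unadjusted:.2f}")
print(f"power at Bonferroni alpha {0.05/k:.3f}: {bonferroni:.2f}")
```

With these assumed numbers, the Bonferroni adjustment roughly halves the power, so the Type II error rate climbs accordingly, which is exactly the cost being weighed against a Type I risk that may already be negligible.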

Cheers,
 
Karl W.

-----Original Message-----
From: Mike Palij [mailto:[email protected]] 
Sent: Friday, January 09, 2009 10:54 AM
To: Teaching in the Psychological Sciences (TIPS)
Cc: Mike Palij
Subject: RE: [tips] ANOVA question (was cross-cultural)

On Thu, 08 Jan 2009 20:04:23 -0800, Karl Wuensch wrote:
>        I'm even less conservative than Stephen.  I would not apply the
>Bonferroni adjustment.  After all, these are PLANNED comparisons, eh?

This is a curious point:

Why should the state of knowledge (i.e., being able to predict the
size of a difference, the direction of a difference, etc.) affect the
probability of making an error of inference?

---
To make changes to your subscription contact:

Bill Southerly ([email protected])
