Hi

Thanks to Karl for making this available ... now for a somewhat alternative 
perspective from a non-statistician.

1.  I start with the following quote from Ryan, which concerns the 
distinction between a priori and a posteriori comparisons.  He appears to 
believe the distinction is a false one.

"There is no justification whatever for the notion that planning allows
us to use uncorrected t tests. This notion is perpetrated in a number of
textbooks but never given any logical justification. It is simply stated
that it is "self evident." It is a dangerous notion, since those
who want significance at all costs can always claim they planned their
tests in advance. Whether they did or not is actually irrelevant."

But is the distinction really without a rationale?  Using a quasi- (pseudo-?) 
Bayesian analogy, would not a planned comparison based on previous findings or 
well-founded theory be akin to setting a higher prior probability, and would 
not that mean that you need less evidence from the present study to conclude 
in favor of Ha?  That is, a more liberal test is justified.  Or, to use a 
perceptual analogy: if you have reason to expect the presence of some object, 
you require less bottom-up perceptual input to detect it.
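To make the prior-odds analogy concrete, here is a minimal sketch (all numbers are hypothetical, chosen only for illustration, and the function name is mine, not from any statistics library).  In a Bayesian framing, posterior odds = prior odds x Bayes factor, so a comparison planned on the basis of prior findings starts with better odds for Ha and needs a smaller Bayes factor, i.e., less evidence from the present data, to reach the same posterior conviction:

```python
def bayes_factor_needed(prior_odds, target_posterior_odds):
    """Bayes factor the data must supply so that
    prior_odds * BF = target_posterior_odds."""
    return target_posterior_odds / prior_odds

# Target: posterior odds of 19:1 for Ha (posterior probability 0.95).
target = 19.0

# Unplanned comparison: no prior support, even (1:1) prior odds.
# Planned comparison: hypothetical 4:1 prior odds for Ha from earlier work.
print(bayes_factor_needed(1.0, target))  # 19.0  -> strong evidence required
print(bayes_factor_needed(4.0, target))  # 4.75  -> much less evidence required
```

The point is only the direction of the effect: the stronger the a priori case, the less the present study has to do, which is one way to rationalize a more liberal test for planned comparisons.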

2. Continuing along this line of thinking, the decision about which multiple 
comparison procedure to use is essentially a decision about how strong the 
evidence needs to be before you will conclude that a difference (probably) 
exists.  But in practice this is a far less precise sort of judgment than the 
perhaps idealized concerns of mathematical statisticians, simulations, and the 
like would suggest.  I just do not see that our judgment about how 
conservative to be is so precise that we are likely to be ill served by 
requiring the omnibus F to be significant first, even though it is not, 
strictly speaking, required -- assuming of course that we want to be 
conservative (e.g., when we really have no prior rationale for a more 
sensitive, liberal test, or when the cost of a Type I error is high).
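As a back-of-envelope illustration of why the conservatism question matters at all, here is the familywise Type I error rate when each of k pairwise comparisons is run uncorrected at alpha = .05.  The formula 1 - (1 - alpha)^k is exact only for independent tests; pairwise t tests on the same data are correlated, so treat these as rough figures showing the scale of the problem rather than exact rates:

```python
def familywise_alpha(alpha, k):
    """Probability of at least one Type I error across k tests,
    assuming (unrealistically) that the tests are independent."""
    return 1 - (1 - alpha) ** k

for groups in (3, 4, 5):
    k = groups * (groups - 1) // 2  # number of pairwise comparisons
    print(groups, "groups,", k, "comparisons:",
          round(familywise_alpha(0.05, k), 3))
```

With 5 groups (10 comparisons) the nominal .05 balloons to roughly .40, which is why the choice among LSD, HSD, Bonferroni, etc. is a judgment about where on that continuum you want to sit.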

Take care
Jim

James M. Clark
Professor of Psychology
204-786-9757
204-774-4134 Fax
[EMAIL PROTECTED]

>>> "Wuensch, Karl L" <[EMAIL PROTECTED]> 03-Apr-07 11:18:47 PM >>>
Hi Rick,
 
    You have motivated me to create a page with comments on this issue
from a number of well-respected statisticians, including T. A. Ryan.
While all psychologists (and others) who conduct pairwise contrasts
should read this, I fear that only those following this thread will --
and they are doubtlessly a rather unusual and small group.  Oh well.
Here is the url:
http://core.ecu.edu/psyc/wuenschk/StatHelp/Pairwise.htm .  It is too
late in the day to make the page pretty, but I hope you find the
discussion interesting.  If you wish to cite an authority on this, cite
Ryan's 1959 Psych. Bull. article.  Yikes, psychologists should have
known about this since 1959, but most are still in the dark.
 
    It really is a shame that most people who write introductory
statistics texts for psychology don't know much about the topic.  I
wonder why that is.
 
Cheers,
 
Karl W.

________________________________

From: Rick Froman [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 03, 2007 10:18 PM
To: Teaching in the Psychological Sciences (TIPS)
Subject: RE: [tips] ANOVA, HSD, and LSD


Not that I doubt you, Karl (you seem very educated on statistical issues),
but all the textbooks I have used talk about HSD as a post hoc test that
is only appropriate to use after finding significance with an ANOVA. Do
you have something I could reference to support this? 


---
To make changes to your subscription go to:
http://acsun.frostburg.edu/cgi-bin/lyris.pl?enter=tips&text_mode=0&lang=english 


