Thank you. It was the Simes procedure I was thinking of! I learned it in
graduate school; of course that was well over 20 years ago now, so it has
probably been supplanted. Thank you for the memory jog. It is described in
great detail, with proper citations, in the text by Rand Wilcox (a text that
only a statistician who routinely speaks in Greek symbols would love).

Annette

Annette Kujawski Taylor, Ph.D.
Professor of Psychology
University of San Diego
5998 Alcala Park
San Diego, CA 92110
619-260-4006
[email protected]

---- Original message ----
>Date: Thu, 8 Jan 2009 13:45:44 +0100
>From: "Rainer Scheuchenpflug" <[email protected]>
>Subject: Re: [tips] Cross-cultural scientific screw-up, alpha-correction and tip-of-the-tongue
>To: "Teaching in the Psychological Sciences (TIPS)" <[email protected]>
>
>Hi Annette,
>
>the name of the procedure you described is "Holm's method", sometimes also
>referred to as "Bonferroni-Holm"-correction.
>
>There is also a sequential procedure that starts with the largest p-values
>and tests whether comparisons are nonsignificant, called the Simes-Hochberg
>method, which is even less popular than Bonferroni-Holm.
>
>Regards, 
>Rainer
>
>
>Dr. Rainer Scheuchenpflug
>Lehrstuhl fuer Psychologie III
>Roentgenring 11
>97070 Wuerzburg
>Tel:   0931-312185
>Fax:   0931-312616
>Mail:  [email protected]
>
>
>
>
>---------------------------------------------------------------------------
>Subject: Re: Cross-cultural scientific screw-up, big-time
>From: <[email protected]>
>Date: Wed,  7 Jan 2009 15:11:09 -0800 (PST)
>X-Message-Number: 31
>
>There are various Bonferroni procedures you can use if you Google them. In
>one such procedure (darn, the name escapes me!) you simply run the post-hoc
>tests you want as t-tests and then rank-order them by p-value. You then
>divide alpha by the total number of comparisons and multiply by the rank
>order to get the critical p. As soon as an obtained p-value exceeds its
>critical p you stop, and nothing else is considered significant.
>
>For example, let's say you are interested in three specific comparisons, you
>do the t-tests and get the following p-values: .010, .040, .045.
>
>If .05 is normally the accepted critical p-value and it is the one you want
>to use, then the first critical value is (.05/3)*1 = .017. OK, .010 is less
>than that, so the first comparison is considered significant. Next you'd go
>to (.05/3)*2 = .033, and since the obtained .040 exceeds it, you stop:
>that comparison and all subsequent ones are nonsignificant. So you don't
>even need the last comparison, which would have had a critical value of
>(.05/3)*3 = .05. By controlling for the increased probability of
>incorrectly finding a significant difference where it is not likely to
>exist, you have now rejected 2 of the 3 comparisons that you might
>otherwise have accepted.
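The sequential procedure just described can be sketched in Python (a minimal illustration of my own using the three p-values above; the function name is mine, not from the post):

```python
# The step-wise procedure described above: rank p-values ascending, compare
# the i-th (1-based) with (alpha / m) * i, and stop at the first failure;
# that comparison and everything after it are nonsignificant.

def sequential_simes(p_values, alpha=0.05):
    """Return (p, critical value, significant?) for each ranked p-value."""
    m = len(p_values)
    results, stopped = [], False
    for i, p in enumerate(sorted(p_values), start=1):
        crit = (alpha / m) * i
        if stopped or p > crit:
            stopped = True
            results.append((p, crit, False))
        else:
            results.append((p, crit, True))
    return results

# The worked example from the post: p = .010, .040, .045
for p, crit, sig in sequential_simes([0.010, 0.040, 0.045]):
    print(f"p = {p:.3f}  critical = {crit:.3f}  significant = {sig}")
```

Run on the example, only the first comparison (.010 against .017) comes out significant; .040 exceeds its critical value of .033, so it and the remaining comparison are declared nonsignificant, just as the walkthrough concludes.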
>
>There really is a name for this procedure, but I'm having an old-timer's
>moment... it will come to me eventually.
>
>Of course, all of this presumes you are wedded to the theoretical ideas that
>underlie traditional significance testing.
>
>Annette
>
>Annette Kujawski Taylor, Ph.D.
>Professor of Psychology
>University of San Diego
>5998 Alcala Park
>San Diego, CA 92110
>619-260-4006
>[email protected]
>
>
>
>---
>To make changes to your subscription contact:
>
>Bill Southerly ([email protected])

