As I am not a statistical mathematician, my question may appear
somewhat naive, but here goes...

Does anyone know of a procedure for adjusting the magnitude of a
Bonferroni correction (BC) for the average correlation among the
multiple measures on which one is performing the statistical tests?
Typically, when people use the BC, they simply base the correction on
the total number of statistical tests being done. This does not seem
to make sense if the measures are correlated.

For example, if two groups are being compared with 20 t-tests on
measures that are uncorrelated with one another, then it makes sense
to adjust alpha for the 20 independent tests being performed. At the
other extreme, if all 20 criterion variables are perfectly correlated
with one another, then the 20 tests will all come out exactly the same
(equivalent to a single t-test), and it would make no sense to adjust
alpha as if the joint probability of 20 independent events were being
calculated. Most analyses that I see published (including my own) fall
somewhere in between -- 20 t-tests are done on measures that are known
to be moderately correlated with one another. The author may start
with an overall test of significance (such as Hotelling's T-squared
from a MANOVA), and then follow a significant overall effect with the
20 t-tests. The author then typically does one of the following: (a)
calculates the p-values adjusted for all 20 tests, (b) does not adjust
the t-tests at all, considering the overall test of significance
sufficient, or (c) something in between, such as "alpha = .01 was
used, in view of the large number of tests performed".
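
To make these two extremes concrete, here is a rough simulation sketch
(Python with numpy/scipy; the sample sizes, correlation values, and
function names are my own illustrative assumptions, not anyone's
established procedure). It estimates the chance of at least one false
positive among 20 unadjusted t-tests when the measures share a common
correlation r:

import numpy as np
from scipy import stats

def fwer(r, n_tests=20, n_per_group=30, alpha=0.05, n_sims=2000, seed=0):
    """Simulated familywise error rate of n_tests unadjusted t-tests."""
    rng = np.random.default_rng(seed)
    # Equicorrelated covariance: 1 on the diagonal, r everywhere else.
    cov = np.full((n_tests, n_tests), r)
    np.fill_diagonal(cov, 1.0)
    false_alarms = 0
    for _ in range(n_sims):
        # Both groups come from the same population, so any rejection
        # is a false positive (the global null hypothesis is true).
        g1 = rng.multivariate_normal(np.zeros(n_tests), cov, size=n_per_group)
        g2 = rng.multivariate_normal(np.zeros(n_tests), cov, size=n_per_group)
        _, p = stats.ttest_ind(g1, g2, axis=0)
        false_alarms += (p < alpha).any()
    return false_alarms / n_sims

for r in (0.0, 0.5, 0.99):
    print(f"r = {r:4.2f}  FWER of 20 unadjusted t-tests ~ {fwer(r):.3f}")

With r = 0 the familywise error rate comes out near 1 - .95^20 = .64
(the situation Bonferroni is built for), while as r approaches 1 it
falls back toward the nominal .05; the moderately correlated case sits
in between, which is exactly the gap I am asking about.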

I know there are some interesting alternatives (such as Hochberg's
step-up procedure, which works through the ordered p-values and
adjusts alpha one test at a time) -- but these do not really address
my question either: the effect of intercorrelation among the measures.
I imagine doing something like a principal components analysis to
identify the "actual" number of underlying orthogonal factors present
in one's data, and then using that number for the BC. For example, if
6 principal components account for 95% of the variability in one's
measures, then do a BC as if 6 independent t-tests were being done,
since that is the best estimate of the number of independent criteria
actually present.
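
As a sketch of what I have in mind (Python again; the toy data, the
95% cutoff, and the function name are my own illustrative choices):

import numpy as np

def pca_bonferroni_alpha(data, var_explained=0.95, alpha=0.05):
    """Bonferroni-correct by the number of components reaching the cutoff."""
    # Eigenvalues of the correlation matrix, sorted largest first; each
    # one is the variance carried by one principal component.
    eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    cum = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(cum, var_explained)) + 1  # components needed
    return alpha / k, k

# Toy example: 100 subjects, 20 measures driven by 6 latent factors.
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 6))
data = latent @ rng.normal(size=(6, 20)) + 0.5 * rng.normal(size=(100, 20))
adj_alpha, k = pca_bonferroni_alpha(data)
print(f"{k} components reach 95% of the variance -> per-test alpha = {adj_alpha:.4f}")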

Perhaps a simpler way might be to adjust the BC for the average
correlation among all measures (related to Cronbach's alpha).
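
One crude way to write that down (my own ad hoc interpolation, not an
established correction): treat the m measures as m**(1 - rbar)
"effective" tests, so that rbar = 0 recovers the usual BC over m tests
and rbar = 1 collapses to a single test.

import numpy as np

def mean_corr_bonferroni_alpha(data, alpha=0.05):
    """Ad hoc BC divisor shrunk by the average inter-measure correlation."""
    r = np.corrcoef(data, rowvar=False)        # measure-by-measure correlations
    m = r.shape[0]
    r_bar = r[np.triu_indices(m, k=1)].mean()  # average over unique pairs
    m_eff = m ** (1.0 - r_bar)                 # m tests at r_bar=0, 1 at r_bar=1
    return alpha / m_eff, m_eff

(Negative average correlations would push m_eff above m, so one would
probably want to clip r_bar at zero; I have no principled justification
for this formula beyond its behavior at the two extremes.)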

The problem is, I have never seen any indication in the literature of
people trying to deal with this (including the literature searches
I've done on the topic). Is my logic flawed, or have I just not looked
in the right places? As one who enjoys statistics but is not an expert
in its mathematical core, I would be very interested in hearing
people's thoughts on this.

Thank you much.

--
*************************************************
John H. Poole, Ph.D.
Department of Psychiatry
University of California Medical Center
4150 Clement Street (116C)
San Francisco, CA 94061, USA

Phone: 650-281-8851   Fax: 415-750-6996
Email: [EMAIL PROTECTED]; [EMAIL PROTECTED]
*************************************************

