Hi,

The 'regular' Bonferroni is used when you wish to test an overall hypothesis by 
combining the results of testing individual hypotheses. So in the case of a 
single correlogram, the overall (null) hypothesis is 'no correlation at any 
distance scale', and the individual hypotheses are 'no correlation at distance 
scale 1,' 'no correlation at distance scale 2,' and so on.

In the case of your six correlograms, the answer simply depends on what your 
larger hypothesis 
is. If it is 'no correlation at ANY distance scale in ANY of the plots,' then 
it is conceptually 
appropriate to correct for 60 separate tests. In theory this will give you the 
same error rate as 
doing a Bonferroni correction for each correlogram separately, and then a 
correction for 
comparing those results across your six plots.
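The equivalence of the two approaches is just arithmetic, which a quick sketch makes concrete (the numbers assume your setup of 6 plots with 10 distance classes each, at alpha = 0.05):

```python
# Nested Bonferroni corrections compose multiplicatively:
# correcting once across all 60 subtests gives the same threshold
# as correcting within each correlogram, then across the plots.
alpha = 0.05
n_plots = 6
n_classes = 10

# One correction across all 60 subtests:
threshold_all = alpha / (n_plots * n_classes)

# Correct within each correlogram, then across the six plots:
threshold_nested = (alpha / n_classes) / n_plots

print(threshold_all, threshold_nested)  # both ~0.000833
```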

Note though that with a large number of subtests, you will run into 
difficulties. Your corrected threshold for significance will be very stringent 
(alpha / 60, or about 0.0008 if alpha = 0.05). It's possible, for example, that 
the sample size for each subtest (one distance class in one plot) may not be 
sufficient to achieve this level of significance, no matter how aggregated the 
canopy. Thus, you might never achieve a significant test.
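To see why a small per-class sample can make the corrected threshold unreachable, consider an exact one-sided permutation test as an illustration (this is my assumption, not a method from the original question): with n points, the smallest attainable p-value is 1/n!, so below some n even a perfect pattern cannot clear alpha / 60.

```python
# Sketch: minimum attainable p-value for an exact one-sided
# permutation test on n points is 1/n! (the single most extreme
# ordering). Illustrative assumption, not the poster's method.
from math import factorial

threshold = 0.05 / 60  # ~0.000833

for n in range(3, 10):
    min_p = 1 / factorial(n)
    print(n, min_p, min_p < threshold)
# The minimum p clears the corrected threshold only once n! > 1200,
# i.e. at least 7 points per distance class.
```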

That's why in general, the aggregation of MANY small sub-tests into a larger 
test is not a good 
idea. It's better to find a statistic that allows an overall test, or perhaps 
lump your data together.

So, what if you lump your five undamaged plots into one correlogram, and test 
just the ten 
distance classes (using the Bonferroni correction)? You might indeed find 
significance now, as your 
sample size per class will be five times bigger. For the single, damaged plot, 
you will have less 
data, and you may need to use uneven distance classes. Even so, you can test 
that one with a Bonferroni correction. All things being equal, you are less likely to find 
significance than with the 
other data because of your smaller sample size, but you still might. The bottom 
line is that the 
fact that you have more data for one test than the other reflects reality, and 
can't be corrected for 
except by collecting more. But all is not lost, because what's ECOLOGICALLY 
interesting is the 
effect size. Your undamaged plots may show statistically significant 
autocorrelation, but of very 
small magnitude, and therefore perhaps not interesting. Your damaged plot may 
have high 
autocorrelation values, but not be 'significant' merely because of low sample 
size. You should 
always report both the statistical tests and the effect sizes.
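A minimal sketch of that kind of report might look like the following; the Moran's I values and p-values are entirely made up for illustration, and only the Bonferroni logic comes from the discussion above:

```python
# Report both Bonferroni-corrected significance and effect size
# (here, a hypothetical Moran's I) for each distance class.
alpha = 0.05
n_classes = 10
threshold = alpha / n_classes  # per-correlogram correction

# Hypothetical (distance class, Moran's I, p-value) triples:
correlogram = [(1, 0.42, 0.001), (2, 0.30, 0.004),
               (3, 0.08, 0.120), (4, -0.05, 0.400)]

for d, morans_i, p in correlogram:
    flag = "significant" if p < threshold else "n.s."
    print(f"class {d}: I = {morans_i:+.2f} ({flag} at alpha = {threshold})")
```

The point is that the effect size (I) is reported for every class, whether or not the class passes the corrected test.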

Gareth Russell
