On Wed, 11 Jul 2012 23:09:57 -0700, Mike Williams wrote:
>The statistics used in neuroimaging are relatively simple and
>represent an eloquent application of the general linear model
>for most imaging. All of the images in fMRI are actually a map
>of significant and nonsignificant paired t-tests. The radio signal
>strength for the active and inactive conditions are compared.
>Voxels with significant differences are colored red. This results
>in a multiple comparison problem on steroids. However, the
>distribution of false positives across the voxel locations should be random.

Depends upon how one defines "random".  Consider:
(1) For all t-tests, is N1=N2?  I know that you say you're
using paired t-tests but what guarantee is there that there
is always a matching value? How is such missing data
treated?  Consider the following:
Comparing Means in the Paired Case with Missing Data on One Response
Gunnar Ekbohm
Biometrika , Vol. 63, No. 1 (Apr., 1976), pp. 169-172
Published by: Biometrika Trust
Article Stable URL: http://www.jstor.org/stable/2335098
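To make point (1) concrete: the simplest (and most wasteful) way software handles a missing value in a paired design is complete-case deletion -- drop the whole pair and silently shrink N. A minimal sketch (the function name and data are mine, purely for illustration; Ekbohm's paper discusses better alternatives):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_complete_cases(xs, ys):
    """Paired t statistic using only pairs where both values are present.

    Complete-case deletion silently changes the effective N, which is
    why it matters whether N1 = N2 after missing data are removed.
    """
    diffs = [x - y for x, y in zip(xs, ys) if x is not None and y is not None]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n  # t statistic and the effective sample size actually used

# One response missing in two pairs: N drops from 6 to 4.
x = [10.1, 9.8, None, 10.5, 9.9, 10.2]
y = [ 9.7, 9.9, 10.0, None, 9.5,  9.8]
t, n = paired_t_complete_cases(x, y)
```

Note that the reported t is computed on 4 pairs, not the 6 observations collected.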

(2)  How are violations of the assumptions of paired t-tests handled?
For a review of this issue, see:
Simple Robust Tests for Scale Differences in Paired Data
Patricia M. Grambsch
Biometrika , Vol. 81, No. 2 (Jun., 1994), pp. 359-372
Published by: Biometrika Trust
Article Stable URL: http://www.jstor.org/stable/2336966

|The fact that the Salmon's randomly significant voxels clustered
|in the Salmon's brain cavity I consider extremely unlikely. What
|are the odds of this pattern occurring by chance?

Oh, so we're turning Bayesian now? ;-)  Let's start by asking:
what is the base rate?

|There was likely some artifact that produced this, like they
|moved the Salmon's head slightly at the end of every activation run,
|or there was an intentional manipulation of the data.

Uh, yeah.

|From a random distribution of 1,000 t-tests, how many times do t-tests
|numbered 98, 99 and 100 come up significant and all the others come
|up nonsignificant?

I don't understand your sentence above.  If you're asking what is the
overall Type I error rate for 1000 t-tests, this is given by the formula:

alpha-overall = 1 - (1 - alpha-per-comparison)**1000

where alpha-per-comparison is the alpha for each t-test
and ** means raised to the power

So, if we use per-comparison alpha = .05, we have
1 - (.95)**1000 = approximately 1.00, that is, there is essentially a
100% chance that at least one Type I error has been committed.  This would
correspond, I believe, to the "uncorrected" tests some
researchers do.
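The arithmetic above is a one-liner to check (variable names are mine; the formula assumes the 1000 tests are independent):

```python
alpha_pc = 0.05   # per-comparison alpha
m = 1000          # number of t-tests

# Familywise (overall) Type I error rate for m independent tests:
# the probability of at least one false positive.
alpha_overall = 1 - (1 - alpha_pc) ** m

# (0.95)**1000 is on the order of 1e-23, so alpha_overall is
# indistinguishable from 1.0 -- a Type I error is virtually certain.
print(alpha_overall)
```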

If we use the Bonferroni correction to keep alpha-overall = .05,
then alpha-per comparison using the SISA calculator is (see:
http://www.quantitativeskills.com/sisa/calculations/bonfer.php?Alpha=0.05&N=1000&Corr=0.00&Df=00
)
Bonferroni's adjustment:
Lower the 0.05 to 5.0E-5 (NOTE: alpha-per comparison for each test= .00005)
z-val for 1 sided testing: >= 3.8906
z-val for 2 sided testing: >= 4.0556

The Dunn-Sidak adjustment doesn't do much to improve things:
Sidak's adjustment, for each test:
Lower the 0.05 to 5.13E-5
z-val for 1 sided testing: >= 3.8844
z-val for 2 sided testing: >= 4.0496
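Both adjustments, and the critical z-values SISA reports, can be reproduced with the standard library's NormalDist (a sketch; SISA may round slightly differently):

```python
from statistics import NormalDist

m, alpha = 1000, 0.05
inv = NormalDist().inv_cdf  # inverse standard-normal CDF

bonferroni = alpha / m                # 5.0e-5 per comparison
sidak = 1 - (1 - alpha) ** (1 / m)    # ~5.13e-5, very slightly less strict

z1_bonf = inv(1 - bonferroni)         # one-sided critical z, ~3.8906
z2_bonf = inv(1 - bonferroni / 2)     # two-sided critical z, ~4.0556
z1_sidak = inv(1 - sidak)             # ~3.8844
z2_sidak = inv(1 - sidak / 2)         # ~4.0496
```

As the numbers show, Sidak buys almost nothing over Bonferroni at m = 1000.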

NOTE: what is missing from the calculation is (a) inclusion of the
correlation between the paired values and (b) the sample size.
If we assume r = .90, things get a little better;
Bonferroni's adjustment:
Lower the 0.05 to 0.0250594
z-val for 1 sided testing: >= 1.9589
z-val for 2 sided testing: >= 2.2405

Sidak's adjustment, for each test:
Lower the 0.05 to 0.0253799
z-val for 1 sided testing: >= 1.9535
z-val for 2 sided testing: >= 2.2356

But, if I am reading the literature correctly, the Pearson r and sample size
are not routinely reported.  Nor are the power levels associated with each
test -- reducing alpha-per comparison will reduce the statistical power for
each test, thus increasing the Type II errors. So, do the corrections trade
Type I errors for Type II errors?
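The trade-off is easy to illustrate with a normal approximation to the power of a two-sided paired-difference test (the effect size and N below are invented for illustration, not taken from any imaging study):

```python
from statistics import NormalDist

def approx_power(delta, n, alpha):
    """Normal approximation to the power of a two-sided paired-difference test.

    delta is the standardized mean difference (mean of the paired
    differences divided by their SD); n is the number of pairs.
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_crit - delta * n ** 0.5)

# A medium effect (delta = 0.5) with 20 pairs:
p_uncorrected = approx_power(0.5, 20, 0.05)         # ~0.61 at uncorrected alpha
p_bonferroni = approx_power(0.5, 20, 0.05 / 1000)   # ~0.03 after a 1000-test Bonferroni
```

So under these assumptions the Bonferroni correction drives power from about .61 down to about .03 -- Type I errors are indeed being traded for Type II errors.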

In other words, what are you talking about Willis?

-Mike Palij
New York University
[email protected]



On 7/12/12 1:00 AM, Mike Palij previously wrote:
    For those who are interested in reading the Margulies chapter that
    Jeff cites below, most of it is accessible on books.google.com at:
    
http://books.google.com/books?hl=en&lr=&id=qp1NUVdlcZAC&oi=fnd&pg=PA273&dq=Margulies,+D.+S.+%282011%29+The+salmon+of+doubt:+Six+months+of+methodological++controversy+within+social+neuroscience.+&ots=FfQRwOrz7a&sig=yrKiZWtkxyQPjDeGxA1nn6piFGo#v=onepage&q&f=false

    However, it is not clear from this source whether the correct "correction"
    was applied or not. The issue is comparable to that of multiple comparison
    testing after a significant F in an ANOVA (NOTE: I assume that the
    corrections were planned before the data was collected and not after
    one has looked at the data or that one is engaged in unplanned
    comparisons).  The
    Bonferroni correction is one way to do it but there are others; see,
    for example, the following:
    http://jeb.sagepub.com/content/5/3/269.short

    Now, I'm not familiar with what kind of voodoo, er, I mean, statistical
    rituals they follow in analyzing neuroimaging data, whether they test
    for homogeneity of variance, sphericity, or other conditions necessary
    for the validity of the statistical tests they do.  I see no argument
    provided for why the Bonferroni procedure was used instead of other
    procedures, such as:
    http://biomet.oxfordjournals.org/content/73/3/751.short
    or
    Multiple Comparison Methods for Means
    Author(s): John A. Rafter, Martha L. Abell and James P. Braselton
    Source: SIAM Review, Vol. 44, No. 2 (Jun., 2002), pp. 259-278
    Published by: Society for Industrial and Applied Mathematics
    Stable URL: http://www.jstor.org/stable/4148355
    NOTE: This presentation assumes that the means are independent;
    within-subject designs produce correlated results and complicate
    things.

    So, when it comes to correcting for the number of tests one is doing,
    there's more than one way to skin a cat or prepare a salmon.
    And let's not even get started on the reduction of power in making
    the correction.
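    One such alternative that gives back some of that power is Holm's
    step-down procedure: it controls the familywise error rate at the same
    alpha as Bonferroni but rejects at least as many hypotheses. A minimal
    sketch (my own illustration, not how any particular imaging package
    implements it):

```python
def holm_reject(pvals, alpha=0.05):
    """Holm step-down: compare the i-th smallest p-value to alpha/(m - i).

    Controls the familywise error rate at alpha, like Bonferroni,
    but never rejects fewer hypotheses.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # step-down: stop at the first non-rejection
    return reject

rejected = holm_reject([0.010, 0.015, 0.020, 0.040])
print(rejected)  # [True, True, True, True]
```

    On these four (invented) p-values Holm rejects all four, whereas plain
    Bonferroni (alpha/4 = 0.0125) would reject only the 0.010 test.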
