The statistics used in neuroimaging are relatively simple and, for most imaging, represent an elegant application of the general linear model. The images in fMRI are really maps of significant and nonsignificant paired t-tests: the radio signal strength for the active and inactive conditions is compared, and voxels with significant differences are colored red. This creates a multiple comparison problem on steroids. However, the distribution of false positives across the voxel locations should be random.

I consider it extremely unlikely that the salmon's randomly significant voxels would cluster in its brain cavity. What are the odds of this pattern occurring by chance? More likely, some artifact produced it (for example, the salmon's head being moved slightly at the end of every activation run), or the data were intentionally manipulated.

Out of a random distribution of 1,000 t-tests, how often do t-tests numbered 98, 99, and 100
come up significant while all the others come up nonsignificant?
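As a back-of-the-envelope check (a sketch of my own, assuming 1,000 independent tests at alpha = .05, not any figure from the salmon paper itself), the probability of that exact pattern is vanishingly small, even though roughly 50 false positives somewhere are expected:

```python
# Probability that exactly tests 98, 99, and 100 come up significant
# and the remaining 997 do not, assuming 1,000 independent t-tests
# each with a false-positive rate of alpha = .05.
alpha = 0.05
p_exact = alpha**3 * (1 - alpha)**997
print(f"P(exactly tests 98-100 significant) = {p_exact:.3e}")

# By contrast, false positives *somewhere* are expected in bulk:
print(f"Expected false positives out of 1,000: {1000 * alpha:.0f}")
```

The point is the contrast: random false positives are common, but a specific anatomically clustered pattern of them is not.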

Mike Williams

On 7/12/12 1:00 AM, Teaching in the Psychological Sciences (TIPS) digest wrote:
Subject: Re: Some Problems with Neuroimaging
From: Michael Palij<[email protected]>
Date: Wed, 11 Jul 2012 21:53:29 -0400
X-Message-Number: 5

For those who are interested in reading the Margulies chapter that
Jeff cites below, most of it is accessible on books.google.com at:
http://books.google.com/books?hl=en&lr=&id=qp1NUVdlcZAC&oi=fnd&pg=PA273&dq=Margulies,+D.+S.+%282011%29+The+salmon+of+doubt:+Six+months+of+methodological++controversy+within+social+neuroscience.+&ots=FfQRwOrz7a&sig=yrKiZWtkxyQPjDeGxA1nn6piFGo#v=onepage&q&f=false

However, it is not clear from this source whether the correct "correction"
was applied or not. The issue is comparable to that of multiple comparison
testing after a significant F in an ANOVA (NOTE: I assume that the corrections
were planned before the data was collected and not after one has looked at
the data or that one is engaged in unplanned comparisons).  The
Bonferroni correction is one way to do it but there are others; see,
for example, the following:
http://jeb.sagepub.com/content/5/3/269.short

Now, I'm not familiar with what kind of voodoo, er, I mean, statistical rituals
they follow in analyzing neuroimaging data, whether they test for homogeneity
of variance, sphericity, or other conditions necessary for the validity of the
statistical tests they do.  I see no argument provided for why the Bonferroni
procedure was used instead of other procedures, such as:
http://biomet.oxfordjournals.org/content/73/3/751.short
or
Multiple Comparison Methods for Means
Author(s): John A. Rafter, Martha L. Abell, and James P. Braselton
Source: SIAM Review, Vol. 44, No. 2 (Jun., 2002), pp. 259-278
Published by: Society for Industrial and Applied Mathematics
Stable URL: http://www.jstor.org/stable/4148355
NOTE: This presentation assumes that the means are independent;
within-subject designs produce correlated results and complicate
things.

So, when it comes to correcting for the number of tests one is doing,
there's more than one way to skin a cat or prepare a salmon.
And let's not even get started on the reduction of power in making
the correction.

-Mike Palij
New York University
[email protected]

