Subject: Re: Some Problems with Neuroimaging
From: Michael Palij <[email protected]>
Date: Thu, 12 Jul 2012 08:54:06 -0400
X-Message-Number: 2
On Wed, 11 Jul 2012 23:09:57 -0700, Mike Williams wrote:
However, the distribution of false positives across the voxel
locations should be random.
Depends upon how one defines "random". Consider:
(1) For all t-tests, is N1=N2? I know that you say you're
using paired t-tests but what guarantee is there that there
is always a matching value? How is such missing data
treated?
There is no missing data. The whole-brain scans are replicated for the
active and rest phases of the study. A measurement of signal strength is
taken for every voxel in every whole-brain scan. I am conducting a paired
t-test for a single voxel, subtracting the mean for the active condition
from the mean for the rest condition. The number of measurements (N) is
the same for each condition.
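That per-voxel paired t-test can be sketched with SciPy. The data below are simulated for illustration (15 paired measurements per condition, values arbitrary, not real BOLD signal):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated signal strength at one voxel: 15 whole-brain scans per
# condition, paired scan-by-scan (illustrative values only).
rest = rng.normal(loc=100.0, scale=2.0, size=15)
active = rest + rng.normal(loc=1.0, scale=2.0, size=15)  # ~1% signal change

# Paired t-test on active vs. rest for this single voxel.
t_stat, p_value = stats.ttest_rel(active, rest)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

In a whole-brain analysis this test is repeated at every voxel, which is exactly what creates the multiple-comparisons problem discussed below.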
(2) How are violations of the assumptions of paired t-tests handled?
Variance away from the mean is stochastic error, and there is no skew. If
the distribution were non-normal and the variances unequal, there would be
something wrong with the scanner, and that would be obvious in the
artifacts produced in the scans.
|The fact that the Salmon's randomly significant voxels clustered
|in the Salmon's brain cavity I consider extremely unlikely. What
|are the odds of this pattern occurring by chance?
Oh, so we're turning Bayesian now? ;-) Let's start by asking
what the base rate is.
|There was likely some artifact that produced this, like they
|moved the Salmon's head slightly at the end of every activation run,
|or there was an intentional manipulation of the data.
Uh, yeah.
|From a random distribution of 1,000 t-tests, how many times do t-tests
|numbered 98, 99 and 100 come up significant and all the others come
|up nonsignificant?
I don't understand your sentence above. If you're asking what is the
overall Type I error rate for 1000 t-tests, this is given by the formula:
alpha-overall = 1 - (1 - alpha-per-comparison)**1000
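Plugging in alpha = .05 (the value used in the discussion below) shows how bad the family-wise rate gets at 1,000 independent tests:

```python
# Family-wise Type I error rate for m independent tests at a given
# per-comparison alpha: alpha_fw = 1 - (1 - alpha_pc)**m
alpha_pc = 0.05
m = 1000

alpha_fw = 1 - (1 - alpha_pc) ** m
expected_false_positives = m * alpha_pc

print(f"family-wise alpha = {alpha_fw:.6f}")          # essentially 1.0
print(f"expected false positives = {expected_false_positives:.0f}")  # 50
```

At 1,000 tests a false positive somewhere is a near certainty, and about 50 voxels are expected to be significant by chance alone.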
Your formula specifies the probability of any voxel coming up
significant by chance.
Suppose you specify an alpha of .05. 5% of the voxels should be
significant by chance.
However, the significant t scores should be randomly distributed across
the voxels and all areas of the image. What are the odds of chance
activations occurring only in the voxels making up the Salmon's brain
cavity? What are the odds of just voxels 4, 5, and 6 (the brain cavity)
coming up significant and all the others coming up nonsignificant? The
odds must be astronomical. Why were there no chance activations in other
areas of the Salmon image? The odds of this occurring are so small that
some kind of manipulation must have been conducted to produce this
extremely rare pattern.
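The "astronomical" odds are easy to put a number on. Assuming 1,000 independent tests at alpha = .05, the probability that exactly three named voxels are significant and the other 997 are not is:

```python
alpha = 0.05
n_tests = 1000

# P(exactly voxels 4, 5, 6 significant AND all 997 others nonsignificant),
# under independence at alpha = .05.
p_pattern = alpha ** 3 * (1 - alpha) ** (n_tests - 3)
print(f"{p_pattern:.3e}")  # on the order of 10**-27
```

Note this is the probability of that one pre-specified pattern; the probability that *some* three voxels come up significant somewhere is vastly larger, which is the crux of the multiple-comparisons argument.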
But, if I am reading the literature correctly, the Pearson r and sample size
are not routinely reported. Nor are the power levels associated with each
test -- reducing alpha-per comparison will reduce the statistical power for
each test, thus increasing the Type II errors. So, do the corrections trade
Type I errors for Type II errors?
In other words, what are you talking about Willis?
The sample sizes are reported when the model is described. The number
of whole brain scans for each
condition in a block design is the sample size. I typically have 15
measurements for each condition for
each voxel. The effect size for the BOLD response is more-or-less
standardized. I just don't remember
what it is. The % change in signal strength associated with the BOLD
response was established very
early and it has a classic pattern of onset, peak and diminishment that
is well known and modeled in the
analyses.
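The "classic pattern of onset, peak and diminishment" is usually modeled with a canonical hemodynamic response function. A common textbook form (assumed here; the post does not specify one) is a difference of two gamma densities, peaking around 5 s with a later undershoot:

```python
import numpy as np
from scipy.stats import gamma

# Double-gamma canonical HRF sketch (SPM-style parameters: response peak
# ~5 s, undershoot ~15 s, undershoot scaled by 1/6). This is a standard
# textbook form, not something taken from the post itself.
t = np.arange(0.0, 30.0, 0.1)  # seconds after stimulus onset
hrf = gamma.pdf(t, a=6) - gamma.pdf(t, a=16) / 6.0

peak_time = t[np.argmax(hrf)]
print(f"response peaks at ~{peak_time:.1f} s")
```

Analysis packages convolve this shape with the block design to build the regressors against which the per-voxel signal is fit.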
Corrections are the default in analysis software such as SPM. You have to
actively uncorrect the analysis if you want to see the uncorrected
results. It is also typical to threshold on voxel extent for clusters.
Randomly distributed significant voxels don't usually cluster, so by
specifying a minimum cluster size of 5 voxels I can usually eliminate most
of the random results.
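The cluster-extent idea can be illustrated by simulation. This is a sketch assuming independent voxels and SciPy's connected-component labeling; real fMRI noise is spatially smooth, so actual packages use more careful cluster-level inference:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# 40x40x40 grid of independent voxels, each "significant" with p = .05.
sig = rng.random((40, 40, 40)) < 0.05
n_sig = int(sig.sum())

# Label face-connected clusters, then keep only clusters of >= 5 voxels.
labels, n_clusters = ndimage.label(sig)
sizes = np.bincount(labels.ravel())[1:]  # drop background label 0
survivors = int(sizes[sizes >= 5].sum())

print(f"significant voxels: {n_sig}, surviving 5-voxel filter: {survivors}")
```

Under independence, almost all chance-significant voxels are isolated or in tiny clumps, so the 5-voxel extent filter removes nearly everything, which is the intuition behind the cluster threshold described above.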
The hypotheses of neuroimaging are not at the single t-test, voxel
level. The hypothesis is typically that a
cluster of voxels, a region of interest, demonstrates a BOLD response
under the active condition. When I
administer a Naming Test, I expect a typical pattern of voxel clusters
representing the language areas of
the left hemisphere. This usually happens. The problems with fMRI are
the same problems that
hamper any research design: researchers with weak theories and
hypotheses about brain function are
essentially on a fishing expedition for fame and glory and not science.
Mike Williams