Group analysis for searchlight results is unfortunately not
straightforward or agreed-upon. But a few thoughts, anyway.
First, I would not do small-volume corrections. That mixes
whole-brain (the searchlight) and ROI-based analyses/hypotheses. If you
have an ROI-based hypothesis you should do ROI-based analyses (test the
region directly); otherwise it is too easy to draw the ROIs around the
blobs and create positive results out of anything.
Smoothing the single-subject maps and then doing a 'normal' mass-univariate
analysis in SPM is a safer strategy, though as you point out,
information maps are definitely not activation maps. I'd suggest trying
something like whole-brain FDR or FWE with a reasonable cluster-size
threshold. You might consider thresholding at p < .1 if p < .05 is too
restrictive; that is justifiable, in my opinion, given how different
searchlight data are from 'normal' fMRI data.
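To make that concrete, here is a rough sketch of what such a group test amounts to: a voxelwise one-sample t-test of the smoothed subject accuracy maps against chance, followed by Benjamini-Hochberg FDR across voxels. The array layout, function name, and 50% chance level are illustrative assumptions on my part — in practice SPM does this on the smoothed NIfTI maps:

```python
import numpy as np
from scipy import stats

def group_searchlight_test(acc_maps, chance=0.5, q=0.05):
    """One-sample t-test of subject accuracy maps against chance,
    with Benjamini-Hochberg FDR correction across voxels.

    acc_maps : (n_subjects, n_voxels) array of smoothed, normalized
               searchlight accuracies (hypothetical input layout).
    Returns t-values, one-sided p-values, and a boolean FDR mask.
    """
    t, p = stats.ttest_1samp(acc_maps, popmean=chance, axis=0)
    # One-sided: only above-chance accuracy is of interest
    p = np.where(t > 0, p / 2.0, 1.0 - p / 2.0)
    # Benjamini-Hochberg step-up procedure
    n_vox = len(p)
    order = np.argsort(p)
    ranked = p[order] * n_vox / (np.arange(n_vox) + 1)
    passed = ranked <= q
    mask = np.zeros(n_vox, dtype=bool)
    if passed.any():
        k = np.max(np.where(passed)[0])
        mask[order[: k + 1]] = True
    return t, p, mask
```

With n=9 the t-test has df=8, so only fairly large and consistent above-chance effects will survive — which is exactly the situation described below.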
Given how few subjects you have, I'd also present the single-subject
maps; obvious results in each subject make the group results convincing
even if the group-level p-values are less significant than you might hope.
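One more thought: full permutation testing at the single-subject searchlight level may indeed be infeasible, but a group-level sign-flipping test on the subject accuracy maps is cheap and gives a max-statistic (FWE-style) threshold. A minimal NumPy sketch — the array layout, chance level, and function name are my illustrative assumptions, not a claim about any particular toolbox:

```python
import numpy as np

def sign_flip_max_t(acc_maps, chance=0.5, n_perm=1000, seed=0):
    """Group-level permutation test: randomly flip the sign of each
    subject's (accuracy - chance) map and record the maximum t-value
    across voxels, building an FWE-corrected null distribution."""
    rng = np.random.default_rng(seed)
    d = acc_maps - chance                     # (n_subjects, n_voxels)
    n = d.shape[0]

    def tmap(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n))

    t_obs = tmap(d)
    max_t = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n, 1))
        max_t[i] = tmap(d * flips).max()
    # FWE-corrected p per voxel: fraction of permuted maxima >= t_obs
    p_fwe = ((max_t[None, :] >= t_obs[:, None]).sum(axis=1) + 1) / (n_perm + 1)
    return t_obs, p_fwe
```

Note that with 9 subjects there are only 2^9 = 512 distinct sign patterns, so the permutation p-values are coarse — but it sidesteps the parametric df=8 problem entirely.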
good luck,
Jo
On 7/5/2012 12:21 PM, Mike E. Klein wrote:
Hi everyone,
I'm attempting to threshold group data for a searchlight-based MVPA. I
am performing the group-wise stats via a standard top-level analysis in
SPM (using single-subject searchlight accuracy maps as inputs). I am
having difficulty figuring out where to set significance thresholds. SPM
is using a purely random-effects calculation on the data (n=9, so df=8),
leading to enormous t-thresholds (~16), which are impossible to reach
and seem way too conservative. If I do small-volume corrections on our
a-priori regions of interest, the effects are significant (but not by
much...t-thresholds in the 8s), so this seems less than ideal (and
ignores most of the brain).
Typically, for GLM analyses, we use a "mixed effects" model, which
incorporates both within- and between-subjects statistics, yielding an
"effective" degree of freedom (which is much higher than the number of
study participants, though much lower than the total number of trials in
the experiment). However, I am not sure (a) how to calculate this for an
MVPA study or (b) if the same set of assumptions hold. Our nine subjects
each underwent 9 functional runs, used for leave-one-run-out cross-validation
(train on 8 runs, test on the held-out run), so each subject's searchlight
map reflects the average of these nine folds. We used 81 total examples per
condition (9 per run), which were temporally averaged, leaving 27 examples
per condition that were fed into an SVM. Single-subject results were warped
into standard space and explicitly smoothed with a 7 mm Gaussian kernel
before being fed into SPM.
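For concreteness, the per-subject scheme described above — temporal averaging, then a leave-one-run-out SVM — might look roughly like this in scikit-learn. All names, shapes, and the triplet grouping are illustrative, not the actual pipeline:

```python
import numpy as np
from sklearn.svm import LinearSVC

def average_triplets(X):
    """Average consecutive triplets of trials (e.g. 9 raw examples
    per condition per run -> 3 temporally averaged examples)."""
    return X.reshape(-1, 3, X.shape[-1]).mean(axis=1)

def loro_accuracy(X, y, runs):
    """Leave-one-run-out cross-validation: for each fold, train a
    linear SVM on all runs but one, test on the held-out run, and
    return the mean accuracy across folds (one fold per run)."""
    accs = []
    for run in np.unique(runs):
        train, test = runs != run, runs == run
        clf = LinearSVC().fit(X[train], y[train])
        accs.append(clf.score(X[test], y[test]))
    return float(np.mean(accs))
```

In the searchlight case this accuracy would be computed once per sphere, with X restricted to the voxels in that sphere.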
We have strong results, so really I'm looking for the "most proper" way
to perform searchlight group significance testing. Because we're doing
35,000-45,000 spheres per subject, I don't think permutation testing is
feasible. There's also the option of reporting p<0.05 FWE stats for the
pre-defined ROIs, and p<0.001 (uncorrected) for the rest of the brain,
for completeness' sake.
Any advice is greatly appreciated!
Best,
Mike
_______________________________________________
Pkg-ExpPsy-PyMVPA mailing list
[email protected]
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
--
Joset A. Etzel, Ph.D.
Research Analyst
Cognitive Control & Psychopathology Lab
Washington University in St. Louis
http://mvpa.blogspot.com/