Hi caret-users,

I want to bring your attention to a new mailing list devoted to neuroimaging multiple comparisons and thresholding issues (neuro-mult-comp):

http://brainvis.wustl.edu/mailman/listinfo/neuro-mult-comp

Rather than cross-post to multiple tool-centric lists (e.g., AFNI, caret-users, Freesurfer, FSL, SPM), neuroimaging researchers can discuss multiple comparisons and thresholding issues on this algorithm-centric list.

Here is an example of a question I'd like to ask on this list. It currently has only two members (Donald McLaren and me), but I know of some caret-users who might have some input:

Russ Poldrack successfully pushed me into investigating variance smoothing, which does look quite interesting. Nichols & Holmes explain this idea in the pseudo t-statistics section of their primer paper:

The "Primer Paper",
TE Nichols and APHolmes.
Nonparametric Permutation Tests for Functional Neuroimaging: A Primer with 
Examples.
Human Brain Mapping, 15:1-25, 2002.
http://www.fil.ion.ucl.ac.uk/spm/doc/papers/NicholsHolmes.pdf
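
In case it helps frame the question, here is how I read the pseudo-t idea, as a
minimal one-sample sketch; the array shapes, the function name, and the use of
scipy's gaussian_filter are my own illustration, not anything from the paper:

# Sketch of a one-sample pseudo t-statistic (variance smoothing):
# smooth the variance image, not the t-map itself.
import numpy as np
from scipy.ndimage import gaussian_filter

def pseudo_t(data, fwhm_vox):
    """data: (n_subjects, x, y, z) volume stack; fwhm_vox: smoothing FWHM in voxels."""
    n = data.shape[0]
    # Gaussian kernels are usually parameterized by sigma; FWHM = sigma * sqrt(8 ln 2).
    sigma = fwhm_vox / np.sqrt(8.0 * np.log(2.0))
    mean = data.mean(axis=0)
    var = data.var(axis=0, ddof=1)
    smoothed_var = gaussian_filter(var, sigma=sigma)  # only the denominator is smoothed
    return mean / np.sqrt(smoothed_var / n)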


I have tried it on some of my anatomical data, and it made a substantial 
difference (see attached captures).

So the questions are:

* How much to smooth?
* Which algorithm to use?

On page 19 of the primer paper, Nichols & Holmes say, "We used a variance smoothing 
of 4 mm FWHM, comparable to the original within subject smoothing. In our experience, the use 
of any variance smoothing is more important than the particular magnitude (FWHM) of the 
smoothing."

I'm inclined to agree, so I was happy to try something really nominal, using
the Gaussian algorithm, but I realized I don't know how to map FWHM to our
parameters, which are like the ones that define the ellipsoid in the Gaussian
mapping algorithm.  Since David was busy, I looked at Joern Diedrichsen's
paper to see what he used.  His Caret Surface Statistics paper
(http://www.bme.jhu.edu/~jdiedric/download/Caret_surface_statistics.pdf) says,
"In my experience this value is reached by smoothing in caret for 4 iterations with
strength of 0.5."  This is the average neighbors algorithm, I think, so I tried
that, which gives the result in the captures.
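
For concreteness, here is the update rule I am assuming for average neighbors
smoothing; the function and data layout are made up for illustration, and
someone who knows the Caret internals should correct me if the rule differs:

# Assumed form of "average neighbors" smoothing on a surface metric:
# each iteration pulls a node's value toward the mean of its
# topological neighbors by the given strength.
import numpy as np

def average_neighbors_smooth(values, neighbors, iterations=4, strength=0.5):
    """values: (n_nodes,) metric column; neighbors: per-node lists of node indices."""
    v = np.asarray(values, dtype=float).copy()
    for _ in range(iterations):
        nbr_mean = np.array([v[nb].mean() if len(nb) else v[i]
                             for i, nb in enumerate(neighbors)])
        v = (1.0 - strength) * v + strength * nbr_mean
    return v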

Joern's functional data is nothing like my anatomical data, but I just wanted 
to see if it made any difference.  Now that I know it does, I want to be more 
principled about how much and how to smooth.

But there is a practical consideration with the algorithm, too.  We use a
permutation strategy to determine significance (a surface-based equivalent of
the suprathreshold cluster test).  Applying a Gaussian smoothing to each
permutation's t-map would slow down an already computationally expensive process.
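
To make the cost concern concrete, here is a rough sketch of the permutation
loop with the variance smoothing inside it (shown with a maximum-statistic
summary rather than cluster sizes, just for brevity; every name here is
illustrative):

# Why per-permutation smoothing is expensive: the variance must be
# re-smoothed for every sign-flip relabeling, so the smoothing cost
# is multiplied by the number of permutations.
import numpy as np

def permutation_maxima(data, smooth_var, n_perm=1000, seed=0):
    """data: (n_subjects, n_nodes); smooth_var: callable smoothing a (n_nodes,) array."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    maxima = np.empty(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=n)  # relabel subjects by sign flip
        flipped = data * signs[:, None]
        var = flipped.var(axis=0, ddof=1)
        t = flipped.mean(axis=0) / np.sqrt(smooth_var(var) / n)  # smoothing inside the loop
        maxima[p] = t.max()
    return maxima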

Does anyone have any thoughts on this?

Donna Hanlon



<<inline: tmap_hfavcon_young_left_3.0_variance_smoothed.jpg>>

<<inline: tmap_hfavcon_young_left_3.0_variance_unsmoothed.jpg>>
