One quick thought:
 
At least for functional data, you don't want to smooth in only two directions. 
Smoothing in two dimensions assumes there is no correlation between nodes on 
opposite banks of a sulcus, which may not always be true. Consider one or two 
voxels that span the sulcus, so that the nodes on each bank fall within the 
same or adjacent voxels; those voxels are correlated to some degree, and it is 
unreasonable to ignore the correlation between the nodes associated with 
them -- hence a two-dimensional smoothing filter is potentially inappropriate. 
Wider use of this type of smoothing, in my opinion, would have to be in three 
dimensions. I don't believe that CARET has that capability yet, or at least it 
didn't the last time I looked into the issue. Anatomical data may be a 
different story, but I am not that familiar with anatomical data and its 
correlation structure.
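A toy sketch of the point above (my own illustration with made-up geometry, not CARET code): two nodes on opposite banks of a deep sulcus can sit a voxel apart in 3D volume space yet be far apart along the surface, so a surface-based (2D) kernel treats them as independent even though they may share a voxel.

```python
# Toy illustration: model the two banks of a sulcus as a narrow "V"
# sampled at 1 mm steps (hypothetical geometry, not a real mesh).
import numpy as np

depth = 20.0            # sulcal depth in mm (assumed)
gap = 1.5               # width of the sulcal fundus in mm (assumed)
n = 21
bank_a = np.column_stack([np.zeros(n), np.linspace(0, -depth, n)])
bank_b = np.column_stack([np.full(n, gap), np.linspace(-depth, 0, n)])

node_a = bank_a[5]      # a node partway down one bank
node_b = bank_b[n - 6]  # the facing node on the opposite bank

euclidean = np.linalg.norm(node_a - node_b)
# Along-surface path: down bank A to the fundus, across, up bank B.
along_surface = (np.linalg.norm(node_a - bank_a[-1])
                 + gap
                 + np.linalg.norm(bank_b[0] - node_b))

print(f"Euclidean distance:     {euclidean:.1f} mm")
print(f"Along-surface distance: {along_surface:.1f} mm")
```

The facing nodes are about 1.5 mm apart in the volume but over 30 mm apart along the surface, which is the correlation a 2D filter ignores.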
 
Best Regards, Donald McLaren
=====================
D.G. McLaren
Washington University in St. Louis - School of Medicine
Department of Anatomy and Neurobiology
Tel: (314) 362 3555
Tel: (773) 406 2464
Fax: (314) 747 4370
=====================

________________________________

From: [EMAIL PROTECTED] on behalf of Donna Hanlon
Sent: Thu 12/22/2005 11:45 AM
To: caret-users
Subject: [caret-users] Neuroimaging Multiple Comparisons & Thresholding 
Discussion List



Hi caret-users,

I want to bring your attention to a new mailing list devoted to
neuroimaging multiple comparisons and thresholding issues (neuro-mult-comp):

http://brainvis.wustl.edu/mailman/listinfo/neuro-mult-comp

Rather than cross-post to multiple tool-centric lists (e.g., AFNI,
caret-users, Freesurfer, FSL, SPM), neuroimaging researchers can discuss
multiple comparisons and thresholding problems and issues on this
algorithm-centric list.

Here is an example of a question I'd like to ask on this list. It currently 
has only two members (Donald McLaren and me), but I know of some caret-users 
who might have some input:

Russ Poldrack successfully pushed me into investigating variance
smoothing, which does look quite interesting.  Nichols & Holmes explain
this idea in the pseudo t-statistics section of their primer paper:

The "primer paper":
TE Nichols and AP Holmes.
Nonparametric Permutation Tests for Functional Neuroimaging: A Primer with 
Examples.
Human Brain Mapping, 15:1-25, 2002.
http://www.fil.ion.ucl.ac.uk/spm/doc/papers/NicholsHolmes.pdf
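For readers new to the idea, a minimal sketch of the pseudo t-statistic from that paper: the sample-variance image is spatially smoothed before forming the t ratio, so a single noisy low-variance node cannot produce a spuriously huge t. The 1-D smoothing and kernel below are illustrative assumptions, not the implementation of any particular package.

```python
# Pseudo t-statistic sketch: smooth the variance map, then form
# t = mean / sqrt(smoothed_var / n).  1-D data for simplicity.
import numpy as np

def pseudo_t(data, kernel):
    """data: (subjects, nodes); kernel: 1-D smoothing weights (assumed)."""
    n = data.shape[0]
    mean = data.mean(axis=0)
    var = data.var(axis=0, ddof=1)
    # Smooth the variance image (simple normalized 1-D convolution).
    smoothed_var = np.convolve(var, kernel / kernel.sum(), mode="same")
    return mean / np.sqrt(smoothed_var / n)

rng = np.random.default_rng(0)
data = rng.normal(loc=0.5, scale=1.0, size=(12, 100))  # 12 subjects, 100 nodes
kernel = np.array([1.0, 2.0, 4.0, 2.0, 1.0])           # ad-hoc kernel
t_pseudo = pseudo_t(data, kernel)
```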


I have tried it on some of my anatomical data, and it made a substantial 
difference (see attached captures).

So the questions are:

* How much to smooth?
* Which algorithm to use?

On page 19 of the primer paper, Nichols & Holmes say, "We used a variance 
smoothing of 4 mm FWHM, comparable to the original within subject smoothing. In 
our experience, the use of any variance smoothing is more important than the 
particular magnitude (FWHM) of the smoothing."
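One piece of the FWHM-to-parameter mapping is at least standard: for a Gaussian kernel, FWHM and the standard deviation are related by FWHM = 2*sqrt(2*ln 2)*sigma, so a quoted 4 mm FWHM corresponds to sigma of about 1.7 mm. (How sigma then maps onto the ellipsoid-style parameters is a separate question.)

```python
# Standard Gaussian relation: FWHM = 2*sqrt(2*ln 2) * sigma ~= 2.3548 * sigma.
import math

FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # ~0.4247

def fwhm_to_sigma(fwhm_mm):
    """Convert a Gaussian kernel's FWHM (mm) to its standard deviation (mm)."""
    return fwhm_mm * FWHM_TO_SIGMA

print(fwhm_to_sigma(4.0))  # ~1.70 mm for the 4 mm FWHM quoted above
```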

I'm inclined to agree, so I was happy to try something fairly nominal using 
the Gaussian algorithm, but I realized I don't know how to map FWHM onto our 
parameters, which are like the ones that define the ellipsoid in the Gaussian 
mapping algorithm.  Since David was busy, I looked at Joern Diedrichsen's 
Caret Surface Statistics paper 
(http://www.bme.jhu.edu/~jdiedric/download/Caret_surface_statistics.pdf) to 
see what he used.  It says, "In my experience this value is reached by 
smoothing in caret for 4 iterations with strength of 0.5."  I believe this 
refers to the average neighbors algorithm, so I tried it, which gives the 
result in the captures.

Joern's functional data is nothing like my anatomical data, but I just wanted 
to see whether smoothing made any difference.  Now that I know it does, I want 
to be more principled about how much to smooth and how.
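For concreteness, here is a minimal sketch of what I understand iterative average-neighbors smoothing to do: each iteration moves a node's value toward the mean of its neighbors by a fraction `strength`, and Diedrichsen's "4 iterations, strength 0.5" is one point in that parameter space. The ring-shaped neighbor structure below is a hypothetical stand-in for a real cortical mesh.

```python
# Average-neighbors smoothing sketch: v <- (1-s)*v + s*mean(neighbors),
# repeated.  Neighbor topology here is a 1-D ring, purely for illustration.
import numpy as np

def average_neighbors(values, neighbors, strength=0.5, iterations=4):
    v = values.astype(float).copy()
    for _ in range(iterations):
        nbr_mean = np.array([v[idx].mean() for idx in neighbors])
        v = (1.0 - strength) * v + strength * nbr_mean
    return v

n = 50
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]  # ring "mesh"
spike = np.zeros(n)
spike[25] = 1.0
smoothed = average_neighbors(spike, neighbors)  # spike spreads, sum preserved
```

On this symmetric topology the total is conserved while the spike spreads, which is the qualitative behavior one wants to relate to an effective FWHM.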

But there is a practical consideration with the algorithm, too.  We use a 
permutation strategy to determine significance (a surface-based equivalent of 
the suprathreshold cluster test).  Applying Gaussian smoothing to each 
permutation's t-map would slow down an already computationally expensive 
process.
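To make the cost concern concrete, here is a hedged sketch (a hypothetical one-sample sign-flip permutation scheme with a maximum-statistic summary, not caret's actual implementation): the variance-smoothing step sits inside the permutation loop, so its cost is multiplied by the number of permutations.

```python
# Permutation null distribution of the maximum pseudo-t, with the
# variance smoothing inside the loop (the expensive part flagged above).
import numpy as np

def permutation_max_pseudo_t(data, n_perm=100, kernel=None, seed=0):
    """data: (subjects, nodes) of one-sample differences (assumed design)."""
    rng = np.random.default_rng(seed)
    n_sub, n_nodes = data.shape
    if kernel is None:
        kernel = np.array([1.0, 2.0, 1.0])  # ad-hoc smoothing kernel
    kernel = kernel / kernel.sum()
    max_stats = np.empty(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))  # random sign flips
        flipped = signs * data
        mean = flipped.mean(axis=0)
        var = flipped.var(axis=0, ddof=1)
        var = np.convolve(var, kernel, mode="same")  # cost paid every permutation
        max_stats[p] = np.max(mean / np.sqrt(var / n_sub))
    return max_stats

rng = np.random.default_rng(1)
null_max = permutation_max_pseudo_t(rng.normal(size=(10, 200)))
threshold = np.quantile(null_max, 0.95)  # corrected 5% critical value
```

A cheaper kernel (like the average-neighbors iterations) inside that loop is exactly the trade-off at issue.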

Does anyone have any thoughts on this?

Donna Hanlon




