On 04/14/2010 10:14 PM, QUINTINO MANO wrote:
> Hi caret users,
>
> I know that caret offers thresholding and cluster correction via  
> Attributes -> Metric -> Clustering and Smoothing. I'm planning on  
> using this function, either with 'minimum number of nodes' or  
> 'minimum surface area (mm2),' but I FIRST need to know what the  
> minimum cluster size should be. I know that AFNI offers a Monte Carlo  
> simulation (AlphaSim) that generates volumetric cluster sizes  
> corresponding to alpha levels, which is used for cluster correcting;  
> but, to date, I don't think there is an equivalent 'surface AlphaSim'  
> for surface-based functional maps.
>   
Ah, but you are wrong, Grasshopper. Actually, we can do better than 
AlphaSim, in my opinion. We have TFCE (threshold-free cluster 
enhancement) -- it picks the threshold, so you don't have to. You will 
need caret_stats, which means you will need the caret6 distribution:

http://brainmap.wustl.edu/pub/john/caret6_dist.zip
login pub
password download

Backing up, you'll want to review this whole page:

http://brainvis.wustl.edu/wiki/index.php/Caret:Documentation:Statistics

Pay particular attention to the caret_stats section.
> As of now, I have a multicolumn metric file that needs to be  
> thresholded and cluster corrected. I've already performed my group t- 
> tests on metric files, using an in-house script that outputs three  
> columns (beta coef, t-scores, p-values) for each contrast. I just  
> need to threshold and cluster correct, and my hands are officially up  
> in the air, need some direction.
>   
Since it is a single metric file, rather than two separate composites 
(e.g., one for each of two groups), I assume this is a one-sample 
t-test. I don't have any one-sample t-test scripts, but our paired 
t-test on depth (left - right) is close:

http://brainmap.wustl.edu/pub/donna/SCRIPTS/SHAPE/paired.sh
login pub
password download

This sample script shows the command lines and parameters we use. Out of 
it you'll get a TFCE-enhanced t-map and a text report listing the 
resulting TFCE threshold and any supra-threshold clusters that were 
found. Note that TFCE doesn't give you clusters -- it gives you a 
threshold above which any nodes are significant. Caret is giving you the 
clusters to make your job easier, but significance is at the node level. 
I'd definitely feed it the t-scores rather than the p-values, since it's 
set up so that values further from zero are more unlikely -- the 
opposite of p. Read the Smith and Nichols paper in the journal articles 
section of the page linked above. We are using the extent exponent 
E=1.0, because Tom Nichols told us to for 2D data, and that's good 
enough for me.
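If it helps to see what TFCE is doing under the hood, here is a minimal 
conceptual sketch in Python. This is NOT the caret_stats implementation; 
the mesh, t-values, and step size are toy assumptions, and extent is 
counted in nodes where Caret would use surface area. It just integrates 
cluster extent (raised to E) times height (raised to H) over thresholds, 
per Smith and Nichols:

```python
# Conceptual TFCE sketch (per Smith & Nichols), not caret_stats itself.
# Toy data: adjacency and t-values would really come from a surface
# topology file and a metric column.
import numpy as np

def tfce(tvals, neighbors, E=1.0, H=2.0, dh=0.1):
    """Return a TFCE-enhanced score per surface node.

    tvals     : 1D array of t-scores (one per node)
    neighbors : list of neighbor-index lists (mesh adjacency)
    E, H      : extent and height exponents (E=1.0 for 2D data)
    dh        : integration step over threshold heights
    """
    n = len(tvals)
    out = np.zeros(n)
    h = dh
    while h <= tvals.max():
        above = tvals >= h
        seen = np.zeros(n, dtype=bool)
        for start in range(n):
            if above[start] and not seen[start]:
                # flood-fill one supra-threshold cluster
                stack, comp = [start], []
                seen[start] = True
                while stack:
                    v = stack.pop()
                    comp.append(v)
                    for w in neighbors[v]:
                        if above[w] and not seen[w]:
                            seen[w] = True
                            stack.append(w)
                # extent = node count here; Caret would use area (mm2)
                out[comp] += (len(comp) ** E) * (h ** H) * dh
        h += dh
    return out

# Toy 1D "surface": a chain of 6 nodes
nbrs = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]]
t = np.array([0.5, 2.0, 3.0, 2.5, 0.4, 0.1])
scores = tfce(t, nbrs)
print(scores)
```

Notice that the node with the tallest peak ends up with the largest 
enhanced score, which is why you feed it t-scores and not p-values.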

You can still do things the old cluster way, too. I've got some older 
scripts like that, if you prefer.
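For comparison, the old cluster way boils down to: threshold the t-map, 
find connected components, and discard clusters below a minimum size. A 
plain-Python sketch under toy assumptions (node counts instead of mm2, 
made-up threshold, not Caret's actual code):

```python
# Sketch of classic cluster correction: threshold, then drop small
# supra-threshold clusters. Minimum extent is in nodes here; Caret
# also accepts minimum surface area in mm2.
def cluster_correct(tvals, neighbors, t_thresh, min_nodes):
    n = len(tvals)
    keep = [False] * n
    seen = [False] * n
    for start in range(n):
        if tvals[start] >= t_thresh and not seen[start]:
            # flood-fill one supra-threshold cluster
            stack, comp = [start], []
            seen[start] = True
            while stack:
                v = stack.pop()
                comp.append(v)
                for w in neighbors[v]:
                    if tvals[w] >= t_thresh and not seen[w]:
                        seen[w] = True
                        stack.append(w)
            if len(comp) >= min_nodes:
                for v in comp:
                    keep[v] = True
    return keep

# Same toy chain of 6 nodes as above
nbrs = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]]
t = [0.5, 2.3, 3.0, 2.5, 0.4, 2.2]
mask = cluster_correct(t, nbrs, t_thresh=2.0, min_nodes=2)
print(mask)  # -> [False, True, True, True, False, False]
```

Node 5 survives the threshold but sits alone, so the minimum-size rule 
throws it out -- exactly the kind of hard cutoff TFCE avoids.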

> I hope my ramblings make sense.
>
> Kind regards,
> Tino
> _______________________________________________
> caret-users mailing list
> [email protected]
> http://brainvis.wustl.edu/mailman/listinfo/caret-users
>   
