Re: [caret-users] SumsDB website can't be reached

2016-06-27 Thread Donna Dierker
Hi Ting-Yu,

Hopefully, someone will be able to provide the reference and a link to the 
dataset for registering individual monkeys to the F99 atlas.  If not, I'll dig 
around tomorrow and try to find it.

But I did want to let you know we are aware of the problem with sumsdb.  Others 
have reported it and are waiting for datasets to become available.  
Unfortunately, there is no quick fix.  We are trying to figure out how to get 
it back up and running as quickly as possible.

Donna


On Jun 27, 2016, at 4:33 PM, Ting-Yu Chang  wrote:

> Hello all,
> 
> Recently, the website of SumsDB is unavailable so that we can’t find atlases 
> we are looking for. We need data files to register individual scans with the 
> standard atlas. Could you help us to fix the problem of website? or could you 
> please send me the Macaque F99 atlas datasets for registration with 
> individual data? We would also like to request the reference for landmarks.
> 
> Thanks, 
> 
> Ting-Yu
> 
> 
> 
> 
> ___
> caret-users mailing list
> caret-users@brainvis.wustl.edu
> http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] (no subject)

2016-03-24 Thread Donna Dierker
Hi Pavel,

Caret cannot segment newborn cortex, but it can flatten hemispheres segmented 
using other software (e.g., LIGASE - http://brainvis.wustl.edu/LIGASE).

I thought Fischl et al. were in the process of adding infant segmentation to 
Freesurfer.  Don't know where that stands, but well worth looking into.

Donna


On Mar 24, 2016, at 1:03 AM, paspri...@gmail.com wrote:

> Hello, Experts!
> 
> Is there any possibility to flatten newborn cortex (inverted contrast on 3D T1) in 
> Caret? 
> If not, please advise some toolkit for this.
> Thank you very much! 
> 
> Best,
> Pavel.
> ___
> caret-users mailing list
> caret-users@brainvis.wustl.edu
> http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Macaque Paxinos borders on flat map

2016-02-19 Thread Donna Dierker
Hi Julia,

No one has answered your question yet, because it doesn't have a simple answer. 
It depends on what you are trying to do, and it might make more sense to work 
in Workbench rather than Caret.

If you have a label.gii, for example, you can view that in either workbench or 
caret.  Workbench has this feature:

http://brainvis.wustl.edu/pipermail/caret-users/2016-February/006367.html

Then if you need something in caret, there is wb_command  
-border-file-export-to-caret5 to convert the border format to something caret 
can understand.
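
A hedged sketch of that conversion step (the argument list here is an assumption -- border file in, a name for the caret5-format output -- so check the operation's built-in help, e.g. wb_command -all-commands-help, for the exact usage):

  wb_command -border-file-export-to-caret5 my_borders.border my_caret5_borders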

But this monkey atlas stuff is in a state of perpetual flux/improvement, hence 
the need for several questions.  David Van Essen is the best person to ask 
them, and if he is too busy to respond within a few days, ping the list again.

Donna


On Feb 12, 2016, at 4:29 PM, Julia Sliwa  wrote:

> Hi,
> 
> As a follow up I found that displaying the PHT00 borders on the flat map is 
> available from the Tutorial files by changing the BorderColor. However the 
> borders and color patches appear mismatched and the borders have numerous 
> delineations,
> 
>   
> 
> The Paxinos PHT00 borders from Macaque_Atlas_c11 look much nicer. 
> 
> 
> 
> Would there be a way of displaying those on a flat map?
> 
> Many thanks!
> Julia
> 
> 
> 
> -
> Julia Sliwa, Ph.D.
> 
> Laboratory of Neural Systems
> The Rockefeller University
> 1230 York Ave, New York, NY 10065
> 
> 
> On Fri, Feb 12, 2016 at 2:54 PM, Julia Sliwa  wrote:
> Hi Caret-users,
> 
> 
> I am using Caret to display fMRI activations on flat maps registered to the 
> F99 atlas. I make a lot of use of the Lewis and Van Essen flat maps with 
> borders, and would now like to display the flat maps with the Paxinos-PHT00 
> borders. Does anyone know how to display the borders of this atlas on flat 
> maps?
> 
> I can paint my fMRI activations along with PaxinosPHT00 borders on very 
> inflated brain using those files, but not flat brain:
> http://sumsdb.wustl.edu/sums/directory.do?id=8286148_name=MACAQUE_ATLAS_CC11
> 
> I can also display my fMRI activation and cover them with the Paxinos-PHT00 
> areas as color patches using the following tutorial, but cannot display the 
> corresponding Paxinos borders:
> http://sumsdb.wustl.edu/sums/archivelist.do?archive_id=6595030_name=CARET_TUTORIAL_SEPT06.zip
> 
> Is there an easy way from here to display my activations on F99 flat with 
> Paxinos borders?
> 
> Thanks for your help
> Best
> Julia
> 
> 
> 
> 
> 
> 
> ___
> caret-users mailing list
> caret-users@brainvis.wustl.edu
> http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Successor to caret

2016-01-19 Thread Donna Dierker
Hi Dr. Aquino,

No, Connectome Workbench does not offer features like segmentation or 
registration (though it does have wb_command features for resampling and 
related functions, e.g., -surface-project-unproject).  Our lab typically uses 
software developed elsewhere for segmentation (e.g., Freesurfer) and registration 
(e.g., MSM).  But many people still use Caret for segmentation of primates and 
other species.

One thing I do is print out the full help for both command line utilities and 
grep it for relevant keywords.  I can usually quickly determine whether 
wb_command does what I need, or if I have to go back to caret_command for it 
(or the GUI).
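
For example (a minimal sketch; the caret_command help switch name is an assumption, so run caret_command with no arguments to see how it lists its operations):

  wb_command -all-commands-help > wb_help.txt
  caret_command -help-full > caret_help.txt
  grep -in 'sulcal depth' wb_help.txt caret_help.txt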

Donna


On Jan 19, 2016, at 9:53 AM, Kevin Aquino  wrote:

> Hi all,
> 
> I was going through the email lists and I saw that connectome-wb is caret's 
> successor. Does this program offer the same tools that caret does? i.e. 
> segmentation, surface building etc?
> 
> 
> Cheers,
> 
> 
> Dr Kevin Aquino
> Research fellow,
> Sir Peter Mansfield Magnetic Resonance Center, The University of Nottingham. 
> 
> Honorary Research Fellow
> School of Physics, Faculty of Science, University of Sydney
> 
> E kevin.aqu...@nottingham.ac.uk, aqu...@physics.usyd.edu.au | W 
> www.physics.usyd.edu.au/~aquino/
> 
> --
> 
> The brain is a wonderful organ. It starts working the moment you get up and 
> does not stop until you get into the office.
> -
> Robert Frost
> 
> CRICOS 00026A
> This email plus any attachments to it are confidential. Any unauthorised use 
> is strictly prohibited. If you receive this email in error, please delete it 
> and any attachments.
> 
> Please think of our environment and only print this e-mail if necessary.
> 
> ___
> caret-users mailing list
> caret-users@brainvis.wustl.edu
> http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Generating cortical hull surface

2016-01-14 Thread Donna Dierker
Hi Rosita,

It can be so frustrating.  Sounds like you have some good alternatives to 
explore.

This link details one user's alignment issues and the solution:

http://brainmap.wustl.edu/pub/donna/IN/JOHN/align.html
login pub
password download


Donna


On Jan 13, 2016, at 10:09 PM, Rosita Shishegar 
<r.shishe...@student.unimelb.edu.au> wrote:

> Hi Donna,
> 
> Thanks for the explanation.
> 
> Unfortunately, as a result of preprocessing steps my surfaces are 
> transformed, resized, and they are not even aligned with the original volumes 
> anymore and I prefer not to bother with registration with original data.
> 
> The final goal was to compute sulcal depth. However, computing the 
> segmentation volume  from the surface would serve the purpose for me because 
> I have written the codes for the rest of the steps. As you suggested, I would 
> try to see what else may work for my sheep brains. 
> 
> Thanks again for all the help.
> 
> Cheers,
> Rosita
> 
> On Tue, Jan 12, 2016 at 3:41 AM, Donna Dierker <do...@brainvis.wustl.edu> 
> wrote:
> Hi Rosita,
> 
> None of the parameters in these commands makes sense for sheep:
> 
> > caret_command -volume-create 176 208 176 hull_vol.nii
> volume dimensions
> > caret_command -volume-set-origin hull_vol.nii hull_vol.nii -88.0 -123.0 
> > -75.0
> volume origin
> > caret_command -volume-set-spacing hull_vol.nii hull_vol.nii 1 1 1
> voxel dimensions
> 
> How did you get that surface?  Typically, you start off with a T1 or T2 
> volume and segment it using some software.  Then the binary segmentation 
> volume is tessellated into a surface.  You already had the surface, so did a 
> volume give rise to that?  If so, its dimensions, origin, and voxdims give 
> clues to what these should be.  In particular, if there is some volume you 
> want this surface to align with, that might be a good one to use for these 
> parameters.
> 
> > caret_command -surface-to-segmentation-volume sheep_90_rh.pial.surf.coord
> > sheep_90_rh.pial.surf.topo hull_vol.nii
> 
> The above command generates a ribbon of voxels along the surface, and I'm not 
> sure exactly how thick.  You can specify inner/outer thickness in the Caret 
> GUI (Surface: Region of Interest: Select all nodes: Create Volume Region of 
> Interest).  This is more or less what the above caret_command does, but it 
> uses a default inner/outer thickness that might be too thick for your sheep.
> 
> Again, not sure what the end game is here (e.g., do you want sulcal depth? 
> hull? registration?).  I fear there will be other showstopper steps down the 
> road that are more sheep-proof. ;-)
> 
> (Sorry, Tim, I'm having trouble thinking of bd puns here.)
> 
> Donna
> 
> 
> On Jan 10, 2016, at 9:31 PM, Rosita Shishegar 
> <r.shishe...@student.unimelb.edu.au> wrote:
> 
> > Hi Donna,
> >
> > Thanks a lot for the help.
> >
> > I ran the commands as you suggested and now I can compute the volume and 
> > the hull. My understanding is that the inputs that you chose for creating 
> > volume (-volume-create 176 208 176  ) and the origin (-volume-set-origin 
> > hull_vol.nii hull_vol.nii -88.0 -123.0 -75.0) were the reason I was getting 
> > error. Should I use these numbers for any data?
> >
> > As you mentioned, since my dataset includes sheep brains, computed cerebral 
> > hull may not be as convex as it should be.  However, if I can compute an 
> > accurate segmentation volume with Caret, I can compute the hull with more 
> > adjusted tuning parameters in Matlab. 
> >
> > The problem is that the brains are smaller than a human brain, and I think 
> > volume spacing 1 1 1 is quite big for them. I tried to use a smaller volume 
> > spacing (e.g. caret_command -volume-set-spacing hull_vol.nii hull_vol.nii 
> > 0.2 0.2 0.2) but it is giving me errors. Do you think there would be any 
> > way to use volume spacing smaller than 1 for this caret command?
> >
> > Cheers,
> > Rosita
> >
> >
> >
> > On Sat, Jan 9, 2016 at 10:16 AM, <do...@brainvis.wustl.edu> wrote:
> > Hi Rosita,
> >
> > This is a sheep brain, so I don't know that the hull Caret generates will
> > be suitable for your purposes, because this was designed for human brains.
> >
> > Still, I tried these steps after converting to caret coord/topo:
> >
> > caret_command -volume-create 176 208 176 hull_vol.nii
> > caret_command -volume-set-origin hull_vol.nii hull_vol.nii -88.0 -123.0 
> > -75.0
> > caret_command -volume-set-spacing hull_vol.nii hull_vol.nii 1 1 1
> > caret_command -surface-to-segmentation-volume sheep_90_rh.

Re: [caret-users] Generating cortical hull surface

2016-01-11 Thread Donna Dierker
Hi Rosita,

None of the parameters in these commands makes sense for sheep:

> caret_command -volume-create 176 208 176 hull_vol.nii
volume dimensions
> caret_command -volume-set-origin hull_vol.nii hull_vol.nii -88.0 -123.0 -75.0
volume origin
> caret_command -volume-set-spacing hull_vol.nii hull_vol.nii 1 1 1
voxel dimensions

How did you get that surface?  Typically, you start off with a T1 or T2 volume 
and segment it using some software.  Then the binary segmentation volume is 
tessellated into a surface.  You already had the surface, so did a volume give 
rise to that?  If so, its dimensions, origin, and voxdims give clues to what 
these should be.  In particular, if there is some volume you want this surface 
to align with, that might be a good one to use for these parameters.
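
For example, if there is a reference volume the surface should align with (filename hypothetical here), read its dimensions, origin, and voxel size first (e.g., with AFNI's 3dinfo or FSL's fslinfo) and substitute those values for the human-sized defaults used above:

  3dinfo sheep_90_T1.nii
  caret_command -volume-create XDIM YDIM ZDIM hull_vol.nii
  caret_command -volume-set-origin hull_vol.nii hull_vol.nii XMIN YMIN ZMIN
  caret_command -volume-set-spacing hull_vol.nii hull_vol.nii XVOX YVOX ZVOX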

> caret_command -surface-to-segmentation-volume sheep_90_rh.pial.surf.coord
> sheep_90_rh.pial.surf.topo hull_vol.nii

The above command generates a ribbon of voxels along the surface, and I'm not 
sure exactly how thick.  You can specify inner/outer thickness in the Caret GUI 
(Surface: Region of Interest: Select all nodes: Create Volume Region of 
Interest).  This is more or less what the above caret_command does, but it uses 
a default inner/outer thickness that might be too thick for your sheep.

Again, not sure what the end game is here (e.g., do you want sulcal depth? 
hull? registration?).  I fear there will be other showstopper steps down the 
road that are more sheep-proof. ;-)

(Sorry, Tim, I'm having trouble thinking of bd puns here.)

Donna


On Jan 10, 2016, at 9:31 PM, Rosita Shishegar 
<r.shishe...@student.unimelb.edu.au> wrote:

> Hi Donna,
> 
> Thanks a lot for the help.
> 
> I ran the commands as you suggested and now I can compute the volume and the 
> hull. My understanding is that the inputs that you chose for creating volume 
> (-volume-create 176 208 176  ) and the origin (-volume-set-origin 
> hull_vol.nii hull_vol.nii -88.0 -123.0 -75.0) were the reason I was getting 
> error. Should I use these numbers for any data?
> 
> As you mentioned, since my dataset includes sheep brains, computed cerebral 
> hull may not be as convex as it should be.  However, if I can compute an 
> accurate segmentation volume with Caret, I can compute the hull with more 
> adjusted tuning parameters in Matlab. 
> 
> The problem is that the brains are smaller than a human brain, and I think 
> volume spacing 1 1 1 is quite big for them. I tried to use a smaller volume 
> spacing (e.g. caret_command -volume-set-spacing hull_vol.nii hull_vol.nii 0.2 
> 0.2 0.2) but it is giving me errors. Do you think there would be any way to 
> use volume spacing smaller than 1 for this caret command? 
> 
> Cheers,
> Rosita
> 
> 
> 
> On Sat, Jan 9, 2016 at 10:16 AM, <do...@brainvis.wustl.edu> wrote:
> Hi Rosita,
> 
> This is a sheep brain, so I don't know that the hull Caret generates will
> be suitable for your purposes, because this was designed for human brains.
> 
> Still, I tried these steps after converting to caret coord/topo:
> 
> caret_command -volume-create 176 208 176 hull_vol.nii
> caret_command -volume-set-origin hull_vol.nii hull_vol.nii -88.0 -123.0 -75.0
> caret_command -volume-set-spacing hull_vol.nii hull_vol.nii 1 1 1
> caret_command -surface-to-segmentation-volume sheep_90_rh.pial.surf.coord
> sheep_90_rh.pial.surf.topo hull_vol.nii
> caret_command -surface-identify-sulci Other.sheep.R.1002.spec right
> hull_vol.nii sheep_90_rh.pial.surf.topo sheep_90_rh.pial.surf.coord
> sheep_90_rh.pial.surf.coord NIFTI_GZIP
> 
> And I got a cerebral hull volume, but it probably isn't as convex as you'd
> like.  How are you using this?
> 
> Donna
> 
> 
> > Thanks for offering help Donna!
> >
> > I uploaded the surface *.stl and its converted version to *.pial that I
> > used as the input for Caret.
> >
> > The folder also include a text file containing commands that I used. For
> > some reason I can not run caret commands in my terminal so I use caret
> > command executor. So, I wrote the name of the used command and their
> > inputs
> > as well as the complete command.
> >
> > Thanks again!
> >
> > Cheers,
> > Rosita
> >
> >
> > On Wed, Jan 6, 2016 at 4:01 AM, Donna Dierker <do...@brainvis.wustl.edu>
> > wrote:
> >
> >> Upload a zip file containing the script (or just email commands used)
> >> and
> >> the inputs (surface or segmentation) here:
> >>
> >> http://brainvis.wustl.edu/cgi-bin/upload.cgi
> >>
> >> I'll have a look.
> >>
> >>
> >> On Jan 5, 2016, at 12:31 AM, Rosita Shishegar <
> >> r.shishe...@student.unimelb.edu.au> wr

Re: [caret-users] Generating cortical hull surface

2016-01-05 Thread Donna Dierker
Upload a zip file containing the script (or just email commands used) and the 
inputs (surface or segmentation) here:

http://brainvis.wustl.edu/cgi-bin/upload.cgi

I'll have a look.


On Jan 5, 2016, at 12:31 AM, Rosita Shishegar 
<r.shishe...@student.unimelb.edu.au> wrote:

> Hi Donna and all caret users,
> 
> Happy new year!
> 
> Before, I asked about computing sulcal depth from the cortical surface. Donna 
> suggested that I use gen_depth.sh file as a guide.
> 
> Using the commands, first I tried to create the volume from the surface which 
> later I needed for computing hull and the sulcal depth. Later, I ran caret 
> command SURFACE IDENTIFY SULCI using the previous step output but I get an 
> error saying cerebral hull VTK file has no points. 
> 
> I think I do not create segmentation volume properly because when visualizing 
> the volume (output_volume.nii file created using SURFACE TO SEGMENTATION 
> VOLUME), it includes just a few voxels in a row.
> 
> Any help would be greatly appreciated.
> 
> Thanks,
> Rosita
> 
> 
> On Thu, Sep 3, 2015 at 12:36 PM, Donna Dierker <donna.dier...@sbcglobal.net> 
> wrote:
> Hi Rosita,
> 
> This zip archive has a script named gen_depth.sh:
> 
> brainmap.wustl.edu/pub/donna/US/UCDAVIS/cwn.zip
> login pub
> password download
> 
> You'll need to adapt it to your data, but it will give you a head start.
> 
> Donna
> 
> 
> On Sep 2, 2015, at 7:01 PM, Rosita Shishegar 
> <r.shishe...@student.unimelb.edu.au> wrote:
> 
> > Hi,
> >
> >
> > I am a new Caret user and I am working on animal brain data, which are 
> > manually segmented; a triangulated mesh of the cortical surface has already 
> > been extracted and pre-processed.
> >
> >
> >
> > I want to measure Sulcal Depth of these brains and so I need to generate 
> > Cerebral Hull surface of them. Is there any way to compute the Cerebral 
> > Hull using Caret from cortical surface rather than the volume?
> >
> >
> > Thanks,
> >
> > Rosita
> >
> >
> >
> >
> > ___
> > caret-users mailing list
> > caret-users@brainvis.wustl.edu
> > http://brainvis.wustl.edu/mailman/listinfo/caret-users
> 
> 
> ___
> caret-users mailing list
> caret-users@brainvis.wustl.edu
> http://brainvis.wustl.edu/mailman/listinfo/caret-users
> 
> 
> 
> -- 
> Rosita Shishegar 
> PhD Candidate
> Neuroimaging Group
> University of Melbourne
> Parkville, VIC 3010
> ___
> caret-users mailing list
> caret-users@brainvis.wustl.edu
> http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] LEFT HEM in FS-to-F99 Tutorial

2015-12-23 Thread Donna Dierker
Hi Julia,

I confess I don't know these scripts inside and out, but this image is 
enlightening:

http://brainvis.wustl.edu/pipermail/caret-users/attachments/20151222/180ba486/attachment-0001.png

It looks like there is a hole in the surface at the occipital pole -- almost 
like perhaps Caret cut out what it thought was the medial wall, but it was 
really occipital cortex.  But that is not the only issue with the surface.  
There are other holes.

Have you looked at the input surface in Freesurfer?  Are there holes in it?  Is 
it in a non-standard orientation (x increases from left to right, y increases 
from posterior to anterior, z increases from inferior to superior)?
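
For example, you could load the volume and the input surface together in FreeSurfer's viewer and check both (subject path from the script output you pasted; the standard FreeSurfer layout and which lh surface the script actually converts are assumptions, so substitute the one you feed it):

  freeview -v /Freiwald/jsliwa/cooked/anatomicals/120810MILO/mri/orig.mgz \
           -f /Freiwald/jsliwa/cooked/anatomicals/120810MILO/surf/lh.white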

Donna


On Dec 22, 2015, at 7:15 PM, Julia Sliwa  wrote:

> Dear caret users,
> 
> I am following the 'FS-to-F99 tutorial' and managed to process the right 
> hemisphere so that it appears like Figure 10, and Left hemisphere until 3.4.
> 
> I am now facing a problem at 3.6 where the superimposed individual and atlas 
> coordinates appear in completely different positions. The script outcome 
> seems ok and ends without error (see below).
> 
> If I next run Stage-3.FS-toF99.sh the atlas and deformed individual landmarks 
> appear completely distorted and I obtain the following outcome from the 
> script (see below).
> 
> Any help would be greatly appreciated
> 
> Thank you and have a nice day
> Julia
> 
> 
> 
> bash-3.2$ 
> '/Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99/Stage-2B.FS-to-F99.sh' 
> + . Params.FS-to-F99.txt
> ++ set -x
> ++ TUTORIAL=no
> ++ CASE=120810MILO
> ++ HEMISPHERE=left
> ++ VOXDIM=0.5
> ++ ATLAS_DIR=/Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99/F99_TARGET
> ++ CARET_DIR=/Freiwald/lab_files/opt/caret/
> ++ SPECIES=Macaque
> ++ NUMBER_NODES=78317
> + cp Params.FS-to-F99.txt 
> /Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99/F99_TARGET/
> + export SUBJECTS_DIR=/Freiwald/jsliwa/cooked/anatomicals/
> + SUBJECTS_DIR=/Freiwald/jsliwa/cooked/anatomicals/
> ++ cut -c1
> ++ echo left
> ++ tr '[:lower:]' '[:upper:]'
> + HEM_FLAG=L
> ++ echo left
> ++ cut -c1
> + FS_HEM_PREFIX=lh
> + FS_HEM_PREFIX=/Freiwald/jsliwa/cooked/anatomicals//120810MILO/surf/lh
> + '[' L = L ']'
> + MATRIX='-1.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0'
> + caret_command -surface-apply-transformation-matrix 
> /Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99/F99_TARGET/Macaque.ATLAS.sphere.6.74k.coord
>  
> /Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99/F99_TARGET/Macaque.sphere_6.RIGHT_HEM.74k.topo
>  
> /Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99/F99_TARGET/Macaque.sphere_6.RIGHT_HEM.X-Flip.74k.coord
>  -matrix -1.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0
> + caret_command -surface-border-unprojection 
> /Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99/F99_TARGET/Macaque.sphere_6.RIGHT_HEM.X-Flip.74k.coord
>  
> /Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99/F99_TARGET/Macaque.sphere_6.RIGHT_HEM.74k.topo
>  
> /Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99/F99_TARGET/Macaque.F99.R.LANDMARKS.Reg-with-120810MILO.L.74k_f99.borderproj
>  Macaque.F99.LANDMARKS-for-120810MILO.L.SPHERE.X-Flip.border
> + caret_command -surface-border-projection 
> 120810MILO.L.SPHERICAL_STD.78317.coord 120810MILO.L.CLOSED.78317.topo 
> Macaque.F99.LANDMARKS-for-120810MILO.L.SPHERE.X-Flip.border 
> Macaque.F99.LANDMARKS-for-120810MILO.L.78317.borderproj
> + '[' no = yes ']'
> + sed /SPHERICALcoord_file/d 120810MILO.L.Stage-2.spec
> + mv 120810MILO.L.Stage-2.spec.rev 120810MILO.L.Stage-2.spec
> + echo SPHERICALcoord_file 120810MILO.L.SPHERICAL_STD.78317.coord
> + echo borderproj_file Macaque.F99.LANDMARKS-for-120810MILO.L.78317.borderproj
> + caret_command -scene-create 120810MILO.L.Stage-2.spec 
> 120810MILO.L.Stage-2.scene 120810MILO.L.Stage-2.scene '3. Compare 
> 120810MILO.L with F99 landmarks' -surface-overlay UNDERLAY SURFACE_SHAPE 
> 'Folding (120810MILO.L) Smooth_MW' -1 -show-borders -window-surface-types 
> WINDOW_MAIN 520 500 SPHERICAL CLOSED LATERAL -window-surface-types WINDOW_2 
> 520 500 SPHERICAL CLOSED MEDIAL
> 
> 
> 
> 
> + . Params.FS-to-F99.txt
> ++ set -x
> ++ TUTORIAL=no
> ++ CASE=120810MILO
> ++ HEMISPHERE=left
> ++ VOXDIM=0.5
> ++ ATLAS_DIR=/Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99/F99_TARGET
> ++ CARET_DIR=/Freiwald/lab_files/opt/caret/
> ++ SPECIES=Macaque
> ++ NUMBER_NODES=78317
> + cp Params.FS-to-F99.txt 
> /Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99/F99_TARGET/
> + export SUBJECTS_DIR=/Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99
> + SUBJECTS_DIR=/Freiwald/jsliwa/cooked/anatomicals/FS_to_caret_F99
> + cat
> ++ echo left
> ++ cut -c1
> ++ tr '[:lower:]' '[:upper:]'
> + HEM_FLAG=L
> ++ echo left
> ++ cut -c1
> + FS_HEM_PREFIX=lh
> + FS_HEM_PREFIX=/Freiwald/jsliwa/cooked/anatomicals/120810MILO/surf/lh
> + INCLUDE_FUNCTIONAL_MAP=no
> + sed '
> s#SPECIES#Macaque#g
> s#HEMISPHERE#left#g
> s#CASE#120810MILO#g
> s#space 

Re: [caret-users] medial wall override via caret_command

2015-08-20 Thread Donna Dierker
It depends on how critical it is that $metric_filename have zeroes in the medial 
wall, rather than just look gray when you view/capture it.  For display/capture 
purposes, you can use the existing PALS paint file:

Human.PALS_B12.BOTH.COMPOSITE_Areas_Functional_WS.73730.paint

… which has these columns:

AVERAGE-MED-WALL B1-12 RIGHT
AVERAGE-MED-WALL B1-12 LEFT

Then adjust your D/C: Surface Overlay/Underlay settings as shown here:

http://brainmap.wustl.edu/pub/donna/US/CT/YALE/medial_wall_gray.png
login pub
password download

That paint file is included in this archive:

CARET_TUTORIAL_SEPT06.zip
http://sumsdb.wustl.edu/sums/archivelist.do?archive_id=6595030

If you are doing analysis with $metric_filename that needs the medial wall to 
be zeroed out, you can create an inverted mask like this:

caret_command -surface-region-of-interest-selection my.coord my.topo  
cortical_mask_outside_medial_wall.roi -paint  medial.wall.paint column-number  
MEDIAL.WALL NORMAL -invert-selection

But then what you do next depends on your analysis, and I suspect you care more 
about the display.


On Aug 20, 2015, at 8:27 AM, Yang, Daniel daniel.yj.y...@yale.edu wrote:

 Dear Donna,
 
 Thanks! I am trying to map the functional volume in MNI152 (SPM99) space
 to the surface space (PALS) as metric. It is like the following:
 
 caret_command -volume-map-to-surface-pals  \
  \
 $metric_filename \
 SPM99 \
 $hemisphere \
 METRIC_AVERAGE_NODES \
 $fMRI_vol_filename \
 -metric-afm
 
 
 Could you please provide an example to use “select” in caret_command to
 select the medial wall paint column as an overlay on top of the functional
 overlay?
 
 Many thanks!!!
 Daniel
 
 On 8/19/15, 6:38 PM, caret-users-boun...@brainvis.wustl.edu on behalf of
 Donna Dierker caret-users-boun...@brainvis.wustl.edu on behalf of
 do...@brainvis.wustl.edu wrote:
 
 Could you elaborate on how you are using caret_command?
 
 The Enable Medial Wall Override affects display, which is not applicable
 using caret_command, unless you are using -show-scenes, in which case you
 would select the medial wall paint column as an overlay on top of your
 functional overlay, in order to gray out the medial wall.  It gets more
 complicated if you want to exclude the medial wall from processing, but
 isn't too hard (search for select in the caret_command full help
 output).
 
 
 On Aug 19, 2015, at 3:13 PM, Yang, Daniel daniel.yj.y...@yale.edu
 wrote:
 
 Dear Caret Experts,
 
 In Paint Main, I can check “Enable Medial Wall Override” to gray out
 medial wall. Is it possible to do so via caret_command?
 
 Many thanks!
 Daniel
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 
 https://urldefense.proofpoint.com/v2/url?u=http-3A__brainvis.wustl.edu_ma
 ilman_listinfo_caret-2Dusersd=AwIF-gc=-dg2m7zWuuDZ0MUcV7Sdqwr=vhD8z919
 MORXy6GkKdTAw3V58rxzUZGOKpGXPDgqUHYm=h5-YV_ETDHGYBcqGmBeefwlj-rjDZnlPAYp
 WTcIsjmIs=K1vKNA8KK-hNQ2Za8ncWGM47HbBDDXZwfOaGXikwyZYe=
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 https://urldefense.proofpoint.com/v2/url?u=http-3A__brainvis.wustl.edu_mai
 lman_listinfo_caret-2Dusersd=AwIF-gc=-dg2m7zWuuDZ0MUcV7Sdqwr=vhD8z919MO
 RXy6GkKdTAw3V58rxzUZGOKpGXPDgqUHYm=h5-YV_ETDHGYBcqGmBeefwlj-rjDZnlPAYpWTc
 IsjmIs=K1vKNA8KK-hNQ2Za8ncWGM47HbBDDXZwfOaGXikwyZYe= 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] doubt regarding fiducial mapping

2015-08-19 Thread Donna Dierker
If you can find your *.params file, then don't worry about the find command, 
which was intended to help you locate it if you did not know its name/location 
already.

I looked at your dataset, and you must do two things to help me help you:

* Remove the spaces from the filenames.  Replace them with _ characters.  When 
I try to read, move, or otherwise manipulate these files, the spaces are 
misinterpreted by the Linux shell as separate arguments.

* Add the *.params file.

After you've made those changes, rename the directory john_renamed.  Then zip 
it and upload it.  I'll do my best to solve it.


On Aug 19, 2015, at 1:23 AM, j...@nbrc.ac.in wrote:

 Hello Donna,
 
 1. I've tried taking the XYZ min values from the .PARAMS file and transformed
 the overlay. This appears very subjective and error prone.
 
 2. Do this in your subject directory:
 
 find /my/subject/dir -name *.params |sort | xargs grep -i min
 
 
 I am not sure i understood this step properly
 
 
 I am unable to coregister the functional image and anatomical image properly.
 I am sorry to trouble you again, but it would be great if you could take a
 look at the dataset again, which I have uploaded in a folder "data from
 john.zip" at http://pulvinar.wustl.edu/cgi-bin/upload.cgi
 
 I now suspect that there is some issue within the procedure that we follow
 in doing the analysis. So it would be best if you could check/reanalyse the
 dataset from the very first step.
 
 PS:
 -anatomical image.hdr\img --- unaltered structural T1 image.
 -functional.hdr\img --- basic SPM8 T2* image which is to be mapped
 
 Thank you.
 
 
 
 
 On Aug 18, 2015, at 2:19 AM, j...@nbrc.ac.in wrote:
 
 Hello Donna,
 Thank you for your reply.
 Two doubts i have
 1. why even after loading metric as primary overlay it is not getting
 'selectable' here in functional view (see attachment capture)?
 
 The metric is a vertex-intensity mapping.  It is not the volume.  You can
 load the volume that was mapped using File: Open Data File: Volume
 Functional files.  Then it will be selectable when you map to loaded
 volume.  Or you can simply map to file on disk without loading.  But it is
 not a bad idea to load the volume, too, to make sure everything aligns
 properly:  Functional with surface is the important one, but the
 anatomical volume is the link between functional and surface (i.e., how
 they get aligned).
 
 2. what is the meaning of this error message (attachment 2), which
 appears
 on selecting the functional volumes?
 
 Again, the funky file naming of two of the volume files (e.g., space,
 parentheses, leading dashes) impedes my ability to check them quickly.
 But the whole brain anatomical does appear to be a NIfTI volume, rather
 than just an Analyze .hdr file.  I loaded it as a functional volume, and
 then tried to map it to your surface.  I got the same result as trying to
 map it from disk (clicking OK on the stickup you captured).
 
 That warning never got removed after support for nifti .hdr/.img pairs was
 added, but based on my getting the same results using the two paths
 mentioned above, I think you will solve your problem when you solve the
 misalignment between your cropped volume and the whole brain anatomical
 volume.  Alternatively, shift the surface to meet the whole brain /
 functional volume.
 
 Ideally, get the following loaded and aligned in caret:
 
 * whole brain anatomy volume
 * functional volume overlay
 * surface (probably shifted version of what you have now)
 
 Do this in your subject directory:
 
 find /my/subject/dir -name *.params |sort | xargs grep -i min
 
 Capture it to a file if it's a lot.  One of those files has the offset you
 need.
 
 thank you.
 
 
 Hi John,
 
 Got your upload.  While I couldn't open the cropped volume in caret due
 to
 the way it was named, I was able to view the surface contour over the
 uncropped volume.  See attached capture, which shows an offset.
 
 If you have a SureFit/Caret .params file (not included in the zip), it
 might contain the [XYZ]min values from the cropping, which might be
 used
 to either adjust the functional volume's origin, or more likely apply
 an
 affine transform to the surface, to shift it back into alignment with
 the
 whole brain volume.  You don't have to blow away your existing coord;
 just
 rename the shifted version to indicate the offset.  This can be done
 via
 command line or using the Caret: Window: Transformation Matrix editor.
 The polarity of the shift (+ or -) depends on whether you're shifting
 the
 volume or surface, and I always get confused about it.  Usually I try
 it
 one way; look at the result like the capture below; and if it looks
 worse,
 I try it the other way. ;-)  One of the ways usually does the trick.
 
 Donna
 
 
 
 On Aug 12, 2015, at 9:14 AM, Donna Dierker do...@brainvis.wustl.edu
 wrote:
 
 Could you upload your dataset in a zip archive here:
 
 
 http://pulvinar.wustl.edu/cgi-bin/upload.cgi
 
 Specifically I need:
 
 * functional volume being mapped

Re: [caret-users] doubt regarding fiducial mapping

2015-08-11 Thread Donna Dierker
What software was used to reconstruct the surface?

With freesurfer, there is an offset between the orig.mgz and the surface.  And 
depending on many factors, you might have to flip/rotate the surface to be in 
the same orientation as the volume (or bring the volume to the surface).

See this thread:

http://www.mail-archive.com/caret-users%40brainvis.wustl.edu/msg02081.html

Also see the Check Alignment between Normalized Volume and Surface section 
here:

http://brainvis.wustl.edu/help/pals_volume_normalization/spm5_normalization_pals.html

Examining the surface contour overlaid on the volume in the All volume view is
often very enlightening.
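
If it turns out you need to shift or flip the surface, caret_command -surface-apply-transformation-matrix can do it from the command line. A hedged sketch of a pure translation (coord/topo names hypothetical, offsets in mm; whether the translation terms belong in the last column or the last row of the 16 values is worth confirming on a copy of the coord file first):

  caret_command -surface-apply-transformation-matrix my.fiducial.coord my.closed.topo \
      my.fiducial.shifted.coord -matrix 1 0 0 XOFF 0 1 0 YOFF 0 0 1 ZOFF 0 0 0 1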


On Aug 11, 2015, at 4:42 AM, j...@nbrc.ac.in wrote:

 
 yes, ours is an individual's surface reconstruction, and so we checked the
 registration in SPM8 (using the anatomical image used for reconstruction and
 the functional image volumes used for mapping), where the volumes are
 coregistered properly, but there is an anomaly in caret.
 
 thank you
 
 
 
 This almost always is because the functional volume is not
 stereotactically registered to the anatomical volume used to generate the
 fiducial surface. Is this an atlas surface (e.g., one of the PALS mean
 midthickness surfaces), or is it an individual's surface reconstruction?
 If atlas, this could happen if you were trying to map SPM functional data
 to the AFNI mid thickness surface, for example, because there are
 noticeable differences between those stereotaxic spaces.
 
 If individual, make sure the functional volume is in register with the
 anatomical volume used to generate the surface.
 
 
 On Aug 10, 2015, at 1:40 AM, j...@nbrc.ac.in wrote:
 
 Hi,
  when i map the functional data onto caret fiducial surface, it appears
 that the mapping is shifted in rostrocaudal axis, caudally, by about 2
 -3 mm. anyone has idea what could be possibly wrong here?
 thanks,
 john
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] statistical report on areal enlargement

2015-06-29 Thread Donna Dierker
Hi Leonardo,

It is possible that one more skilled than I could get Statistical Paint Report 
to do what you want with less trouble.  But for me, scripting it via the 
command line would be more straightforward.  Have a look at two scripts here:

http://brainmap.wustl.edu/pub/donna/WUSTL/BURTON/SCRIPTS/anova_two_way.sh
http://brainmap.wustl.edu/pub/donna/WUSTL/BURTON/SCRIPTS/metric_region_mean.sh
login pub
password download

The anova_two_way.sh one has this bit:

  caret_command -surface-region-of-interest-selection $COORD $TOPO $ROI $ROI 
-paint $PAINT 1 $PAINT_NAME ANDNOT

And the metric_region_mean.sh has lines that might be handy.

See if you can frankenstein them together to do what you need, because I 
suspect you will get tired of selecting each paint name in the Surface: Region 
of Interest: select nodes with paint dialog.
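
A minimal loop sketch along those lines (COORD, TOPO, PAINT, and METRIC stand for your files, the paint names are just examples, and the paint column is assumed to be 1; lift the actual region-mean lines from metric_region_mean.sh):

  for PAINT_NAME in V1 V2 MT; do
    ROI=${PAINT_NAME}.roi
    caret_command -surface-region-of-interest-selection $COORD $TOPO $ROI $ROI \
        -paint $PAINT 1 $PAINT_NAME NORMAL
    # ...then compute the mean of $METRIC over $ROI as metric_region_mean.sh does
  done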

Donna


On Jun 29, 2015, at 10:28 AM, Leonardo Cerliani leonardo.cerli...@gmail.com 
wrote:

 dear Donna,
 
 I am trying to do something apparently simple, but cannot manage, so I would 
 like to ask your help.
 
 I have loaded the areal expansion overlay onto the caret (contained in the 
 CARET_TUTORIAL_SEPT06/COMPARE_MACAQUE_HUMAN directory) and I would like to 
 have an estimate of the areal expansion for each area in the (registered) 
 LVE00 paint. Basically, an average areal expansion value over each LVE00 
 region. I tried selecting all the nodes in the paint, and to generate a 
 statistical report, but I get only the average areal expansion value over all 
 the cortex. Is it really not possible to do that?
 
 thank you in advance for your help,
 leonardo 
 
 -- 
 Leonardo Cerliani, PhD
 
 Institut du Cerveau et de la Moelle épinière (ICM) 
 47 Boulevard de l'Hôpital, UMRS 975 
 Paris, France
   ___
  {o,o}
  |)__)
 -
 http://openlibrary.org/people/leonardocerliani/lists
 
 A brain disconnected from the heart is an airplane without wings
 lc
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


[caret-users] New Paid Fellowships at the Mozilla Science Lab

2015-06-24 Thread Donna Dierker

 From: Bill Mills b...@mozillafoundation.org
 Date: Tuesday, June 23, 2015 at 4:17 PM
 To: Software Carpentry Discussion disc...@lists.software-carpentry.org
 Subject: [Discuss] New Paid Fellowships at the Mozilla Science Lab
 
 Hi folks,
 
 We're very excited to announce that applications are now open for three paid 
 fellowship positions at the Mozilla Science Lab. The Fellows will have the 
 opportunity to pursue projects that advance and support open science at their 
 home institutions, with support and mentorship from the Science Lab and our 
 broader community. Details can be found:
 
 On the website: https://www.mozillascience.org/fellows
 and on the blog: 
 https://www.mozillascience.org/advancing-open-data-practice-within-institutional-walls
 
 We're very keen to hear how you want to champion openness in science and 
 research - applications close August 14, so please apply today!
 
 -- 
 Best Regards,
 Bill Mills
 Community Manager
 Mozilla Science Lab

___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] negative values displayed positive

2015-06-22 Thread Donna Dierker
Hi FR,

This is rather an odd file.  I did 3dhistog on it and got this output:

http://brainmap.wustl.edu/pub/donna/SPAIN/BCBL/your_file.3dhistog.txt
login pub
password download

Mostly zeroes, but 14184 voxels set to 32766 (and no voxels set to any values 
in between 0 and 32766).

And when I map this to the PALS atlas, I get nothing on most vertices, but 
solid yellow over a perisylvian blob.  And when I click in the yellow, sure 
enough the ID window confirms the vertex is set to 32766 intensity.

Working as expected, but data a bit off the beaten path.

Donna


On Jun 21, 2015, at 4:55 PM, Frédéric Roux f.r...@bcbl.eu wrote:

 Hi Donna,
 
 thanks for getting back on this.
 I am using NIFTI, so nothing rare from that side.
 
 Just uploaded the file via the link you sent me.
 
 Thanks for looking into this.
 
 --
 FR
 
 - Original Message -
 From: Donna Dierker do...@brainvis.wustl.edu
 To: SureFit Caret, and SuMS software users caret-users@brainvis.wustl.edu
 Sent: Sunday, June 21, 2015 6:37:50 PM
 Subject: Re: [caret-users] negative values displayed positive
 
 I'm not sure what would do this.  What format is your volume?  NIFTI?  Older 
 .hdr/.img pairs sometimes got x-flipped upon opening under some 
 circumstances, but nothing I know of (except explicit *(-1) using volume 
 math) does this.
 
 You can use something AFNI 3dinfo or 3dhistog to double-check your volume's 
 values, or you can upload your volume here and I can do so tomorrow:
 
 http://brainvis.wustl.edu/cgi-bin/upload.cgi
 
 You can also have a look at the palette format, and just make sure it's not 
 the palette that is making your values appear to be flipped:
 
 http://brainvis.wustl.edu/CaretHelpAccount/caret5_help/file_formats/file_formats.html#paletteFile
 
 Caret has some built-in palettes (see D/C: Metric Settings menu).
 
 
 On Jun 21, 2015, at 3:55 AM, Frédéric Roux f.r...@bcbl.eu wrote:
 
 Dear all,
 
 I'm trying to map a functional volume to the PALS-B12
 surface but the sign of the values gets flipped so that
 negative values appear as positive ones.
 
 Does anyone know what could be wrong?
 
 Many thanks.
 
 Fred
 
 --
 FR
 
 - Original Message -
 From: Matt Glasser m...@ma-tea.com
 To: SureFit Caret, and SuMS software users caret-users@brainvis.wustl.edu
 Sent: Saturday, June 20, 2015 11:37:53 PM
 Subject: Re: [caret-users] Missing parts in MyelinMap+Different number of 
 nodes!
 
 Yes please use the HCP pipelines for myelin mapping.
 
 Peace,
 
 Matt.
 
 On 6/19/15, 4:17 AM, Donna Dierker do...@brainvis.wustl.edu wrote:
 
 Hi Xara,
 
 Because you are in the enviable position of having high res T1 AND T2,
 you may be able to use the Human Connectome Project pipelines on your
 data, rather than this (older generation) caret-based myelin mapping
 script:
 
 http://www.humanconnectome.org/documentation/HCP-pipelines/
 
 This also has the advantage of writing output files in workbench formats,
 rather than older caret formats.  I have a hunch that Matt Glasser will
 point you in that direction (he is probably at OHBM conference or on his
 way home).
 
 I confess I am not as familiar with this script as Matt is, but from what
 I read at the link you sent, it appears the maps are on the native mesh,
 rather than 164k.  And I don't see myelin represented in the list of
 freesurfer_to_fs_LR output files here:
 
 http://brainvis.wustl.edu/wiki/index.php/Caret:Operations/Freesurfer_to_fs
 _LR/Output
 
 (These were independent efforts going on at roughly the same time.)
 
 You may already have considered the HCP pipeline route, but if not, it is
 worth a look -- you've got the T2's at the right res, which is often a
 hitch.  I suspect myelin maps on 164k won't be your only benefit.
 
 Donna
 
 
 On Jun 19, 2015, at 8:58 AM, Shams, Z. z.sh...@donders.ru.nl wrote:
 
 Dear users,
 
 
 
 I just started working with Caret for creating myelin maps. I used HCP
 imaging protocols for data acquisition, both T1 and T2 data have the
 voxel size of 0.7mm isotropic.
 
 I followed the instructions in
 http://brainvis.wustl.edu/wiki/index.php/Caret:Operations/MyelinMapping.
 
 Actually I managed to display the four outputs, but some parts of the
 maps are missing when overlaying on a surface(they're not complete maps:
 either raw or corrected myelin maps).
 
 Another problem is that I used the freesurfer to fs LR script to
 convert all my freesurfer files to the 164k_fs_LR mesh,
 
 but when I try  to show the myelin maps on e.g. very-inflated 164k fs
 LR, It fails with the error of containing a different number of nodes
 than… .
 
 
 
 I'd appreciate any help and instructions.
 
 
 
 Thanks,
 
 
 Xara
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users

Re: [caret-users] negative values displayed positive

2015-06-21 Thread Donna Dierker
I'm not sure what would do this.  What format is your volume?  NIFTI?  Older 
.hdr/.img pairs sometimes got x-flipped upon opening under some circumstances, 
but nothing I know of (except explicit *(-1) using volume math) does this.

You can use something like AFNI's 3dinfo or 3dhistog to double-check your volume's 
values, or you can upload your volume here and I can do so tomorrow:

http://brainvis.wustl.edu/cgi-bin/upload.cgi
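
For the first option, with AFNI installed it is just (volume name hypothetical):

  3dinfo my_functional.nii
  3dhistog my_functional.nii > my_functional.3dhistog.txt

If the histogram shows negative bins, the volume is fine and the flip is on the display/palette side; if not, the values were already non-negative before Caret saw them.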

You can also have a look at the palette format, and just make sure it's not the 
palette that is making your values appear to be flipped:

http://brainvis.wustl.edu/CaretHelpAccount/caret5_help/file_formats/file_formats.html#paletteFile

Caret has some built-in palettes (see D/C: Metric Settings menu).


On Jun 21, 2015, at 3:55 AM, Frédéric Roux f.r...@bcbl.eu wrote:

 Dear all,
 
 I'm trying to map a functional volume to the PALS-B12
 surface but the sign of the values gets flipped so that
 negative values appear as positive ones.
 
 Does anyone know what could be wrong?
 
 Many thanks.
 
 Fred
 
 --
 FR
 
 - Original Message -
 From: Matt Glasser m...@ma-tea.com
 To: SureFit Caret, and SuMS software users caret-users@brainvis.wustl.edu
 Sent: Saturday, June 20, 2015 11:37:53 PM
 Subject: Re: [caret-users] Missing parts in MyelinMap+Different number of 
 nodes!
 
 Yes please use the HCP pipelines for myelin mapping.
 
 Peace,
 
 Matt.
 
 On 6/19/15, 4:17 AM, Donna Dierker do...@brainvis.wustl.edu wrote:
 
 Hi Xara,
 
 Because you are in the enviable position of having high res T1 AND T2,
 you may be able to use the Human Connectome Project pipelines on your
 data, rather than this (older generation) caret-based myelin mapping
 script:
 
 http://www.humanconnectome.org/documentation/HCP-pipelines/
 
 This also has the advantage of writing output files in workbench formats,
 rather than older caret formats.  I have a hunch that Matt Glasser will
 point you in that direction (he is probably at OHBM conference or on his
 way home).
 
 I confess I am not as familiar with this script as Matt is, but from what
 I read at the link you sent, it appears the maps are on the native mesh,
 rather than 164k.  And I don't see myelin represented in the list of
 freesurfer_to_fs_LR output files here:
 
 http://brainvis.wustl.edu/wiki/index.php/Caret:Operations/Freesurfer_to_fs
 _LR/Output
 
 (These were independent efforts going on at roughly the same time.)
 
 You may already have considered the HCP pipeline route, but if not, it is
 worth a look -- you've got the T2's at the right res, which is often a
 hitch.  I suspect myelin maps on 164k won't be your only benefit.
 
 Donna
 
 
 On Jun 19, 2015, at 8:58 AM, Shams, Z. z.sh...@donders.ru.nl wrote:
 
 Dear users,
 
 
 
 I just started working with Caret for creating myelin maps. I used HCP
 imaging protocols for data acquisition, both T1 and T2 data have the
 voxel size of 0.7mm isotropic.
 
 I followed the instructions in
 http://brainvis.wustl.edu/wiki/index.php/Caret:Operations/MyelinMapping.
 
 Actually I managed to display the four outputs, but some parts of the
 maps are missing when overlaying on a surface(they¹re not complete maps:
 either raw or corrected myelin maps).
 
 Another problem is that I used the freesurfer to fs LR script to
 convert all my freesurfer files to the 164k_fs_LR mesh,
 
 but when I try  to show the myelin maps on e.g. very-inflated 164k fs
 LR, It fails with the error of containing a different number of nodes
 than… .
 
 
 
 I'd appreciate any help and instructions.
 
 
 
 Thanks,
 
 
 Xara
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Missing parts in MyelinMap+Different number of nodes!

2015-06-19 Thread Donna Dierker
Hi Xara,

Because you are in the enviable position of having high res T1 AND T2, you may 
be able to use the Human Connectome Project pipelines on your data, rather than 
this (older generation) caret-based myelin mapping script:

http://www.humanconnectome.org/documentation/HCP-pipelines/

This also has the advantage of writing output files in workbench formats, 
rather than older caret formats.  I have a hunch that Matt Glasser will point 
you in that direction (he is probably at OHBM conference or on his way home).

I confess I am not as familiar with this script as Matt is, but from what I 
read at the link you sent, it appears the maps are on the native mesh, rather 
than 164k.  And I don't see myelin represented in the list of 
freesurfer_to_fs_LR output files here:

http://brainvis.wustl.edu/wiki/index.php/Caret:Operations/Freesurfer_to_fs_LR/Output

(These were independent efforts going on at roughly the same time.)

You may already have considered the HCP pipeline route, but if not, it is worth 
a look -- you've got the T2's at the right res, which is often a hitch.  I 
suspect myelin maps on 164k won't be your only benefit.

Donna


On Jun 19, 2015, at 8:58 AM, Shams, Z. z.sh...@donders.ru.nl wrote:

 Dear users,
 
  
 
 I just started working with Caret for creating myelin maps. I used HCP 
 imaging protocols for data acquisition, both T1 and T2 data have the voxel 
 size of 0.7mm isotropic.
 
 I followed the instructions in 
 http://brainvis.wustl.edu/wiki/index.php/Caret:Operations/MyelinMapping.
 
 Actually I managed to display the four outputs, but some parts of the maps 
 are missing when overlaying on a surface(they’re not complete maps: either 
 raw or corrected myelin maps).
 
 Another problem is that I used the freesurfer to fs LR script to convert all 
 my freesurfer files to the 164k_fs_LR mesh,
 
 but when I try  to show the myelin maps on e.g. very-inflated 164k fs LR, It 
 fails with the error of containing a different number of nodes than… .
 
  
 
 I’d appreciate any help and instructions.
 
  
 
 Thanks,
 
 
 Xara
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Colours and scaling

2015-06-11 Thread Donna Dierker
Hi Pavel,

Map your volume as a Paint ROI volume, rather than as a metric.  Then use 
Attributes: Area Color to assign colors.  Save not only the resulting paint 
file, but also the area color file.  The paint file has the index to name 
mapping, while the area color maps name to color.

If you're not mapping volume to surface, but just want to change how the volume 
is displayed, then make sure you open the volume as a volume paint file and 
then try Attributes: Area Color.  See if there are any unnamed colors.

If you save your volume in wustl nil/ifh format, the ifh file can encode the 
index-name mapping; see 
http://brainvis.wustl.edu/pipermail/caret-users/2014-August/006199.html for 
gory details.

Donna


On Jun 10, 2015, at 11:25 PM, paspri...@gmail.com wrote:

 Dear Experts,
 please, help!
 
 Is it possible in Caret to color the map in accordance with my own ranges of 
 nodes values?
 
 I have SPM volume with 0s, 1s and 2s only. I want 0s to be uncolored, 1s - 
 red, and 2s - blue. 
 
 Thank you,
 Pavel.
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] labelling of exported paint volumes

2015-05-22 Thread Donna Dierker
Hi Leonardo,

Given the nature of the data you are mapping, I would expect fragmentation.

This thread might help:

http://brainvis.wustl.edu/pipermail/caret-users/2014-August/006199.html

The hcp-users mailing list might yield more feedback on other strategies for 
accomplishing your goals.

Cheers,

Donna


On May 22, 2015, at 1:55 AM, Leonardo Cerliani leonardo.cerli...@gmail.com 
wrote:

 dear people,
 
 I would like to ask your help for a simple procedure that I couldn't figure 
 out from the tutorials.
 
 I have some data (tractography samples) that I registered to the F99 macaque 
 brain. I would like to use the provided atlases (Brodmann, Paxinos, 
 Bonin-Bailey and so on) to quantify the amount of samples in each atlas 
 region. I tried using the Surface-based and Volume-based Region of Interest 
 analysis but with no success. Instead the results appear to be fragmented for 
 each detected cluster.
 I then resorted to exporting the atlases in a volume, to do the 
 quantification myself in matlab, however I cannot figure out the relationship 
 between the value assigned to each region in the atlas and the name of that 
 region in the atlas. 
 I would greatly appreciate your help on this.
 
 thank you very much!
 all the best,
 
 leonardo 
 
 -- 
 Leonardo Cerliani, PhD
 
 Institut du Cerveau et de la Moelle épinière (ICM) 
 47 Boulevard de l'Hôpital, UMRS 975 
 Paris, France
   ___
  {o,o}
  |)__)
 -
 http://openlibrary.org/people/leonardocerliani/lists
 
 A brain disconnected from the heart is an airplane without wings
 lc
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] change color metric overlap

2015-02-06 Thread Donna Dierker
Using either Surface: Region of interest or caret_command 
(-surface-region-of-interest-selection and -paint-assign-to-nodes), you can 
threshold your metrics at some value and assign a paint/label/ROI to them.  
Your paint name might be visual or auditory, but it gets mapped to an 
integer when the vertex assignments are stored in the paint file.

Then you can use Attributes: Area Color to map paint names to colors.

(I tried using Attributes: Metric: Convert metric to RGB, but I couldn't get 
yellow out of the green and blue channels.  Lovely aqua shades, though.)


On Feb 6, 2015, at 4:31 AM, Johannes Heereman johannes.heere...@fu-berlin.de 
wrote:

 
 Hi,
 
 I want to display 2 parametric effects and their overlap/conjunction in a
 third color on a surface. I'm not happy with the easy solution
 (blue-red-purple). I wonder if there's a way to change it to
 blue-yellow-green? would be great!
 Thanks,
 Johannes
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] problem with mapping volume to surface

2015-01-29 Thread Donna Dierker
First, make sure the computer you're using doesn't have a non-English character 
set installed, because that often interferes with the Qt file I/O.

Second, make sure the spec file you're selecting has write permission.  If 
necessary, copy it to a different filename and make sure it is writeable.
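
A quick shell check of the second point (spec filename hypothetical):

  ls -l PALS_B12.RIGHT.DEMO.73730.spec
  cp PALS_B12.RIGHT.DEMO.73730.spec my_copy.spec && chmod u+w my_copy.spec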

See if the metric file is actually there, even if it didn't get added to the 
spec file.  If the metric file exists, try loading it using File: Open Data 
File: Metric File.


On Jan 28, 2015, at 5:35 PM, Frédéric Roux f.r...@bcbl.eu wrote:

 
 Dear all,
 
 a while ago I managed to map my functional data with caret
 onto the PALS-B12 template. Unfortunately, I cannot reproduce
 this anymore so I am hoping that somebody may be able to guide
 me to figure out what goes wrong.
 
 The steps that I take are:
 
 1) Attributes - Map volume(s) to surface. Here I select a .nii file which 
 contains
 a functional volume and the PALS-B12 spec-file from a tutorial data set that I
 downloaded.
 
 2) I run the voxel-enclosing algorithm and after the mapping is completed I 
 exit
 caret.
 
 3) I load the PALS-B12 spec-file from the tutorial data set. Then I go to
 Display-Control and look under Volume Overlay/Underlay.
 
 In the past I was able to select metric under primary overlay and could then
 choose the .nii file containing the functional data, but this
 menu point does not appear anymore.
 
 
 Any help or suggestions on how to enable the metric option in the Overlay
 would be greatly appreciated.
 
 Best,
 Fred
 --
 ---
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Problem "out of memory", please help me, thank you!

2015-01-12 Thread Donna Dierker
Scroll down to the Windows XP section:

http://brainvis.wustl.edu/CaretHelpAccount/caret5_help/installation/caret5_installation.html

But that out of memory error still makes me wonder about the character set.
Unless you have a user/PC with only an English character set on which you could
try installing Caret, I can't think of another way to test this.


On Jan 10, 2015, at 5:21 AM, 陈晨 chenchen_...@163.com wrote:

 
 Thank you very much!
 But even though I set the OS language to English, I still get an "out of 
 memory, Caret terminating" error message when loading 
 PALS_B12.RIGHT.DEMO.73730.spec
 I have downloaded and  extracted caret_distribution_Windows32.v5.65.zip
 Could you please tell me how to install Caret ?
 
 Chen
 
 
 At 2015-01-10 06:21:53,Timothy Coalson tsc...@mst.edu wrote:
 I think caret5 has trouble with non-english locales, if you are using a 
 different OS language, could you try setting the OS language to english and 
 trying it again?
 
 Tim
 
 
 On Fri, Jan 9, 2015 at 1:18 AM, 陈晨 chenchen_...@163.com wrote:
 ChenChen
 chenchen_...@163.com
 caret5 
 v5.65 AND Jan 27 2012 
 Windows 7 Ultimate 
 I get an "out of memory, Caret terminating" error message when loading 
 PALS_B12.RIGHT.DEMO.73730.spec.
 PALS_B12.RIGHT.DEMO.73730.zip
 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Creating a surface-based atlas from a volumetric-atlas

2015-01-08 Thread Donna Dierker
On Jan 8, 2015, at 11:12 AM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
wrote:

 Hi Donna,
 
 Thank you very much for your response.
 
 You're right about the smoothing: the deformed individual surface in the 
 target directory is indeed the same as the
 original one (after the flattening). Clear.
 
 Would the deformed individual surface in the target directory be considered 
 native space? The only difference with
 the unregistered surface is that it is resampled onto standard mesh (but the 
 shape is the same).

The deformed surface is in the same *stereotaxic space* as the source surface.

The surfaces in the atlas directory are on the same mesh as the atlas surface.

 All goodies also
 project nicely to it. I also plotted them over one another and they perfectly 
 overlap. FYI: I am calculating an ECoG
 leadfield on the surface so the exact shape needs to stay the same.

I would think you'd want to work on native mesh, then (individual/source 
directory).

 And am I correct that the deformed template surfaces in the source directory 
 are resampled onto the individual
 mesh? In this directory, however, I only see the deformed sphere. I checked 
 all the boxes Deform coordinate files to 
 atlas, so these, I assume, are the deformed surfaces in the target directory. 
 I see, however, no box atlas to individual.

Whatever files were in the atlas/target spec file when you ran registration 
will get resampled onto the source subject's mesh and saved in the source 
directory.  You can also apply the deform_map in the source dir to your atlas 
goodies, or just re-run registration using a spec file that points to the 
goodies you want resampled onto native mesh.

 I figured out how to export a paint file and input it in Matlab: I save it as 
 a labeled GIFTI file and open it in Matlab
 with gifti.m (from the GIFTI toolbox). I then have all 19 columns at once!

Now you have to ponder which are useful to you.

 Kind regards,
 Rikkert
 
 P.S. I noticed that in the Paxinos atlas, V5 is merged with the medial wall.
 
 
 On Thu, Jan 8, 2015 at 3:51 PM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 On Jan 7, 2015, at 10:16 AM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
 wrote:
 
 
  Hi Donna,
 
  I now have performed the spherical registration. What I would need is the 
  PHT atlas on the individual surface.
 
 If you checked the box requesting atlas to individual, then you should have 
 that in your individual/source directory.
 
  I am however, not sure how to interpret the generated surfaces. For 
  example, in the target directory there appear
  coordinate files with the prefix deformed and refer to the individual 
  monkey. Although the shape of the fiducial
  one is very similar to the shape of the (undeformed) individual surface, 
  there are some slight differences (for
  example, the medial wall is smoothed). What is the exact interpretation of 
  these surfaces?
 
 Yes, the medial wall gets smoothed at the beginning of flattening, if I 
 recall correctly (it's been a long time!), and a version of the sphere is 
 generated from the fiducial with the smoothed medial wall.  This is what goes 
 into registration.
 
 What you are seeing in the atlas directory is the source/individual 
 midthickness (with smoothed medial wall) resampled onto the target mesh.  It 
 will be slightly smoother than the source, and thus have slightly smaller 
 surface area.  But it will look a lot like the source.
 
  I noticed that I can view the different atlases defined on the template on 
  these deformed atlases. Does this
  mean that I have mapped them to the individual surface (which is what I 
  want)?
 
 It sounds like things went reasonably well.  Now you're at a point where you 
 need to decide:
 
 * do I work with atlas goodies on my native individual surface 
 (assuming you checked that box)? or
 * do I work with individual goodies on atlas surfaces
 
 The answer depends on what you're doing with the data.  If it's more 
 important that the surface stays native (e.g., for folding/areal 
 measurements), then the former option is better.  If it's handy to have 
 things on the atlas standard mesh for other reasons, then atlas-land is the 
 way to go.  With just one subject, native-land may be fine.  I did skim the 
 history below, but don't have the time to fully digest it at the moment.  But 
 maybe this will get you further along in your own thinking about next steps.
 
  One more question about exporting the results: When I export the deformed 
  surface with the atlas as a
  .label.gii file and read it into Matlab, I get a nice list of ROI names and 
  corresponding RGB and alpha
  values. However, the .cdata fields has 20 columns which I don't understand. 
  I hoped to get, for each
  vertex, an index into the ROI list. I've also tried to export it as a 
  surf.gii file, the then the .cdata field has
  4 columns whose meaning I don't understand as well. Could you please 
  clearify this for me and tell

Re: [caret-users] metric user scale with fixed step size

2014-12-11 Thread Donna Dierker
You have six bins, like below, but compressed to 0:1?  Hmmm.  Scratching my 
head on that one.


On Dec 11, 2014, at 9:30 AM, Caspar M. Schwiedrzik 
cschwie...@mail.rockefeller.edu wrote:

 Hi Donna, 
 I followed your advice and compressed the scale into 0:1. 
 One more question: While the colors are displayed correctly now, is there a 
 way to change the number of steps that the colorbar shows? Somehow, it always 
 shows 5 discrete levels, but I would like to get 6. 
 Thanks, Caspar
 
 
 2014-11-25 18:57 GMT-05:00 Caspar M. Schwiedrzik cschwie...@rockefeller.edu:
 Hi Donna, 
 thanks for pointing out the palette file format. I have been playing around 
 with this but am running into two strange issues. First, I am getting a 
 fairly weird display that differs markedly from the one I originally created 
 in Freesurfer. For example, the maximal number appears much less frequently 
 in Caret than in FS. I am not sure where that problem could arise (I am 
 converting a w file into Caret format which for all other maps has worked 
 just fine). Secondly, and this may or may not be related, the colorbar only 
 displays one color, but all tickmarks. 
 Any ideas? 
 This is an excerpt from my palette file:
 
   _br1 = #7f
   _br2 = #ff3f00
   _br3 = #efff0f
   _br4 = #1fffdf
   _br5 = #004fff
   _br6 = #8f
   
 ***PALETTES bluered [6+]
   6.00 -> _br1
   5.00 -> _br2
   4.00 -> _br3
   3.00 -> _br4
   2.00 -> _br5
   1.00 -> _br6
 
 Thanks, Caspar
 
 
 2014-11-24 13:29 GMT-05:00 Caspar M. Schwiedrzik cschwie...@rockefeller.edu:
 Hi! 
 I am trying to display some metric data that has a fixed, meaningful step 
 size. Specifically, my data ranges from 1 to 6 and I would like to have each 
 step (1,2,3,4,5,6) assigned a specific color, and no intermediate steps 
 (e.g., 1.5). Is that possible? 
 Thanks, Caspar
 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Open Spec File Problem

2014-12-03 Thread Donna Dierker
It was on Ubuntu Linux.

I've heard of problems with Caret's display showing an incomplete/inaccurate 
surface.  Those ended up being corrected by an updated graphics driver.  But it 
sounds like you're not even getting that far.  It's just hanging on the load 
step, even with a very simple surface.


On Dec 3, 2014, at 10:26 AM, Rongxiang Tang rongxiangt...@yahoo.com wrote:

 Hi Donna,
 
 Thanks for the file. I tried and it stopped around 25%. I closed all other 
 applications and it was still not working. Did you run under linux system? I 
 guess I may try other system.
 
 Catherine
 
 
 On Wednesday, December 3, 2014 9:50 AM, Donna Dierker 
 do...@brainvis.wustl.edu wrote:
 
 
 I was able to open the spec file you uploaded with no problem.  There are no 
 volume files and not that many surface files, so I doubt memory is an issue.
 
 I don't know why it is hanging on win7, 64bit caret 5.6*.
 
 Attached is an even tinier spec using a smaller subset of the same files 
 (single coord & topo).  Try putting it in the same directory as those files and 
 see if the smaller subset loads.
 
 
 
 On Dec 3, 2014, at 9:22 AM, Rongxiang Tang rongxiangt...@yahoo.com wrote:
 
  Hi Donna,
  
  I tried both version 5.65 and version 5.616, and neither of them worked. I 
  am using a win7, 64bit. The spec file is 
  Human.PALS_B12.B1-12.DEPTH_ANALYSES_LEFT.73730.spec, which is included with 
  the software. I have uploaded the file as Human.PALS.LEFT.zip.
  
  Thanks,
  Catherine
  
  
  On Wednesday, December 3, 2014 8:58 AM, Donna Dierker 
  do...@brainvis.wustl.edu wrote:
  
  
  Hi Catherine,
  
  What caret version did you try?
  
  If the spec file is available on a website, can you point me to it?
  
  If not, could you zip up its contents and upload the zip file here:
  
  http://pulvinar.wustl.edu/cgi-bin/upload.cgi
  
  I'll see what happens when I try to load it.
  
  Donna
  
  
  On Dec 2, 2014, at 6:18 PM, Rongxiang Tang rongxiangt...@yahoo.com wrote:
  
   Dear All,
   
   I tried to open a spec file from the standard mesh atlases, but when I 
   selected the options in the GUI and clicked load, it immediately stopped 
    around 11%, and would not go on. My computer is set to English, so I'm 
    not sure what the problem is here. 
   
   Thanks,
   Catherine
  
   ___
   caret-users mailing list
   caret-users@brainvis.wustl.edu
   http://brainvis.wustl.edu/mailman/listinfo/caret-users
  
  
  
 
 


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] metric user scale with fixed step size

2014-11-26 Thread Donna Dierker
Hi Caspar,

If you converted from Freesurfer, rather than mapping independently in Caret, 
then this is puzzling.

I believe mris_convert can now convert directly to GIFTI (.gii) format, and I 
suspect this route might be more reliable than converting .w to .metric.  Have 
you tried this?  Probably easy and worth a shot.

As for the palette, your range needs to be compressed into the -1.0 to 1.0 
range, or in your case 0 to 1.
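
For example, one way to compress those six entries into the 0-to-1 range (a sketch only -- it assumes evenly spaced bins and reuses the _br color names from your excerpt; you may need to nudge the breakpoints to get the band boundaries you want):

  ***PALETTES bluered_rescaled [6+]
   1.00 -> _br1
   0.80 -> _br2
   0.60 -> _br3
   0.40 -> _br4
   0.20 -> _br5
   0.00 -> _br6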


On Nov 25, 2014, at 5:57 PM, Caspar M. Schwiedrzik 
cschwie...@mail.rockefeller.edu wrote:

 Hi Donna, 
 thanks for pointing out the palette file format. I have been playing around 
 with this but am running into two strange issues. First, the I am getting a 
 fairly weird display that differs markedly from the one I originally created 
 in Freesurfer. For example, the maximal number appears much less frequently 
 in Caret than in FS. I am not sure where that problem could arise (I am 
 converting a w file into Caret format which for all other maps has worked 
 just fine). Secondly, and this may or may not be related, the colorbar only 
 displays one color, but all tickmarks. 
 Any ideas? 
 This is an excerpt from my palette file:
 
   _br1 = #7f
   _br2 = #ff3f00
   _br3 = #efff0f
   _br4 = #1fffdf
   _br5 = #004fff
   _br6 = #8f
   
 ***PALETTES bluered [6+]
   6.00 -> _br1
   5.00 -> _br2
   4.00 -> _br3
   3.00 -> _br4
   2.00 -> _br5
   1.00 -> _br6
 
 Thanks, Caspar
 
 
 2014-11-24 13:29 GMT-05:00 Caspar M. Schwiedrzik cschwie...@rockefeller.edu:
 Hi! 
 I am trying to display some metric data that has a fixed, meaningful step 
 size. Specifically, my data ranges from 1 to 6 and I would like to have each 
 step (1,2,3,4,5,6) assigned a specific color, and no intermediate steps 
 (e.g., 1.5). Is that possible? 
 Thanks, Caspar
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] metric user scale with fixed step size

2014-11-24 Thread Donna Dierker
I think you should be able to write a palette like that:

http://brainvis.wustl.edu/CaretHelpAccount/caret5_help/file_formats/file_formats.html#paletteFile

The palette file format was originally intended to be compatible with AFNI, 
which may have palette editing capabilities of which I am unaware.  But if AFNI 
has updated their file format, it may no longer be readable by caret.  Most 
people edit the palette file via text editor or have a script generate it.  
Make sure interpolation is toggled off on the Metric Settings menu.

Another possibility is converting your metric data to paint (scalar vs 
integer values).  Those formats are slightly different, but you could probably 
manage the conversion via text editor or script.  If you go the paint route, 
then you'll also need an areacolor file.  Attributes: Area Color can help you 
out there.


On Nov 24, 2014, at 12:29 PM, Caspar M. Schwiedrzik 
cschwie...@mail.rockefeller.edu wrote:

 Hi! 
 I am trying to display some metric data that has a fixed, meaningful step 
 size. Specifically, my data ranges from 1 to 6 and I would like to have each 
 step (1,2,3,4,5,6) assigned a specific color, and no intermediate steps 
 (e.g., 1.5). Is that possible? 
 Thanks, Caspar
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] blacking out subcortical regions

2014-11-10 Thread Donna Dierker
There is a paint file that comes with many of the PALS-B12 datasets (e.g., Sept 
2006 tutorial) that has a medial wall column.  Use that as the primary overlay 
with your metric or other overlay as the secondary overlay.  This grays out the 
medial wall.


On Nov 9, 2014, at 10:43 AM, Frédéric Roux f.r...@bcbl.eu wrote:

 Hi there,
 
 I've finally managed to get to visualize my MEG source-reconstruction data 
 using caret !!!
 So far I am very satisfied with the way everything looks.
 
 The only thing that's missing is a way to black out the sub-cortical areas? 
 I've seen
 people do this regularly in their publications and I would like to do the 
 same as from
 what I understand the PALS-B12 surface is only meant for cortical 
 representations?
 
 I could basically try to null all the values which lie below a certain 
 coordinate of
 the Z-axis in my data, but I was wondering if there is a simple and easier 
 way to do
 it in in Caret.
 
 Any help or suggestions would be highly appreciated.
 
 Thanks.
 
 Fred
 ---
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Creating a surface-based atlas from a volumetric-atlas

2014-11-05 Thread Donna Dierker
On Nov 5, 2014, at 7:05 AM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
wrote:

 
 Hi Donna,
 
 I've constructed a .borderproj file (and a .bordercolor file) containing the 
 23 LANDMARKS that correspond to the 23 LANDMARKS
 in the file 
 Macaque.F99UA1.LR.AVERAGE.LANDMARKS_for_FULL-HEMISPHERE_RegWithPHT00.73730.borderproj
  with
 corresponding .bordercolor filee Macaque.LANDMARKS_for_FULL-HEM.bordercolor 
 and I think there is a good correspondence
 between the individual and template LANDMARKS.
 
 To perform the spherical registration however, I need the template sphere of 
 the left hemisphere, while I can only find the one of the
 right hemisphere (Macaque.ATLAS.sphere.6.73730.coord). Do you know where I 
 can find it? (Maybe I can obtain it by mirroring, but
 I have no idea how this could be done).

Use the right for both.  Caret will x-flip the left hems when it does its 
thing, provided the source spec file correctly identifies the hemisphere.

In this way, the resulting left and right hem resampled surfaces are in 
register with one another.

Just make sure there is a hemisphere tag that correctly identifies your left 
hem, and Caret will do the rest.

 And I have a couple of general questions:
 
 1. In which file-types are the sulcal-depths stored? (this applies both to 
 individual as well as template data)?  I quess this should be a
 .surface_shape file, but I cannot find it. I ask this because I want to make 
 a new directory which only contains those files that I need.

Should be surface_shape, but depending on what spec file you are using, you 
might not have one.  Or if it is more recent, it might be shape.gii.  The Sept 
2006 tutorial has a surface_shape for the macaque F99 in the MACAQUE subdir, 
for example.

If your individual was segmented in Caret, then it should have a sulcal depth 
map from the prepare for flattening step.  This should be in a surface shape 
file.  If you used something else, then are you sure you need the sulcal depth 
map?  If you just want something that looks like it for visualization, the 
Freesurfer .sulc is a good substitute.

 2. In connection with this: What is the best way to organize the data files? 
 I now make different folders (one  for flattening, one for
 landmarks, and one for the registration, each containing a different set of 
 data files and a single .spec file). Should I put everything
 in one folder and make different .spec files for the different procedures?

One hem, different specs.

 3. Why are there .topology files? (since the topology is determined by the 
 triangulation, which is stored in .coord files)

Coords contain only x,y,z coordinates.  Neighbor relationships are in the .topo 
files.

 4. After registering, how can I project any goodies defined on the template 
 to the individual surface?

Make sure the goodies are listed in the target atlas spec file, and make sure 
you check the box that indicates atlas to individual.  Then, whatever you 
specify in the target atlas gets resampled into your source directory onto 
source mesh.

 For example, I would like to
 view the PHT parcellation on the individual surface (or at least have a label 
 for each vertex so that I can view it in matlab).
 
 I would appreciate it if you find the time to provide some comments.
 
 Kind regards,
 Rikkert
 
  
 
 
 
 
 
 On Tue, Nov 4, 2014 at 11:09 PM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 I doubt you'll be able to convert the deformation map to Caret, but it won't 
 be necessary, if Spherical Demons can resample what you need onto the source 
 or target mesh, after it computes the registration.
 
 If it's not too hard, give it a shot and let us know how it goes.
 
 
 On Nov 4, 2014, at 12:59 PM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
 wrote:
 
  Hi Donna,
 
  Thanks for the link.
 
  Yes, I can imagine that it requires quite some effort (and experience)
  to register a macaque surface using so many landmarks. I would
  like to parcellate my macaque cortex using a (high-resolution)
  atlas.
 
  So what about if I do the following: I convert the freesurfer files for both
  my individual caret-surface and for the F99 standard mesh, register
  them in freesurfer (or Spherical Demons) and then convert the deformation
  map to Caret-format (if that's possible).
 
  Would this be a good way to go?
 
  Kind regards,
  Rikkert
 
 
 
  On Tue, Nov 4, 2014 at 5:54 PM, Donna Dierker do...@brainvis.wustl.edu 
  wrote:
  Try this one:
 
  http://brainmap.wustl.edu/pub/donna/ATLASES/HUMAN/PALS_B12/Human_PALS_B12.LR.MEN_WOMEN.AVG-LANDMARKS_Core6.SPHERE.borderproj
  login pub
  password download
 
  As I recall, though it's been a long time, the GUI Caret took either border 
  or borderproj, but the command line caret_command wanted a borderproj.
 
  You indicated you know this is a human target, and you're just getting the 
  feel for it.  With monkeys, we use more than the core 6 landmarks.  Some of 
  the older tutorials have figures showing more

Re: [caret-users] Creating a surface-based atlas from a volumetric-atlas

2014-11-04 Thread Donna Dierker
Try this one:

http://brainmap.wustl.edu/pub/donna/ATLASES/HUMAN/PALS_B12/Human_PALS_B12.LR.MEN_WOMEN.AVG-LANDMARKS_Core6.SPHERE.borderproj
login pub
password download

As I recall, though it's been a long time, the GUI Caret took either border or 
borderproj, but the command line caret_command wanted a borderproj.

You indicated you know this is a human target, and you're just getting the feel 
for it.  With monkeys, we use more than the core 6 landmarks.  Some of the 
older tutorials have figures showing more of the sulci.

There are many other tools for surface-based registration these days (e.g., 
ones that use sulc patterns to match without the need to draw landmarks).  The 
connectome project uses MSM:

http://www.ncbi.nlm.nih.gov/pubmed/24939340

You can still use Caret, but just making sure you know there are alternatives.


On Nov 4, 2014, at 5:23 AM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
wrote:

 Hi Donna,
 
 To get a feeling for the registration process in Caret, I start with 
 performing a spherical registration of a human surface
 to the PALS-atlas. I have extracted the surface and generated a border 
 projection file containing the required cuts and
 landmarks. However, when I want to perform the registration, I get a message 
 saying that Caret cannot find the target
 border projection file. I used this file:
 
 http://sumsdb.wustl.edu/sums/archivelist.do?archive_id=6057499
 
 and indeed, it seems that there is no such file (nor coordinate files for the 
 fiducial and inflated surfaces). Are some
 files missing or do I do something wrong?
 
 Thanks and kind regards,
 Rikkert 
 
 
 
 
 On Fri, Oct 31, 2014 at 4:27 PM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 On Oct 30, 2014, at 11:29 AM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
 wrote:
 
  Dear Donna,
 
  Thanks for your fast response, I appreciate it!
 
  My situation is as follows:
 
  On the one hand, I have a group-averaged T1-weighted image, together with a 
  volumetric atlas (that is, an integer labeling of the
  voxels) as well as a structural connectivity matrix (obtained via 
  fiber-tracking on the group-averaged diffusion-weighted image). On
  the other hand, I have a T1-weighted image of an individual monkey. My aim 
  is to obtain a surface atlas (derived from volumetric atlas)
  for the individual monkey.
 
 This is an interesting scenario, and I've not encountered it before.
 
  Could I first do a volumetric registration of the individual image to the 
  group-averaged image and subsequently project the induced
  labeling of the voxels of the individual image to the individual surface?
 
 This seems reasonable and not too hard.  The lower variability in macaque 
 folding may make it less problematic than for humans.
 
  Or do I have to extract the surface of the group-averaged
  image, project the volumetric atlas to it, and subsequently perform a 
  spherical registration of the individual surface to the group-
  averaged surface?
 
 People do extract surfaces from group averaged anatomical volumes for some 
 purposes, but I doubt it will be worth it in this case.  I hope others will 
 voice their opinions if they feel otherwise.
 
  The first approach seems more straightforward, but I don't know if it is 
  correct. Also, a complication with the second approach is that
  the extracted surface from the group-averaged image looks worse than that 
  extracted from the individual image (it is entirely ok, except
  for that the primary visual cortex has a large part missing at the medial 
  side).
 
 This is to be expected.  A more reasonable thing to do if you want an average 
 surface is generate surfaces for the individuals and compute an average from 
 them.  You probably don't have those surfaces, so honestly I'd try the first 
 option and vet the resulting mapping using the T1+contour+volumetric-overlay 
 view.
 
 Still another option would be to use surface based registration to get your 
 individual monkey in register with the F6 atlas (part 3, 
 http://brainvis.wustl.edu/wiki_linked_files/documentation/Caret_Tutorial_Sep22.pdf)
  or Donald McLaren's population average macaque atlas 
 (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2659879).  You could do 
 something like this:
 
 * volumetrically warp your atlas goodies to match the mean anatomical McLaren 
 image.
 * surface-based register your individual macaque atlas to the McLaren 
 standard mesh surface.
 * map your warped atlas goodies to the McLaren population surface.
 * view your mapped results on your individual's standard mesh surface.
 
 But that second step isn't trivial, and your easier route might suffice.  So 
 I'd give that a go first.
 
  And Donna, could you please tell me how to create a paint file from a 
  nifti file? (the atlas I have is saved as a nifti-file)
 
 In caret5, Attributes: Map Volume to Surface and choose paint.  But getting 
 the color lookup is a bit messy.  The newer CIFTI format contains a label 
 lookup table

Re: [caret-users] Creating a surface-based atlas from a volumetric-atlas

2014-11-04 Thread Donna Dierker
I doubt you'll be able to convert the deformation map to Caret, but it won't be 
necessary, if Spherical Demons can resample what you need onto the source or 
target mesh, after it computes the registration.

If it's not too hard, give it a shot and let us know how it goes.


On Nov 4, 2014, at 12:59 PM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
wrote:

 Hi Donna, 
 
 Thanks for the link.
 
  Yes, I can imagine that it requires quite some effort (and experience) 
 to register a macaque surface using so many landmarks. I would 
 like to parcellate my macaque cortex using a (high-resolution) 
 atlas.
 
 So what about if I do the following: I convert the freesurfer files for both
 my individual caret-surface and for the F99 standard mesh, register 
 them in freesurfer (or Spherical Demons) and then convert the deformation
 map to Caret-format (if that's possible).
 
 Would this be a good way to go? 
 
 Kind regards,
 Rikkert
 
 
 
 On Tue, Nov 4, 2014 at 5:54 PM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 Try this one:
 
 http://brainmap.wustl.edu/pub/donna/ATLASES/HUMAN/PALS_B12/Human_PALS_B12.LR.MEN_WOMEN.AVG-LANDMARKS_Core6.SPHERE.borderproj
 login pub
 password download
 
 As I recall, though it's been a long time, the GUI Caret took either border 
 or borderproj, but the command line caret_command wanted a borderproj.
 
 You indicated you know this is a human target, and you're just getting the 
 feel for it.  With monkeys, we use more than the core 6 landmarks.  Some of 
 the older tutorials have figures showing more of the sulci.
 
 There are many other tools for surface-based registration these days (e.g., 
 ones that use sulc patterns to match without the need to draw landmarks).  
 The connectome project uses MSM:
 
 http://www.ncbi.nlm.nih.gov/pubmed/24939340
 
 You can still use Caret, but just making sure you know there are alternatives.
 
 
 On Nov 4, 2014, at 5:23 AM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
 wrote:
 
  Hi Donna,
 
  To get a feeling for the registration process in Caret, I start with 
  performing a spherical registration of a human surface
  to the PALS-atlas. I have extracted the surface and generated a border 
  projection file containing the required cuts and
  landmarks. However, when I want to perform the registration, I get a 
   message saying that Caret cannot find the target
  border projection file. I used this file:
 
  http://sumsdb.wustl.edu/sums/archivelist.do?archive_id=6057499
 
  and indeed, it seems that there is no such file (nor coordinate files for 
  the fiducial and inflated surfaces). Are some
  files missing or do I do something wrong?
 
  Thanks and kind regards,
  Rikkert
 
 
 
 
  On Fri, Oct 31, 2014 at 4:27 PM, Donna Dierker do...@brainvis.wustl.edu 
  wrote:
  On Oct 30, 2014, at 11:29 AM, HINDRIKS, RIKKERT 
  rikkert.hindr...@upf.edu wrote:
 
   Dear Donna,
  
   Thanks for your fast response, I appreciate it!
  
   My situation is as follows:
  
   On the one hand, I have a group-averaged T1-weighted image, together with 
   a volumetric atlas (that is, an integer labeling of the
   voxels) as well as a structural connectivity matrix (obtained via 
   fiber-tracking on the group-averaged diffusion-weighted image). On
   the other hand, I have a T1-weighted image of an individual monkey. My 
   aim is to obtain a surface atlas (derived from volumetric atlas)
   for the individual monkey.
 
  This is an interesting scenario, and I've not encountered it before.
 
   Could I first do a volumetric registration of the individual image to the 
   group-averaged image and subsequently project the induced
   labeling of the voxels of the individual image to the individual surface?
 
  This seems reasonable and not too hard.  The lower variability in macaque 
  folding may make it less problematic than for humans.
 
   Or do I have to extract the surface of the group-averaged
   image, project the volumetric atlas to it, and subsequently perform a 
   spherical registration of the individual surface to the group-
   averaged surface?
 
  People do extract surfaces from group averaged anatomical volumes for some 
  purposes, but I doubt it will be worth it in this case.  I hope others will 
  voice their opinions if they feel otherwise.
 
   The first approach seems more straightforward, but I don't know if it is 
   correct. Also, a complication with the second approach is that
   the extracted surface from the group-averaged image looks worse than that 
   extracted from the individual image (it is entirely ok, except
   for that the primary visual cortex has a large part missing at the medial 
   side).
 
  This is to be expected.  A more reasonable thing to do if you want an 
  average surface is generate surfaces for the individuals and compute an 
  average from them.  You probably don't have those surfaces, so honestly I'd 
  try the first option and vet the resulting mapping using the 
  T1+contour+volumetric-overlay view.
 
  Still

Re: [caret-users] Creating a surface-based atlas from a volumetric-atlas

2014-10-30 Thread Donna Dierker
On Oct 29, 2014, at 10:56 AM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
wrote:

 
 Dear all,
 
 I have an averaged T1-image and co-registered volumetric atlas of the macaque 
 brain (which has been digitized by a collaborator) and want to derive from it 
 a surface-based
 atlas. Subsequently, I would like to use this atlas to get a parcellation of 
 the cortical surface of an individual macaque brain). How should I approach 
 this problem?
 
  I have extracted the cortical surface from the averaged T1-weighted scan.  
 Should I now
 just label each cortical vertex by determining to which ROI it belongs? And 
 what if some vertices fall outside all ROI's? Also, the result does not look 
 so smooth as existing atlases.

It sounds like you need to map the volume(s) onto the surface.  It also sounds 
like these are discrete parcellations (ROI/label/paint) as opposed to 
probabilistic atlases, since it sounds like it is an individual monkey's data, 
rather than group data.  It would be helpful to clarify this.

Assuming it is ROI/label (i.e., each intensity value -- e.g., 1, 2, 3, … -- 
corresponds to a region -- e.g., cingulate, arcuate, …), then I would map it as 
a paint volume.  I believe doing so constrains the mapping algorithms, but I am 
not certain.

If you load your anatomical T1 with your surfaces and toggle on the surface 
contours (Volume Surface Outline, on the D/C page selection), then you can 
overlay the volumetric atlas over these two anatomical underlays (T1+surface 
contours) to look for regions where the surface does not intersect the atlas.  
I see three choices:

* fix the volumetric atlas data
* fix the surfaces, so the intersection is improved
* accept the fact that there are real holes in your data

You will be better equipped to make that choice when you are looking at 
T1+surface contours+volumetric-atlas.

 And to parcellate an individual macaque brain, can I register both the 
 surfaces (that is, the template surface and the individual surface) 
 spherically?

Registering an individual monkey brain to a monkey atlas (e.g., F99) isn't 
really parcellating it, but there are parcellations already on the F99 atlas, 
so if you use spherical registration to register your monkey to F99, then you 
could look at the F99 parcellations overlaid on your monkey's surface.  But 
it's not a quick or easy process.  You need to draw registration borders.  
(Though there are other registration algorithms out there that use sulcal maps 
and/or other data to automatically derive the deformation.  I encourage others 
to chime in if there are ones they have used and found not too hard.)  How would you 
be using the registered surface?

(Sorry for the delayed reply, but it wasn't a quick one. ;-)

 Thanks a lot,
 Rikkert
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] threshold of metric file

2014-10-30 Thread Donna Dierker
I'm sorry I don't know the answer to the Freesurfer question, but someone else 
might know.  And I confess I don't understand the rationale for thresholding at 
that value, possibly because I am unfamiliar with the contents of those files.

So I'm keeping the responsibility for ensuring a reasonable threshold on you. 
;-)  But if it's each subject, then it's probably worth scripting it using 
these caret_command tools:

  caret_command -surface-region-of-interest-selection  
 [-metric  metric-file-name  column  min  max SEL-TYPE]

  caret_command -surface-roi-statistical-report  

And you can use each subject's surface/topo for that surface area calculation.  
Your min is your threshold and max something like 999.  Your report will 
include the area of the suprathreshold regions, and you can grep that line from 
the resulting report file.
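
A minimal per-subject sketch of that loop (file names are placeholders, and the positional coordinate/topology/ROI arguments and the NORMAL selection type are assumptions -- check caret_command's help for the exact usage of each operation):

  for subj in subj01 subj02 subj03
  do
    # select vertices where column 1 of the sig metric exceeds 1.3
    caret_command -surface-region-of-interest-selection \
       $subj.fiducial.coord $subj.closed.topo \
       empty.roi $subj.supra.roi \
       -metric $subj.sig.metric 1 1.3 999 NORMAL
    # report ROI statistics, including surface area, then pull out the area line
    caret_command -surface-roi-statistical-report \
       $subj.fiducial.coord $subj.closed.topo \
       $subj.supra.roi $subj.area_report.txt
    grep -i area $subj.area_report.txt
  done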


On Oct 29, 2014, at 11:30 PM, wangzhiwei3233 wangzhiwei3...@126.com wrote:

 Hi, Donna,
 
 My purpose is counting activated area size on each subject.
 
  I did do a significance test using Freesurfer on the individual level. The results 
  contained many files, for example, sig.nii.gz and Fsig.nii.gz 
  corresponding to the results of the t and F tests respectively. Is that right?
  But I do not know how to determine the suprathreshold on the subject level as you 
 mentioned. 
 
 For display and counting area size, I converted the results file(sig.nii.gz) 
 to Caret. Then I count area size on Caret. When I counted area size, I 
 selected a uniform threshold 1.3, i.e.-log10(0.05) for each subject. So I set 
 the scale to 1.3 ~ maxminum on Caret. Is this right?
 
 So I could draw a border around the  suprathreshold region and generated a 
 paint file. I  got the area size of the region using the paint file.
 
 Is there any step wrong?
 
 Thanks!
 Zhiwei
 
 
 
 At 2014-10-30 00:27:14, Donna Dierker do...@brainvis.wustl.edu wrote:
 Hi wangzhiwei,
 
 I'm a little confused by the question.  The mention of area size and 
 scales hints that there might be a confusion between tools used for 
 quantification / significance testing and those used for display purposes.
 
 Freesurfer has its own tools for significance testing, so you could use 
 those.  We often use threshold-free cluster enhancement for that purpose, 
 which finds the significance threshold.  Suprathreshold area can be computed 
 once this threshold has been determined.
 
 But usually when I make a figure, I generate borders around the 
 suprathreshold regions and display these borders over the real t- or f-map, 
 using a scale that corresponds to my alpha (e.g., .05) divided by two (since 
 I usually do both right and left hem tests).  I compute this t or f-stat 
 using my n / degrees of freedom.
 
 So the significance testing and display steps are separate, the way I do it.
 
 Now you might not be going as far as significance testing.  Sometimes you 
 just want to look at some preliminary data -- particularly for a single 
 subject.  A good start might be to understand if this is a single subject, 
 group results, what kind of statistic.
 
 And certainly not everyone does this the way I do, so it would be helpful 
 for others to weigh in with their viewpoints/conventions.
 
 Donna
 
 
 On Oct 28, 2014, at 9:53 PM, wangzhiwei3233 wangzhiwei3...@126.com wrote:
 
  Hi, experts,
  I converted fMRI results derived from freesurfer to caret, and now I want 
  to count activation areas on caret. So there is a problem of threshold and 
  scale.
  
  Auto scale range is 0~60. I found that the area size was different when 
  using scale 1.3~4 from when using scale 1.3~60. And the latter one was 
  smaller. However, in the latter case (1.3~60), the value of a point that 
  was next to the border of the activation area but in a non-activation area was a 
  little bit larger than the threshold 1.3. 
  
  How should I set the scale to guarantee the activation is accurate?
  
  Best!
  
  
  ___
  caret-users mailing list
  caret-users@brainvis.wustl.edu
  http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] threshold of metric file

2014-10-29 Thread Donna Dierker
Hi wangzhiwei,

I'm a little confused by the question.  The mention of area size and scales 
hints that there might be a confusion between tools used for quantification / 
significance testing and those used for display purposes.

Freesurfer has its own tools for significance testing, so you could use those.  
We often use threshold-free cluster enhancement for that purpose, which finds 
the significance threshold.  Suprathreshold area can be computed once this 
threshold has been determined.

But usually when I make a figure, I generate borders around the suprathreshold 
regions and display these borders over the real t- or f-map, using a scale 
that corresponds to my alpha (e.g., .05) divided by two (since I usually do 
both right and left hem tests).  I compute this t or f-stat using my n / 
degrees of freedom.

So the significance testing and display steps are separate, the way I do it.

Now you might not be going as far as significance testing.  Sometimes you just 
want to look at some preliminary data -- particularly for a single subject.  A 
good start might be to understand if this is a single subject, group results, 
what kind of statistic.

And certainly not everyone does this the way I do, so it would be helpful for 
others to weigh in with their viewpoints/conventions.

Donna


On Oct 28, 2014, at 9:53 PM, wangzhiwei3233 wangzhiwei3...@126.com wrote:

 Hi, experts,
 I converted fMRI results derived from freesurfer to caret, and now I want to 
 count activation areas on caret. So there is a problem of threshold and scale.
 
 Auto scale range is 0~60. I found that the area size was different when using 
 scale 1.3~4 from when using scale 1.3~60. And the latter one was smaller. 
 However, in the latter case (1.3~60), the value of a point that was next to 
 the border of the activation area but in a non-activation area was a little bit 
 larger than the threshold 1.3. 
 
 How should I set the scale to guarantee the activation is accurate?
 
 Best!
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Hi res recon

2014-10-14 Thread Donna Dierker
Hi Aditya,

On monkeys, yes.  Humans, no.  The SureFit algorithm that is in Caret's 
segmentation feature was designed for cubic 1mm human data.  It worked 
reasonably well on higher res monkey data, but some of the subroutines will 
likely break on higher res human data (e.g., disconnecting eye, skull, 
hindbrain).

I'd turn all error correction features off and sanity check the initial 
segmentation.  If the skull, eye, or hindbrain is still connected, then 
resolving that issue should precede the error correction steps.  Unfortunately, 
that will likely take some work.

Donna


On Oct 14, 2014, at 5:25 AM, Dr. Aditya Tri Hernowo, Ph.D 
adityatrihern...@gmail.com wrote:

 Dear users  experts,
 
 Does anyone have any experience with reconstructing the cortex on 0.5mm 
 resolution T1 images? I am still having problems with the very long time it 
 takes to perform automatic error correction (more than 3 hours before the 
 software finally crashed).
 
 Regards,
 
 Aditya Hernowo
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] average fs_LR32k mesh

2014-10-01 Thread Donna Dierker
If you have individual func.gii files, then it makes sense to view them on your 
individual's midthickness surface.  If you have group data, you might use an 
atlas dataset as your viewing substrate.  It doesn't look like there are any 
32k versions of Conte69 available for caret5 format:

http://sumsdb.wustl.edu/sums/directory.do?id=8293668&dir_name=Conte69_caret5

But there is a 32k version for workbench format:

http://sumsdb.wustl.edu/sums/directory.do?id=8293673&dir_name=32k_mesh_workbench

You can view the surf.gii files in the workbench directory using caret5, but 
not all the features in caret_command will work with them.

You might also find 32k group average midthickness, inflated, etc. in various 
Human Connectome Project (HCP) datasets, e.g., the datasets listed under 
Connectome Workbench Data here:

https://db.humanconnectome.org/data/projects/HCP_500

But these are also in workbench -- not caret5 formats.


On Oct 1, 2014, at 2:11 PM, Timothy Coalson tsc...@mst.edu wrote:

 The script quoted in the post you linked specifically resamples the surface, 
 so if that is what you did, the output from it is the midthickness in 
 fs_LR32k.  Other surface types can be resampled the same way.  Caret5 will 
 load .surf.gii files just fine, you don't need to convert back to topo/coord 
 (but you can, with caret_command -file-convert -sc ...).
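
  For reference, that conversion looks roughly like the following (the -is GS / -os CARET direction is inferred from the reverse call used elsewhere in this thread, and the trailing FIDUCIAL CLOSED type keywords are a guess -- run caret_command -file-convert without arguments to confirm the usage):

    # Split a GIFTI surface (coordinates + topology in one file) into Caret coord and topo files
    caret_command -file-convert -sc \
       -is GS Conte69.R.midthickness.32k_fs_LR.surf.gii \
       -os CARET Conte69.R.midthickness.32k_fs_LR.coord.gii Conte69.R.32k_fs_LR.topo.gii FIDUCIAL CLOSED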
 
 Tim
 
 
 On Wed, Oct 1, 2014 at 11:22 AM, Alexander Schäfer aschae...@cbs.mpg.de 
 wrote:
 Dear List,
 
 A previous post
 (http://brainvis.wustl.edu/pipermail/caret-users/2014-August/006194.html)
 on this list helped me to successfully downsample my fs_LR164k
 (func.gii) data to fs_LR32k. Now, I have the problem that I only have
 the provided 32k sphere file to overlay the data onto. Is there some
 average fs_LR32k mesh (.coord, .topo for inflated, midthickness, etc)
 that could be shared by someone?
 
 Thank you,
 Alex Schaefer
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] projecting functional MRI to gii surfaces

2014-09-28 Thread Donna Dierker
Hi Yan,

This question should be directed to hcp-users, rather than caret-users.  But be 
warned that many of the HCP experts are tied up in meetings today and tomorrow.

I'm not sure this is the issue with 3), but note that MRIcron probably can't 
read CIFTI files (composites of cortical surface and subcortical/cerebellar 
volumetric data).  These are named like *.dscalar.nii, *.dlabel.nii, and 
several other flavors.  These aren't NIFTI volumes like many software packages 
expect.  For that matter, caret5 won't read them, either.  You need workbench, 
which is supported by hcp-users.

You're already on that list, no?

Donna


On Sep 27, 2014, at 10:25 AM, Tang, Yan yan.t...@ttu.edu wrote:

 hello, everyone
 I am a beginner. I want to download the DTI data in the Human Connectome 
 Project. But I am confused by the processed data and unprocessed data. 
 1) The unprocessed data contains the NIFTI-formatted pairs (L-R, R-L) of 
 diffusion scans for all directions (95, 96, and 97). And the processed data only 
 include diffusion weighting (bvals), direction (bvecs), time series, brain 
 mask, and a file (grad_dev.nii.gz). Why? What did you keep? 
 2) In the processed data, does the file *_SBRef.nii describe the B0?
 3) I downloaded the processed data, but I couldn't use MRIcron to open the 
 data.nii. Why? I downloaded two subjects. Both subjects couldn't be opened.
 
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [do...@brainvis.wustl.edu]
 Sent: Thursday, August 28, 2014 3:47 PM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 Hi Yan,
 
 There are lots of ways to split up the brain -- parcellations.  For 
 example, the Conte69 atlas 
 (http://brainvis.wustl.edu/wiki/index.php/Caret:Conte69_Atlas) comes with two 
 label.gii files per hemisphere, e.g.:
 
 parcellations_VGD11b.R.164k_fs_LR.label.gii
 RSN-networks.R.164k_fs_LR.label.gii
 
 The first includes two parcellations:
   Composite Parcellation-rh (FRB08_OFP03_retinotopic)
   Brodmann rh (from colin.R via pals_R-to-fs_LR)
 
 The second includes four parcellations:
   7 RSN Networks (YKS11 - Yeo et al., JNP, 2011)
   17 RSN Networks (YKS11 - Yeo et al., JNP, 2011)
   RSN consensus communities (holes filled) (PCN11 - Power_Neuron11)
   RSN consensus communities (PCN11 - Power_Neuron11)
 
 And lots more have been published since, and will continue to be published. 
 ;-)
 
 So you have to decide which one you want.  Then you can either load that 
 parcellation in wb_view (or caret5) and click on nodes interactively, or if 
 you want to write a script that queries regions, there is a lookup table that 
 maps the label key to a label name in the front of the label file, e.g.:
 
    <Label Key="0" Red="0.667" Green="0.667" Blue="0.667" Alpha="0"><![CDATA[???]]></Label>
    <Label Key="1" Red="1" Green="1" Blue="1" Alpha="1"><![CDATA[u1_Unassigned]]></Label>
    <Label Key="2" Red="0.502" Green="0.502" Blue="0.502" Alpha="1"><![CDATA[u2_Ventral_frontal_temporal]]></Label>
    <Label Key="3" Red="1" Green="0" Blue="0" Alpha="1"><![CDATA[a3_Default_mode]]></Label>
    <Label Key="4" Red="0" Green="1" Blue="1" Alpha="1"><![CDATA[a4_Hand_somatosensory-motor]]></Label>
    <Label Key="5" Red="0" Green="0" Blue="1" Alpha="1"><![CDATA[a5_Visual]]></Label>
    <Label Key="6" Red="0.961" Green="0.961" Blue="0.059" Alpha="1"><![CDATA[a6_Fronto-parietal_task_control]]></Label>
    <Label Key="7" Red="0" Green="0.502" Blue="0.502" Alpha="1"><![CDATA[a7_Ventral_attention]]></Label>
    <Label Key="8" Red="0" Green="0.275" Blue="0.157" Alpha="1"><![CDATA[a8_Caudate_putamen]]></Label>
    <Label Key="9" Red="1" Green="0.722" Blue="0.831" Alpha="1"><![CDATA[a9_Superior_temporal_gyrus]]></Label>
    <Label Key="10" Red="0.675" Green="0.675" Blue="0.675" Alpha="1"><![CDATA[u10_Inferior_temporal_pole]]></Label>
 
 You can also export the label table as ASCII text.  But each vertex is 
 associated with one of these keys in the label.gii file.
 
 Donna
 
 
 On Aug 28, 2014, at 2:27 PM, Tang, Yan yan.t...@ttu.edu wrote:
 
  I still have a problem. How can I know which brain region every vertex 
  belongs to?
 
 From: Tang, Yan
 Sent: Friday, August 15, 2014 12:27 PM
 To: Caret, SureFit, and SuMS software users
 Subject: RE: [caret-users] projecting functional MRI to gii surfaces
 
 Thank all of you. Thank you very much.
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [do...@brainvis.wustl.edu]
 Sent: Wednesday, August 13, 2014 2:27 PM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 Hi Yan,
 
 I'm not sure about wb_import, but I know it won't downsample for you.  This 
 will, I hope:
 
 http://brainmap.wustl.edu/pub/donna/ATLASES/HUMAN/FSLR/downsample164_to_32k.zip
 login pub
 password download
 
 There is a script

Re: [caret-users] Different color for displaying two T-images

2014-09-23 Thread Donna Dierker
Hi Yu,

I just checked, and you can have only one palette at a time, even though you 
can have multiple metric overlays.  When you tweak metric settings, it affects 
all metric overlays (at least the palette).

There is a way to channel up to three metrics into an RGB map (Attributes: 
Metric: Convert metric to RGB), but I think you'll have to split your maps into 
positive and negative that way, so you're not any better off than having two 
t-maps.

The other solution that comes to mind is thresholding one of the t-maps at +/- 
some threshold and drawing borders around the resulting clusters (something 
caret_command can automate).  The borders can be overlaid on the other t-map.

Finally, you can multiply the two t-maps together to find out where they 
are/aren't on the same page.
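
For that last check, if you also have Connectome Workbench installed, something like wb_command -metric-math will do the multiplication (the file names here are placeholders):

  # Multiply the two t-maps vertex-wise; positive products mark vertices where the
  # two contrasts agree in sign, negative products mark where they disagree.
  wb_command -metric-math '(x * y)' tmap_product.func.gii \
     -var x tmapA.func.gii \
     -var y tmapB.func.gii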

Donna


On Sep 23, 2014, at 3:22 AM, BanYu sho...@live.cn wrote:

 Dear Caret users,
 I'm new to Caret, and I'm wondering if I can use different color palettes to 
 display 2 T-images together. 
 Specifically, for one T-image, I'd like to use 'hot' color for positive 
 activations and blue for negative activations. 
 And for the another T-image, I want to use red color for positive activations 
 and green color for negative activations. 
 Any information would be greatly appreciated!
 Many thanks and best regards,
 Yu
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] projecting functional MRI to gii surfaces

2014-08-28 Thread Donna Dierker
Hi Yan,

There are lots of ways to split up the brain -- parcellations.  For example, 
the Conte69 atlas 
(http://brainvis.wustl.edu/wiki/index.php/Caret:Conte69_Atlas) comes with two 
label.gii files per hemisphere, e.g.:

parcellations_VGD11b.R.164k_fs_LR.label.gii
RSN-networks.R.164k_fs_LR.label.gii

The first includes two parcellations:
Composite Parcellation-rh (FRB08_OFP03_retinotopic)
Brodmann rh (from colin.R via pals_R-to-fs_LR)

The second includes four parcellations:
7 RSN Networks (YKS11 - Yeo et al., JNP, 2011)
17 RSN Networks (YKS11 - Yeo et al., JNP, 2011)
RSN consensus communities (holes filled) (PCN11 - Power_Neuron11)
RSN consensus communities (PCN11 - Power_Neuron11)

And lots more have been published since, and will continue to be published. ;-)

So you have to decide which one you want.  Then you can either load that 
parcellation in wb_view (or caret5) and click on nodes interactively, or if you 
want to write a script that queries regions, there is a lookup table that maps 
the label key to a label name in the front of the label file, e.g.:

 <Label Key="0" Red="0.667" Green="0.667" Blue="0.667" Alpha="0"><![CDATA[???]]></Label>
 <Label Key="1" Red="1" Green="1" Blue="1" Alpha="1"><![CDATA[u1_Unassigned]]></Label>
 <Label Key="2" Red="0.502" Green="0.502" Blue="0.502" Alpha="1"><![CDATA[u2_Ventral_frontal_temporal]]></Label>
 <Label Key="3" Red="1" Green="0" Blue="0" Alpha="1"><![CDATA[a3_Default_mode]]></Label>
 <Label Key="4" Red="0" Green="1" Blue="1" Alpha="1"><![CDATA[a4_Hand_somatosensory-motor]]></Label>
 <Label Key="5" Red="0" Green="0" Blue="1" Alpha="1"><![CDATA[a5_Visual]]></Label>
 <Label Key="6" Red="0.961" Green="0.961" Blue="0.059" Alpha="1"><![CDATA[a6_Fronto-parietal_task_control]]></Label>
 <Label Key="7" Red="0" Green="0.502" Blue="0.502" Alpha="1"><![CDATA[a7_Ventral_attention]]></Label>
 <Label Key="8" Red="0" Green="0.275" Blue="0.157" Alpha="1"><![CDATA[a8_Caudate_putamen]]></Label>
 <Label Key="9" Red="1" Green="0.722" Blue="0.831" Alpha="1"><![CDATA[a9_Superior_temporal_gyrus]]></Label>
 <Label Key="10" Red="0.675" Green="0.675" Blue="0.675" Alpha="1"><![CDATA[u10_Inferior_temporal_pole]]></Label>

You can also export the label table as ASCII text.  But each vertex is 
associated with one of these keys in the label.gii file.
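
If you want to poke at this outside a viewer, one option (assuming Connectome Workbench is available; the file name comes from the Conte69 example above) is to re-encode the file with ASCII data arrays and read it as text -- the label table sits near the top and the per-vertex keys follow in the data array:

  wb_command -gifti-convert ASCII RSN-networks.R.164k_fs_LR.label.gii RSN-networks.R.ascii.label.gii
  less RSN-networks.R.ascii.label.gii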

Donna


On Aug 28, 2014, at 2:27 PM, Tang, Yan yan.t...@ttu.edu wrote:

  I still have a problem. How can I know which brain region every vertex 
  belongs to?  
 
 From: Tang, Yan
 Sent: Friday, August 15, 2014 12:27 PM
 To: Caret, SureFit, and SuMS software users
 Subject: RE: [caret-users] projecting functional MRI to gii surfaces
 
 Thank all of you. Thank you very much.
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [do...@brainvis.wustl.edu]
 Sent: Wednesday, August 13, 2014 2:27 PM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 Hi Yan,
 
 I'm not sure about wb_import, but I know it won't downsample for you.  This 
 will, I hope:
 
 http://brainmap.wustl.edu/pub/donna/ATLASES/HUMAN/FSLR/downsample164_to_32k.zip
 login pub
 password download
 
 There is a script in there you will need to tweak to put in your pathnames 
 and subject list.  I tried it on some freesurfer_to_fs_LR output I had lying 
 around, and it worked.  The zip file also contains the spheres you need:
 
 ExtractDir=/home/donna/downsample164_to_32k
 SubjectDir=/mnt/myelin/donna/SUBJ/fs_LR_output_directory
 ResamplingMethod=BARYCENTRIC
 
 for Subject in $SubjList
 do
   for Hemisphere in L R
   do
     CoordInput=$SubjectDir/$Subject/$Subject.$Hemisphere.midthickness_orig.164k_fs_LR.coord.gii
     TopoInput=$SubjectDir/$Subject/$Subject.$Hemisphere.164k_fs_LR.topo.gii
     SurfaceInput=$SubjectDir/$Subject/$Subject.$Hemisphere.midthickness_orig.164k_fs_LR.surf.gii
     caret_command -file-convert -sc -is CARET $CoordInput $TopoInput -os GS $SurfaceInput
     CurrentSphere=$ExtractDir/spheres/standard.$Hemisphere.sphere.164k_fs_LR.surf.gii
     NewSphere=$ExtractDir/spheres/standard.$Hemisphere.sphere.32k_fs_LR.surf.gii
     SurfaceOutput=$SubjectDir/$Subject/$Subject.$Hemisphere.midthickness_orig.32k_fs_LR.surf.gii
     wb_command -surface-resample \
       $SurfaceInput \
       $CurrentSphere \
       $NewSphere \
       $ResamplingMethod \
       $SurfaceOutput
   done
 done
 
 Donna
 
 
 On Aug 13, 2014, at 10:42 AM, Tang, Yan yan.t...@ttu.edu wrote:
 
  You mean I can finish this work by using the Connectome Workbench. So, the 
  first thing I need to do is to convert all files to Workbench format by 
  using wb_import. Is that true?
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Timothy Coalson 
 [tsc...@mst.edu]
 Sent: Tuesday, August 12, 2014 7:16 PM
 To: Caret, SureFit, and SuMS software users; Donna Dierker
 Subject: Re: [caret-users] projecting functional MRI to gii

Re: [caret-users] Niftii integers and PHT00 atlas acronyms

2014-08-21 Thread Donna Dierker
Try saving the paint volume as wunil ifh format.  Then read the resulting .ifh 
file (a text file).  The integer : label mappings are there, but there is a 
translation of two, if I recall correctly.

You could also convert the surface paint file to text and look just after the 
header; that mapping should require no offset, but I'm not 100% certain your 
surface and volume indices will match.  The volume ifh is safer, and the offset 
is constant across labels.  I think it is +2 -- not certain.


On Aug 21, 2014, at 6:47 AM, Goulas Alexandros (PSYCHOLOGY) 
alexandros.gou...@maastrichtuniversity.nl wrote:

 Dear all,
 
 
   I have exported the PHT00 atlas from Caret to niftii with the paint to 
 volume function. The niftii volume consists of unique integers in every voxel 
 denoting an area in PHT00. Can you please indicate how I can get the 
 correspondances between these integers and the acronyms of the PHT00 areas?
 
 many thanks in advance.
 
 
 Alex
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] projecting functional MRI to gii surfaces

2014-08-13 Thread Donna Dierker
There are these deform_map files in the output of freesurfer_to_fs_LR:

./SAIS_216_MR1/164k_fs_164k_fs_LR_to_initial_mesh.R.deform_map
./SAIS_216_MR1/initial_mesh_to_164k_fs_LR.R.deform_map
./SAIS_216_MR1/164k_fs_164k_fs_LR_to_initial_mesh.L.deform_map
./SAIS_216_MR1/initial_mesh_to_164k_fs_LR.L.deform_map

But I don't think they will work for this downsampling.  I think Tim's 
surface-resample approach is better for this context.  I will come up with what 
works on one of my existing freesurfer_to_fs_LR output directories and post it 
here when I have it working.


On Aug 12, 2014, at 7:16 PM, Timothy Coalson tsc...@mst.edu wrote:

 -spec-file-change-resolution will not get you to the fs_LR 32k atlas from 
 fs_LR 164k (but it may get you deceptively close, making it even more 
 treacherous).  Those messages aren't errors, and the reasons behind them are 
 better left alone, as this isn't the command you want.
 
 What you need to do is to use the fs_LR atlas files for resampling the 
 surface.  In caret5, this requires deformation map files, which we have 
 probably already made for going between fs_LR 32k and 164k (Donna, do you 
 know if these are available?), with the -deformation-map-apply command.  
 However, we now do this in Connectome Workbench using atlas spheres directly 
 with the -surface-resample command (the fs_LR 32k and 164k spheres are 
 aligned by definition, but going to or from other atlases will need a 
 cross-atlas registered sphere).
 
 Tim
 
 
 
 On Tue, Aug 12, 2014 at 4:33 PM, Tang, Yan yan.t...@ttu.edu wrote:
 Sorry, I have run into another problem.  I used the Freesurfer_to_fs_LR Pipeline to 
 get a 164k fs_LR surface. I think that is too many vertices, so I want to 
 down-sample to a 32,492 vertex surface. I used caret_command 
 -spec-file-change-resolution. 
 But the error was as follows:
 Nonstandard resolution specified...
 Using closest divided icosahedron, with 32492 nodes.
 Can you explain it?
 If I change the number, I find that only a few files are created, such as 
 def_sphere.coord, def_sphere.deform_map, study1.R.2k_fs_LR.topo.gii and 
 study1.R.mini+orig.2k_fs_LR.spc. Many files such as curvature.shape.gii, 
 inflated.coord.gii, midthickness.coord.gii, pial.coord.gii, 
 thickness.shape.gii and white.coord.gii are not changed. So I must be doing 
 something wrong. How should I do it?
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Timothy Coalson 
 [tsc...@mst.edu]
 Sent: Monday, August 11, 2014 3:54 PM
 
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 
 On Mon, Aug 11, 2014 at 2:14 PM, Tang, Yan yan.t...@ttu.edu wrote:
 Yes,I have a lot of volumes which need be projected to surface. I only know 
 how to use the 'map volume to surface '. I don't know how to use the command. 
 Could you give me an example?
 Can the file of *.coord.gii be thought as the coordinate-file-name file? 
 But I only found .coord file can be used in the menu of caret command 
 executor . How about topo?
 
 .coord.gii should work; if it doesn't, rename or copy the file to end in just 
 .coord.  The topo is the same file you need to load to be able to view the 
 surface.
  
 another thing is how to set the input-metric-or-paint-file-name?
 
 From the pasted help:
 
 If the input metric or paint file name is not an empty string (""), the 
 newly create metric or paint columns will be appended to the file and then 
 written with the output file name.
 
 In other words, if you don't want to append the columns to an existing metric 
 file, use "" (a pair of double quotes) for the argument.
 
 Since you asked something related on the hcp_users list (wb_command volume to 
 surface mapping), I will recommend that you try using wb_command for this, as 
 caret5 is no longer under active development.  The main hurdle in moving to 
 Workbench is converting the separate coord/topo files into the combined 
 .surf.gii format (with caret_command -file-convert with the -sc option).
  
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [do...@brainvis.wustl.edu]
 Sent: Tuesday, August 05, 2014 9:14 AM
 To: Caret, SureFit, and SuMS software users
 Cc: Tang, Yiyuan
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 I'm not clear on what you mean by "I want get fMRI time course for surface 
 vertices of every subject."
 
 If you just mean how do you scale up -- map that many volumes to all your 
 subjects -- then I recommend scripting it and using caret_command.  (Note 
 that Workbench, the software that is superseding Caret 5.*, has more robust 
 mapping features than caret_command, but I am going to provide the 
 caret_command usage, since this is the caret-users list.  There is also a 
 hcp-users list that covers workbench.)
 
 Here is the usage for the command that maps volumes onto

Re: [caret-users] projecting functional MRI to gii surfaces

2014-08-13 Thread Donna Dierker
Hi Yan,

I'm not sure about wb_import, but I know it won't downsample for you.  This 
will, I hope:

http://brainmap.wustl.edu/pub/donna/ATLASES/HUMAN/FSLR/downsample164_to_32k.zip
login pub
password download

There is a script in there you will need to tweak to put in your pathnames and 
subject list.  I tried it on some freesurfer_to_fs_LR output I had lying 
around, and it worked.  The zip file also contains the spheres you need:

ExtractDir=/home/donna/downsample164_to_32k
SubjectDir=/mnt/myelin/donna/SUBJ/fs_LR_output_directory
ResamplingMethod=BARYCENTRIC

for Subject in $SubjList
do
 for Hemisphere in L R
 do
   
   CoordInput=$SubjectDir/$Subject/$Subject.$Hemisphere.midthickness_orig.164k_fs_LR.coord.gii
   TopoInput=$SubjectDir/$Subject/$Subject.$Hemisphere.164k_fs_LR.topo.gii
   SurfaceInput=$SubjectDir/$Subject/$Subject.$Hemisphere.midthickness_orig.164k_fs_LR.surf.gii
   caret_command -file-convert -sc -is CARET $CoordInput $TopoInput -os GS $SurfaceInput
   CurrentSphere=$ExtractDir/spheres/standard.$Hemisphere.sphere.164k_fs_LR.surf.gii
   NewSphere=$ExtractDir/spheres/standard.$Hemisphere.sphere.32k_fs_LR.surf.gii
   SurfaceOutput=$SubjectDir/$Subject/$Subject.$Hemisphere.midthickness_orig.32k_fs_LR.surf.gii
   wb_command -surface-resample \
     $SurfaceInput \
     $CurrentSphere \
     $NewSphere \
     $ResamplingMethod \
     $SurfaceOutput
 done
done

Donna


On Aug 13, 2014, at 10:42 AM, Tang, Yan yan.t...@ttu.edu wrote:

 You mean I can finish this work by using Connectome Workbench. So the 
 first thing I need to do is to convert all files to Workbench format using 
 wb_import. Is that true?
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Timothy Coalson 
 [tsc...@mst.edu]
 Sent: Tuesday, August 12, 2014 7:16 PM
 To: Caret, SureFit, and SuMS software users; Donna Dierker
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 -spec-file-change-resolution will not get you to the fs_LR 32k atlas from 
 fs_LR 164k (but it may get you deceptively close, making it even more 
 treacherous).  Those messages aren't errors, and the reasons behind them are 
 better left alone, as this isn't the command you want.
 
 What you need to do is to use the fs_LR atlas files for resampling the 
 surface.  In caret5, this requires deformation map files, which we have 
 probably already made for going between fs_LR 32k and 164k (Donna, do you 
 know if these are available?), with the -deformation-map-apply command.  
 However, we now do this in Connectome Workbench using atlas spheres directly 
 with the -surface-resample command (the fs_LR 32k and 164k spheres are 
 aligned by definition, but going to or from other atlases will need a 
 cross-atlas registered sphere).
 
 Tim
 
 
 
 On Tue, Aug 12, 2014 at 4:33 PM, Tang, Yan yan.t...@ttu.edu wrote:
 Sorry, I have run into another problem.  I used the Freesurfer_to_fs_LR Pipeline to 
 get a 164k fs_LR surface. I think that is too many vertices, so I want to 
 down-sample to a 32,492 vertex surface. I used caret_command 
 -spec-file-change-resolution. 
 But the error was as follows:
 Nonstandard resolution specified...
 Using closest divided icosahedron, with 32492 nodes.
 Can you explain it?
 If I change the number, I find that only a few files are created, such as 
 def_sphere.coord, def_sphere.deform_map, study1.R.2k_fs_LR.topo.gii and 
 study1.R.mini+orig.2k_fs_LR.spc. Many files such as curvature.shape.gii, 
 inflated.coord.gii, midthickness.coord.gii, pial.coord.gii, 
 thickness.shape.gii and white.coord.gii are not changed. So I must be doing 
 something wrong. How should I do it?
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Timothy Coalson 
 [tsc...@mst.edu]
 Sent: Monday, August 11, 2014 3:54 PM
 
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 
 On Mon, Aug 11, 2014 at 2:14 PM, Tang, Yan yan.t...@ttu.edu wrote:
 Yes,I have a lot of volumes which need be projected to surface. I only know 
 how to use the 'map volume to surface '. I don't know how to use the command. 
 Could you give me an example?
 Can the file of *.coord.gii be thought as the coordinate-file-name file? 
 But I only found .coord file can be used in the menu of caret command 
 executor . How about topo?
 
 .coord.gii should work; if it doesn't, rename or copy the file to end in just 
 .coord.  The topo is the same file you need to load to be able to view the 
 surface.
  
 another thing is how to set the input-metric-or-paint-file-name?
 
 From the pasted help:
 
 If the input metric or paint file name is not an empty string (""), the 
 newly create metric or paint columns will be appended to the file and then 
 written with the output file name.
 
 In other words, if you don't want to append the columns to an existing metric 
 file, use "" (a pair of double quotes) for the argument

Re: [caret-users] projecting functional MRI to gii surfaces

2014-08-06 Thread Donna Dierker
First, I want to point out that there is a CIFTI matlab toolkit, I think, but I 
know very little about it (or matlab in general).  What I do know:

* The freesurfer_to_fs_LR pipeline probably doesn't generate CIFTI output.  But 
the forthcoming Human Connectome Project (HCP) pipeline does.
* Questions about the CIFTI matlab toolkit might better be posed to the CIFTI 
forum on nitrc.org (http://www.nitrc.org/forum/?group_id=454).

Using the caret5/caret_command tools, it is possible to convert the 
metric/func.gii files to ASCII, e.g.:

caret_command -file-convert -format-convert ASCII my_fmri.metric

… or:

caret_command -file-convert -format-convert ASCII my_fmri.func.gii

As for the coordinates, since you have the individual midthickness surfaces on 
164k mesh, you can use those for the unprojection command.  If you really want a 
grid, then it is a border file you will get out of caret:

http://brainvis.wustl.edu/CaretHelpAccount/caret5_help/file_formats/file_formats.html#borderFile

Here is the Alex Cohen paper:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2705206/

This is not a trivial task. ;-)


On Aug 5, 2014, at 3:13 PM, Tang, Yan yan.t...@ttu.edu wrote:

 Maybe I am doing something wrong.  I used the Freesurfer_to_fs_LR Pipeline to 
 get a 164k fs_LR surface. Then I want to get the points in the 164k fs_LR surface 
 and use these points as seeds to analyze the resting state functional 
 connectivity. So I projected the functional MRI to the surface. But after that, I 
 don't know what to do in the next step. I get metric files, but these files 
 cannot be read in Matlab. And I also don't get the coordinates of the points 
 in the 164k fs_LR surface.
 
 Just you mention the method of Alex Cohen. I am  beginner of Caret. Could you 
 tell me the method in detail?
 
 thank you
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [do...@brainvis.wustl.edu]
 Sent: Tuesday, August 05, 2014 10:26 AM
 To: Caret, SureFit, and SuMS software users
 Cc: Tang, Yiyuan
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 You can do that, but I am not used to seeing this done with the mean 
 midthickness surface.  Alex Cohen did something like this when he was using 
 resting state functional connectivity to find gradients in a subject's 
 cortical networks.  He used the PALS flat or spherical map to get a common 
 grid onto a standard mesh.  Once there, he projected that grid onto the 
 individual's midthickness surface on the standard mesh.  (Like the grid is 
 getting folded back up into the individual's anatomical pattern.)  Then he 
 unprojected the points to use as seeds for his analysis.
 
 Do you mind if I ask how the mean midthickness surface comes into play?
 
 Caret's Layers: Borders has options for making grids on the flat map.  People 
 make grids on the sphere in matlab.
 
 There are caret_command tools for unprojecting borders.
 
  caret_command -surface-border-unprojection
 
 Border files differ from border projection files in that they are points not 
 tied to a particular mesh.  The advantage is that you can open the same 
 border point on, say, a native and 164k mesh, and it will align with both, if 
 it aligns with one (and they are identical except for mesh).  The advantage 
 of borderproj files is that they open on multiple configurations - flat, 
 midthickness, inflated, etc. - but the price is that you're tied to a mesh.
 
 
 On Aug 5, 2014, at 10:06 AM, Tang, Yan yan.t...@ttu.edu wrote:
 
 Thank you for your help. Another problem is how to use the Caret software to 
 generate  regularly spaced Cartesian grids  on the flattened PALS-B12 
 average surface of the left and right hemispheres.  Can I use this 
 3-dimensional (3D) stereotactic coordinates from the PALS-B12 average 
 fiducial (midthickness) surface for each grid location to obtain the voxel 
 coordinates (3 × 3 × 3 mm resolution) containing that point in fMRI?
 
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [donna.dier...@sbcglobal.net]
 Sent: Friday, August 01, 2014 5:34 PM
 To: Caret, SureFit, and SuMS software users
 Cc: Tang, Yiyuan
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 Push Toolbar: D/C and make sure the primary overlay is Metric.
 
 Make sure the right column is selected.
 
 If that checks out okay, then I would do:
 
 File: Open Data File: Volume Functional File
 Load the volume you just mapped
 Switch to volume view and select view All (as opposed to H, horizontal or 
 axial).
 Select D/C and on the page selection drop-down menu, scroll all the way to 
 the bottom
   something like volume surface outline
 Toggle on the fiducial surface used for the mapping, so that you can see how 
 the surface aligns with the volume.
 
 Sometimes there are header issues

Re: [caret-users] projecting functional MRI to gii surfaces

2014-08-05 Thread Donna Dierker
I'm not clear on what you mean by "I want get fMRI time course for surface 
vertices of every subject."

If you just mean how do you scale up -- map that many volumes to all your 
subjects -- then I recommend scripting it and using caret_command.  (Note that 
Workbench, the software that is superseding Caret 5.*, has more robust mapping 
features than caret_command, but I am going to provide the caret_command usage, 
since this is the caret-users list.  There is also a hcp-users list that covers 
workbench.)

Here is the usage for the command that maps volumes onto surfaces:

  caret_command -volume-map-to-surface  
 coordinate-file-name
 topology-file-name
 input-metric-or-paint-file-name
 output-metric-or-paint-file-name
 algorithm
 input-volume-file-names
 [-av  average-voxel-neighbor-cube-size (mm)]
 [-bf  brain-fish-max-distance
   brain-fish-splat factor]
 [-g   gaussian-neighbor-cube-size (mm)
   sigma-norm
   sigma-tang
   norm below cutoff (mm)
   norm above cutoff (mm)
   tang-cutoff (mm)]
 [-mv  maximum-voxel-neighbor-cube-size (mm)]
 [-sv  strongest-voxel-neighbor-cube-size (mm)]
 
 Map volume(s) to a surface metric or paint file.
 
 For successful mapping, both the surface and the volume
 must be in the same stereotaxic space.
 
 algorithm is one of:
METRIC_AVERAGE_NODES
METRIC_AVERAGE_VOXEL
METRIC_ENCLOSING_VOXEL
METRIC_GAUSSIAN
METRIC_INTERPOLATED_VOXEL
METRIC_MAXIMUM_VOXEL
METRIC_MCW_BRAIN_FISH
METRIC_STRONGEST_VOXEL
PAINT_ENCLOSING_VOXEL
 
 If the input metric or paint file name is not an empty string
  (""), the newly create metric or paint columns will be 
 appended to the file and then written with the output file 
 name.
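
 For example, a call along these lines should work (a minimal sketch with 
 placeholder file names, using the enclosing-voxel algorithm; you can list 
 several volumes at the end, and each mapped volume should end up as a 
 column in the output metric file):

 caret_command -volume-map-to-surface \
   study1.L.midthickness.164k_fs_LR.coord.gii \
   study1.L.164k_fs_LR.topo.gii \
   "" \
   ff001_010_mapped.metric \
   METRIC_ENCLOSING_VOXEL \
   ff001_010.nii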



On Aug 4, 2014, at 3:49 PM, Tang, Yan yan.t...@ttu.edu wrote:

 Thank you for your help.  Now, I meet another problem. I project all 
 functional MRI to 164k fs_LR surface. Every subject have 135 volumes. I want 
 get fMRI time course for surface vertices of every subject. How should I do?
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [donna.dier...@sbcglobal.net]
 Sent: Friday, August 01, 2014 5:34 PM
 To: Caret, SureFit, and SuMS software users
 Cc: Tang, Yiyuan
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 Push Toolbar: D/C and make sure the primary overlay is Metric.
 
 Make sure the right column is selected.
 
 If that checks out okay, then I would do:
 
 File: Open Data File: Volume Functional File
 Load the volume you just mapped
 Switch to volume view and select view All (as opposed to H, horizontal or 
 axial).
 Select D/C and on the page selection drop-down menu, scroll all the way to 
 the bottom
something like volume surface outline
 Toggle on the fiducial surface used for the mapping, so that you can see how 
 the surface aligns with the volume.
 
 Sometimes there are header issues, and the origin is not set correctly, 
 resulting in faulty volume-surface alignment.
 
 
 On Aug 1, 2014, at 4:29 PM, Tang, Yan yan.t...@ttu.edu wrote:
 
 When I open the spec file and mapped the Metric, only the surface was 
 displayed. The result is in the attachment. How should I do?
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [donna.dier...@sbcglobal.net]
 Sent: Friday, August 01, 2014 4:02 PM
 To: Caret, SureFit, and SuMS software users
 Cc: Tang, Yiyuan
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 Hmmm.  Sounds like more than a header to me.
 
 When you open the spec file you selected when you mapped the data, and 
 select the output file that is 446kb, what happens?
 
 You must make sure you select Metric on the D/C: Overlay/Underlay Surface 
 menu (primary or secondary, typically).  Else it won't display.
 
 
 On Aug 1, 2014, at 2:47 PM, Tang, Yan yan.t...@ttu.edu wrote:
 
 I am sure that the file exist and the size of file is 446KB. Is It correct?
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [donna.dier...@sbcglobal.net]
 Sent: Friday, August 01, 2014 10:10 AM
 To: Caret, SureFit, and SuMS software users
 Cc: Tang, Yiyuan
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 Hi Yan,
 
 Could you use a terminal window or file manager to check whether the file 
 exists, and if so, what its size is.
 
 We have seen cases before where the file was just a header -- no data.  
 Inexplicably

[caret-users] Metric file must be provided for the metric mapping

2014-08-05 Thread Donna Dierker
Yesterday, I ran into two arcane issues worth passing on, one of which was 
covered in an earlier thread:

1.  First I tried mapping a paint volume onto an atlas surface, but it turned 
out the volume was scalar values -- not discrete integers as I'd expected for a 
label/ROI volume.  Caret generated paint files that were just headers -- no 
data, but didn't complain.

2.  After realizing these were scalars, I mapped as functional, but the mapping 
algorithm stayed on the paint-related algorithm, as Tony explains here:

http://www.mail-archive.com/caret-users%40brainvis.wustl.edu/msg02887.html

When I tried to map, I got the Metric file must be provided for the metric 
mapping error.  Switching to a metric algorithm resolved the error.

No action required; this is in case someone runs into the same issue and 
searches caret-users for answers.
___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] projecting functional MRI to gii surfaces

2014-08-01 Thread Donna Dierker
Hi Yan,

Could you use a terminal window or file manager to check whether the file 
exists, and if so, what its size is.

We have seen cases before where the file was just a header -- no data.  
Inexplicably, the presence of a non-english character set on the system used 
has caused this sort of trouble.  If there is a system nearby that does not 
have a non-english character set installed, you might see if Caret works there. 
 Or remove any non-english character sets on your system and see if it helps.

Donna


On Jul 31, 2014, at 3:52 PM, Tang, Yan yan.t...@ttu.edu wrote:

 Dear all,
   
 I used  the Freesurfer_to_fs_LR Pipeline to get  164k fs_LR surface. Now I 
 want to map functional volumes to surfaces.
 In volume selection page, I choose my file 'ff001_010.nii'; In spec file and 
 surface selection page, I choose the file 'study1.L.orig.164k_fs_LR'. I get a 
 file 'map_data_0_31_Jul_2014_15_10_13.metric'. But I find I couldn't open 
 this file. Which step is wrong? How can you do it?
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] projecting functional MRI to gii surfaces

2014-08-01 Thread Donna Dierker
Hmmm.  Sounds like more than a header to me.

When you open the spec file you selected when you mapped the data, and select 
the output file that is 446kb, what happens?

You must make sure you select Metric on the D/C: Overlay/Underlay Surface menu 
(primary or secondary, typically).  Else it won't display.


On Aug 1, 2014, at 2:47 PM, Tang, Yan yan.t...@ttu.edu wrote:

 I am sure that the file exist and the size of file is 446KB. Is It correct?
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [donna.dier...@sbcglobal.net]
 Sent: Friday, August 01, 2014 10:10 AM
 To: Caret, SureFit, and SuMS software users
 Cc: Tang, Yiyuan
 Subject: Re: [caret-users] projecting functional MRI to gii surfaces
 
 Hi Yan,
 
 Could you use a terminal window or file manager to check whether the file 
 exists, and if so, what its size is.
 
 We have seen cases before where the file was just a header -- no data.  
 Inexplicably, the presence of a non-english character set on the system used 
 has caused this sort of trouble.  If there is a system nearby that does not 
 have a non-english character set installed, you might see if Caret works 
 there.  Or remove any non-english character sets on your system and see if it 
 helps.
 
 Donna
 
 
 On Jul 31, 2014, at 3:52 PM, Tang, Yan yan.t...@ttu.edu wrote:
 
 Dear all,
 
 I used  the Freesurfer_to_fs_LR Pipeline to get  164k fs_LR surface. Now I 
 want to map functional volumes to surfaces.
 In volume selection page, I choose my file 'ff001_010.nii'; In spec file and 
 surface selection page, I choose the file 'study1.L.orig.164k_fs_LR'. I get 
 a file 'map_data_0_31_Jul_2014_15_10_13.metric'. But I find I couldn't open 
 this file. Which step is wrong? How can you do it?
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] How to load my own file by caret?

2014-07-22 Thread Donna Dierker
Do you mind my asking why you want to view your t-test result on F99, when you 
have it registered to 112RM-SL?  Why not use the 112RM-SL mean underlay?

Is this just for visualization, or is there some analysis you want to do?

It sounds like you need a volumetric registration between the 112RM-SL 
anatomical volume and the F99 volume.  It is possible that Donald McLaren has 
already computed one.

The F99 volume in the old 2006 tutorial was in AFNI format.  If that is the 
volume you want, I converted it to NIFTI:

http://brainmap.wustl.edu/pub/donna/ATLASES/MONKEY/Macaque.F99UA1.LR.03-11.nii
login pub
password download

If you mean a different F99 volume, then point me to your source.

If you do end up computing a volumetric registration between these atlases, 
then make sure your versions match in terms of what is or is not stripped 
(e.g., neither has had eyes, hindbrain, or other structures removed).
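
If you do compute it yourself, a minimal FSL flirt sketch might look like this 
(file names are placeholders; substitute fnirt if you want a nonlinear warp):

flirt -in 112RM-SL_anatomy.nii.gz -ref Macaque.F99UA1.LR.03-11.nii \
      -omat 112RMSL_to_F99.mat -out 112RM-SL_anatomy_to_F99.nii.gz
flirt -in ttest_result_112RMSL.nii.gz -ref Macaque.F99UA1.LR.03-11.nii \
      -applyxfm -init 112RMSL_to_F99.mat -out ttest_result_to_F99.nii.gz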

Again, I would not be surprised if Donald McLaren has done this already.


On Jul 21, 2014, at 8:58 PM, tangzhenc...@fingerpass.net.cn wrote:

 Hi, Donna!
 Many thanks for your explanation. 
 To be correct, the 'individual volume space' should be 'group volume 
 space'; I don't know if the expression is appropriate. In tbss, I register 
 FA of each monkey to the best target (selected by tbss_2_reg -n).
 The format of my 2 sample T test result is *.nii.gz. With FSL's flirt, 
 I have already registered the result to 112RM-SL, which is also in 
 *.nii.gz format. 
 Now, I want to register the result to the F99 volume. But caret seems to 
 have a very different file format, and I don't know which file to set as the 
 reference for the registration.
 
 
 ChrisTang
 tangzhenc...@fingerpass.net.cn
  
 From: Donna Dierker
 Sent: 2014-07-21 23:34
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] How to load my own file by caret?
 Hi Chris,
  
 See inline replies below.
  
 Donna
  
  
 On Jul 19, 2014, at 10:18 PM, ?? 1039537...@qq.com wrote:
  
  Hi, everyone!
 
  I process my monkey dti data on fsl, and have got 2 sample T 
  test result on individual volume space.
  
 You say individual volume space.  Does this mean the data is for a single 
 monkey?  I guess I'm used to thinking two sample t-test is for two groups, 
 but I guess not necessarily.
  
  I want to load my own result file as an overlay by caret and 
  set volume_F99 as underlay, so that I can estimate the brain areal of 
  significant.
  
 You must use something like FSL's flirt or fnirt to register your 
 individual's anatomical scan (T1 or T2) to the F99 anatomical scan.  I assume 
 your two sample t-test results are already in the same space as the T1 or T2. 
  If so, then apply that warp to your two sample t-test results.
  
 Doing this will let you view your test results overlaid on F99, but it won't 
 say anything about significance.  FSL's randomise should have said something 
 about this when you ran the two sample t-test.  You can view these results in 
 fslview, too.
  
  I know how to load a file and set it as an overlay, but I don't 
  know how to register it to F99 before that.
  
 You need a tool that does volumetric registration (e.g., FSL's flirt or 
 fnirt).  There are many others out there as well.
  
  How can I realize that?
 
 
  With thanks, ChrisTang
  ___
  caret-users mailing list
  caret-users@brainvis.wustl.edu
  http://brainvis.wustl.edu/mailman/listinfo/caret-users
  
  
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
  
  
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Issue with displaying voxel intensities

2014-06-19 Thread Donna Dierker
Hi Rikkert,

Please try again.  There was a problem with permissions when I tried just now, 
but I fixed it.

Donna


On Jun 19, 2014, at 1:12 AM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
wrote:

 Hi Donna,
 
 I tried to upload the file but am not sure if it worked though.
 Please let me know...
 
 Kind regards,
 Rikkert
 
 
 On Wed, Jun 18, 2014 at 8:30 PM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 Hi Rickert,
 
 Assuming you used File: Open Data File: type = Volume Anatomy File, you 
 should see a normal T1w image.
 
 If you see solid white, then please upload the volume here, so we can try to 
 replicate the problem:
 
 http://pulvinar.wustl.edu/cgi-bin/upload.cgi
 
 Donna
 
 
 On Jun 18, 2014, at 12:32 PM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
 wrote:
 
 
  Dear all,
 
  When I load a T1-weighted scan into Caret, the image looks like a mask, 
  that is, all white voxels covering the brain. The voxel intensities 
  themselves are, however, nicely distributed and the grey and white matter 
  peaks are clearly visible. Also, the intensities are normalized between 0 
  and 255. This problem does not occur when I view the scan with MRIcron or 
  SPM. Does anyone know what is going on here?
 
  Thanks alot,
  Rikkert
  ___
  caret-users mailing list
  caret-users@brainvis.wustl.edu
  http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] MGH (label file) to caret

2014-06-19 Thread Donna Dierker
Hi Alexander,

I know mris_convert can convert Freesurfer surfaces to GIFTI gii format; I 
think it can do the same with label/overlay files.
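
For example, something along these lines may work (a hedged sketch -- check the 
mris_convert usage in your FreeSurfer version; file names are placeholders):

mris_convert lh.white lh.white.surf.gii
mris_convert -c lh.myROI.mgh lh.white lh.myROI.func.gii

The -c form uses the surface to supply the vertex count when converting a 
per-vertex overlay such as your MGH label file.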

The nice thing about GIFTI is that both caret5 and workbench (wb_view) can read 
GIFTI files.

Donna


On Jun 19, 2014, at 11:26 AM, Alexander Walther awaltherm...@gmail.com wrote:

 dear caret users,
 
 i have an MGH file with labels delineating an ROI in freesurfer. i now 
 simply would like to overlay this ROI in caret. is there an easy way of 
 converting freesurfer labels to caret?
 
 thanks.
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Issue with displaying voxel intensities

2014-06-18 Thread Donna Dierker
Hi Rickert,

Assuming you used File: Open Data File: type = Volume Anatomy File, you should 
see a normal T1w image.

If you see solid white, then please upload the volume here, so we can try to 
replicate the problem:

http://pulvinar.wustl.edu/cgi-bin/upload.cgi

Donna


On Jun 18, 2014, at 12:32 PM, HINDRIKS, RIKKERT rikkert.hindr...@upf.edu 
wrote:

 
 Dear all,
 
 When I load a T1-weighted scan into Caret, the image looks like a mask, that 
 is, all white voxels covering the brain. The voxel intensities themselves 
 are, however, nicely distributed and the grey and white matter peaks are 
 clearly visible. Also, the intensities are normalized between 0 and 255. This 
 problem does not occur when I view the scan with MRIcron or SPM. Does anyone 
 know what is going on here?
 
 Thanks alot,
 Rikkert 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] caret -smooth gives NaNs or values close to zero

2014-05-31 Thread Donna Dierker
Are you sure the coord, topo, and metric files all match (i.e., not only that 
they have the same number of vertices, but also that those vertices are in 
register with one another across data types)?


On May 31, 2014, at 9:28 AM, Alexander Walther awaltherm...@gmail.com wrote:

 hi caret users,
 
 i've just recently installed caret on our linux system and struggle with the 
 smoothing function. as part of my analysis pipeline i want to use caret to 
 smooth a surface map of correlation values. the values look fine before 
 smoothing but after having been passed through caret_command 
 -metric-smoothing (algorithm AN, 25 iterations), the map contains mostly NaNs, infs, 
 or zero values (inspected in MatLab) and has lost its topology. has anyone 
 encountered a similar problem with caret?
 
 advice is very appreciated.
 
 cheers
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] displaying brodmann areas

2014-05-20 Thread Donna Dierker
Hi Gabriel,

More detail on how you created surfaces from my own template might help here, 
but let me extrapolate and provide information that might be relevant.

I assume you want the Brodmann parcellation in the form of surface-based 
paint/label or border form (i.e., the kind of thing shown in the Caret 
tutorials).  These are available on our atlases, e.g., the PALS-B12 and 
Conte69/fs_LR atlases.  If you have your own surface-based atlas, and you want 
to bring the paint/borders David van Essen generated to that atlas, then you 
will need to register your atlas to one of ours.  If you have a mean 
midthickness surface for your atlas, then you can use it, along with less 
folded configurations (e.g., inflated, ellipsoid, spherical) to draw 
registration borders and use surface-based registration to get the two atlases 
in as close correspondence as possible.  If you haven't done it before, it's 
not the easiest thing, but it's not as bad as, say, segmenting post-mortem 
brains.

There are other anatomical atlases out there in volumetric form (e.g., 
automated anatomical labeling / AAL ; the Harvard-Oxford cortical/subcortical 
atlases (http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Atlases).  Depending on how hard 
it is to get your surface in the same stereotaxic space as these atlases, they 
may or may not be helpful.

If you decide to go the surface-based registration route, then you will need a 
spherical configuration of your atlas.  I don't know of a more current tutorial 
than this one:

http://brainvis.wustl.edu/wiki_linked_files/documentation/Caret_5.5_Tutorial_Segment.pdf

I hope someone will speak up if he/she knows of a more current one.

Donna


On May 20, 2014, at 5:19 AM, Gabriel Gonzalez Escamilla ggon...@upo.es wrote:

 Dear caret experts, 
 
 I'm using caret v5.62. I have created surfaces from my own template, and 
 everything worked fine, I can map my results onto the surfaces without a 
 problem by using the Map Volume(s) to Surface(s) tool. 
 
 Now I would like to visualize my clusters (metric) over the brodmann areas in 
 my template surfaces, but I don't know how, or find any manual to do so. Do I 
 need to downlowad the BA_atlas from somewhere or is it included in caret and 
 only need to make some king of mapping to my template surfaces?
 
 Can you guide me on it? Any help will be so appreciated.
 
 Many thanks in advanced,
 Gabriel ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] mapping ALE on surface: error for number of nodes

2014-05-08 Thread Donna Dierker
Could you zip up the Output spec file and its contents and upload the zip file 
here:

http://pulvinar.wustl.edu/cgi-bin/upload.cgi

Also, do you already have a spec file loaded when you select Attributes: Map 
volume to surface?

If so, try this without loading a spec file first.


On May 7, 2014, at 2:09 PM, Audrey-Anne Dubé audreyanned...@gmail.com wrote:

 I understand that the map I see is in fact the myelin smoothing something.
 
 So, in the spec file, when I go to metric > open my metric file, I have a 
 Choose Column to load window but no column has been loaded, and when I click 
 ok, I receive the same error message: 
 
 Error PDHCEM_Conte69_LEFT_midthickness_164k_fs_LR_7_May_2014_14_58_48.metric: 
 contains a different number of nodes than 
 Conte69.L.midthickness.164k_fs_LR.coord.gii
 
 
 
 
 
 Audrey-Anne Dubé
 Candidate au PhD R/I neuropsychologie
 Université de Montréal
 
 
 On Wed, May 7, 2014 at 2:52 PM, Audrey-Anne Dubé audreyanned...@gmail.com 
 wrote:
 Ok I see an overlay! I didn't know I had to use Scene.
 So that's good! But the problem now is that I the overlay is not related to 
 my statistical data, as you can see with the screen shot of my file in Mango 
 vs in Caret.
 
 
 Audrey-Anne Dubé
 Candidate au PhD R/I neuropsychologie
 Université de Montréal
 
 
 On Wed, May 7, 2014 at 2:20 PM, Audrey-Anne Dubé audreyanned...@gmail.com 
 wrote:
 Hi,
 
 My volume is in Tal space. Is this a problem?
 
 Audrey-Anne Dubé
 Candidate au PhD R/I neuropsychologie
 Université de Montréal
 
 
 On Wed, May 7, 2014 at 11:30 AM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 Another thought occurred to me:  If you select a 32k version of the Conte69 
 atlas as your spec file, then you'll get an error when you try to open the 
 164k vertex metric file.  I'm sure there is a way to downsample the metric 
 file, and there are also mapping features in Workbench's wb_command, if you 
 want to map directly to 32k mesh surfaces.  But we don't have a 32k version 
 of Conte69 in caret5.
 
 
 On May 7, 2014, at 9:58 AM, Donna Dierker do...@brainvis.wustl.edu wrote:
 
  Hi Audrey-Anne,
 
  Erin and I tried to do that, but you can't see the menu picks very clearly 
  in the video:
 
  https://www.youtube.com/watch?v=jED8sg9szdU
 
  But here are the steps:
 
  Download conte69 atlas:
 
  http://sumsdb.wustl.edu/sums/archivelist.do?archive_id=8293720&archive_name=Conte69_atlas_v2.LR.164k_fs_LR.c5.zip
 
  Unpack and cd to the Conte69_164k_fs_LR.c5 directory.
 
  Launch caret5.
 
  Cancel when spec file dialog comes up.
 
  Attributes: Map volume to surface
 
  Data mapping type: Metric; Next
 
  Add volumes from disk; select volume in MNI space; next
 
  Map to spec file with atlas
Note: If you don't see this, then your caret installation is 
  incomplete
  or confused about its parent directory.  Launch from command line.
 
  Output spec file: select Conte69_atlas-v2.L.164k_fs_LR.c5.spec
  Space: FNIRT
  Atlas: Conte69 Map LEFT midthickness...
 
  Next
 
  Accept default metric filename
 
  Mapping Algorithm:  Enclosing and interpolated are the most popular
 
  Next
 
  Close when Summary appears
 
  File: Open Spec File: Conte69_atlas-v2.L.164k_fs_LR.c5.spec
 
  Load scenes
 
  Double-click first scene (Conte69 midthickness and inflated...)
 
  Toolbar: Spec: Metric: map_data_0_Conte69...
Erase all existing columns
 
  Toolbar: D/C: Page Selection: Overlay/underlay Surface
Primary overlay
Data type metric metric
 
  You should see it.
 
  Donna
 
 
  On May 6, 2014, at 7:36 PM, Audrey-Anne Dubé audreyanned...@gmail.com 
  wrote:
 
  Hi Donna,
 
  As you suggested, I tried with the Conte69 atlas (164k and 74k), but same 
  results. The metric file is recorded in the spec file, but the same error 
  shows about different  number of nodes.
 
  Thanks for paying attention to my problem.
 
  Would it be possible for you to make an online demonstration, with a 
  screen sharing or a screen cast tool? In this way, we may find a way to 
  solve the issue faster.
 
 
 
 
  Audrey-Anne Dubé
  Candidate au PhD R/I neuropsychologie
  Université de Montréal
 
 
  On Tue, May 6, 2014 at 9:50 AM, Donna Dierker do...@brainvis.wustl.edu 
  wrote:
  Audrey,
 
  I will look at this more closely later, but this is the step where you 
  should be downloading the Conte69 atlas and using one of its visualization 
  specs instead of this spec out of the distribution directory:
 
  8. select in  fmri_mapping_files  
  Human.Conte69.midthickness_FNIRT_fMRI-MAPPER.LEFT.164k_fs_LR.spec   as 
  you suggested
 
  The Conte69 atlas is here:
 
  http://brainvis.wustl.edu/wiki/index.php/Caret:Atlases:Conte69_Atlas
 
  I wonder if your writing to the fmri_mapping_files that are intended to be 
  read-only might cause trouble.
 
  Also, I would not expect to find a SPHERE or CMW configuration in that 
  fmri_mapping_files spec.
 
  Will look more at this later, but here are some hints as to what looks 
  amiss

Re: [caret-users] mapping ALE on surface: error for number of nodes

2014-05-07 Thread Donna Dierker
Hi Audrey-Anne,

Erin and I tried to do that, but you can't see the menu picks very clearly in 
the video:

https://www.youtube.com/watch?v=jED8sg9szdU

But here are the steps:

Download conte69 atlas:

http://sumsdb.wustl.edu/sums/archivelist.do?archive_id=8293720&archive_name=Conte69_atlas_v2.LR.164k_fs_LR.c5.zip

Unpack and cd to the Conte69_164k_fs_LR.c5 directory.

Launch caret5.

Cancel when spec file dialog comes up.

Attributes: Map volume to surface

Data mapping type: Metric; Next

Add volumes from disk; select volume in MNI space; next

Map to spec file with atlas
Note: If you don't see this, then your caret installation is incomplete
or confused about its parent directory.  Launch from command line.

Output spec file: select Conte69_atlas-v2.L.164k_fs_LR.c5.spec
Space: FNIRT
Atlas: Conte69 Map LEFT midthickness...

Next

Accept default metric filename

Mapping Algorithm:  Enclosing and interpolated are the most popular

Next

Close when Summary appears

File: Open Spec File: Conte69_atlas-v2.L.164k_fs_LR.c5.spec

Load scenes

Double-click first scene (Conte69 midthickness and inflated...)

Toolbar: Spec: Metric: map_data_0_Conte69...
Erase all existing columns

Toolbar: D/C: Page Selection: Overlay/underlay Surface
Primary overlay
Data type metric metric

You should see it.

Donna


On May 6, 2014, at 7:36 PM, Audrey-Anne Dubé audreyanned...@gmail.com wrote:

 Hi Donna,
 
 As you suggested, I tried with the Conte69 atlas (164k and 74k), but same 
 results. The metric file is recorded in the spec file, but the same error 
 shows about different  number of nodes.
 
 Thanks for paying attention to my problem.
 
 Would it be possible for you to make an online demonstration, with a screen 
 sharing or a screen cast tool? In this way, we may find a way to solve the 
 issue faster.
 
 
 
 
 Audrey-Anne Dubé
 Candidate au PhD R/I neuropsychologie
 Université de Montréal
 
 
 On Tue, May 6, 2014 at 9:50 AM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 Audrey,
 
 I will look at this more closely later, but this is the step where you should 
 be downloading the Conte69 atlas and using one of its visualization specs 
 instead of this spec out of the distribution directory:
 
  8. select in  fmri_mapping_files  
  Human.Conte69.midthickness_FNIRT_fMRI-MAPPER.LEFT.164k_fs_LR.spec   as you 
  suggested
 
 The Conte69 atlas is here:
 
 http://brainvis.wustl.edu/wiki/index.php/Caret:Atlases:Conte69_Atlas
 
 I wonder if your writing to the fmri_mapping_files that are intended to be 
 read-only might cause trouble.
 
 Also, I would not expect to find a SPHERE or CMW configuration in that 
 fmri_mapping_files spec.
 
 Will look more at this later, but here are some hints as to what looks amiss.
 
 Donna
 
 
 On May 5, 2014, at 10:28 AM, Audrey-Anne Dubé audreyanned...@gmail.com 
 wrote:
 
  Here the attached image
 
  Audrey-Anne Dubé
  Candidate au PhD R/I neuropsychologie
  Université de Montréal
 
 
  On Mon, May 5, 2014 at 11:23 AM, Audrey-Anne Dubé 
  audreyanned...@gmail.com wrote:
  Dear Donna,
 
  It is not a read-only issue since the spec files are modified when I do the 
  mapping. Here is exactly what I did:
 
  1. open CARET v5.65
  2. Attributes  Map volume(s) to surface(s)
  3. chose Metric for Data mapping type
  4. Enable entry of volume threshold
  5. Add volumes from disk -- selected my ALE map, which is a nifti image, 
  in Talairach format
  6. Entered the volume thresholding in the pop-up window (positive: 0.028; 
  negative: 0.00)
  7. Map to spec file with Atlas
  8. select in  fmri_mapping_files  
  Human.Conte69.midthickness_FNIRT_fMRI-MAPPER.LEFT.164k_fs_LR.spec   as you 
  suggested
  9. Space : FNIRT; Atlas: Conte69 Map LEFT midthickness (164k_LR nodes)
  10. idem 8 and 9, but with RIGHT
  11. renamed by data files (L and R)
  12. Mapping algorith: METRIC_ENCLOSING_VOXEL
  13. ok (or next), heard a bip like there was an error, but the summary 
  shows
  14. Close
  15. Open spec file  
  Human.Conte69.midthickness_FNIRT_fMRI-MAPPER.LEFT.164k_fs_LR.spec
  16. check my metric file (please see joint image)  load
  17. Got this error message
 
  Error 
  PDHCEM_Conte69_LEFT_midthickness_164k_fs_LR_5_May_2014_11_13_13.metric: 
  Error 
  PDHCEM_Conte69_LEFT_midthickness_164k_fs_LR_5_May_2014_11_13_13.metric: 
  contains a different number of nodes than 
  Conte69.L.midthickness.164k_fs_LR.coord.gii
 
  Pressed ok
 
  18. The brain image is displayed, but without my data on it
 
  Can you see if I did something wrong?
 
  Thanks very much
 
 
  Audrey-Anne Dubé
  Candidate au PhD R/I neuropsychologie
  Université de Montréal
 
 
  On Wed, Apr 30, 2014 at 3:22 PM, Donna Dierker do...@brainvis.wustl.edu 
  wrote:
  Hi Audrey,
 
  Hmmm.  I wonder if this might be because the spec file you name below is 
  part of the Caret distribution, and it might be read-only.  In fact, it's a 
  good idea for it to be read-only.  The files under data_files

Re: [caret-users] mapping ALE on surface: error for number of nodes

2014-05-07 Thread Donna Dierker
Another thought occurred to me:  If you select a 32k version of the Conte69 
atlas as your spec file, then you'll get an error when you try to open the 164k 
vertex metric file.  I'm sure there is a way to downsample the metric file, and 
there are also mapping features in Workbench's wb_command, if you want to map 
directly to 32k mesh surfaces.  But we don't have a 32k version of Conte69 in 
caret5.
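
For what it's worth, wb_command has a -metric-resample operation that should 
handle the downsampling; a sketch, assuming standard fs_LR sphere names and 
placeholder metric names:

wb_command -metric-resample \
  my_map.L.164k_fs_LR.func.gii \
  standard.L.sphere.164k_fs_LR.surf.gii \
  standard.L.sphere.32k_fs_LR.surf.gii \
  BARYCENTRIC \
  my_map.L.32k_fs_LR.func.gii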


On May 7, 2014, at 9:58 AM, Donna Dierker do...@brainvis.wustl.edu wrote:

 Hi Audrey-Anne,
 
 Erin and I tried to do that, but you can't see the menu picks very clearly in 
 the video:
 
 https://www.youtube.com/watch?v=jED8sg9szdU
 
 But here are the steps:
 
 Download conte69 atlas:
 
 http://sumsdb.wustl.edu/sums/archivelist.do?archive_id=8293720&archive_name=Conte69_atlas_v2.LR.164k_fs_LR.c5.zip
 
 Unpack and cd to the Conte69_164k_fs_LR.c5 directory.
 
 Launch caret5.
 
 Cancel when spec file dialog comes up.
 
 Attributes: Map volume to surface
 
 Data mapping type: Metric; Next
 
 Add volumes from disk; select volume in MNI space; next
 
 Map to spec file with atlas
   Note: If you don't see this, then your caret installation is incomplete
 or confused about its parent directory.  Launch from command line.
 
 Output spec file: select Conte69_atlas-v2.L.164k_fs_LR.c5.spec
 Space: FNIRT
 Atlas: Conte69 Map LEFT midthickness...
 
 Next
 
 Accept default metric filename
 
 Mapping Algorithm:  Enclosing and interpolated are the most popular
 
 Next
 
 Close when Summary appears
 
 File: Open Spec File: Conte69_atlas-v2.L.164k_fs_LR.c5.spec
 
 Load scenes
 
 Double-click first scene (Conte69 midthickness and inflated...)
 
 Toolbar: Spec: Metric: map_data_0_Conte69...
   Erase all existing columns
 
 Toolbar: D/C: Page Selection: Overlay/underlay Surface
   Primary overlay
   Data type metric metric
 
 You should see it.
 
 Donna
 
 
 On May 6, 2014, at 7:36 PM, Audrey-Anne Dubé audreyanned...@gmail.com wrote:
 
 Hi Donna,
 
 As you suggested, I tried with the Conte69 atlas (164k and 74k), but same 
 results. The metric file is recorded in the spec file, but the same error 
 shows about different  number of nodes.
 
 Thanks for paying attention to my problem.
 
 Would it be possible for you to make an online demonstration, with a screen 
 sharing or a screen cast tool? In this way, we may find a way to solve the 
 issue faster.
 
 
 
 
 Audrey-Anne Dubé
 Candidate au PhD R/I neuropsychologie
 Université de Montréal
 
 
 On Tue, May 6, 2014 at 9:50 AM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 Audrey,
 
 I will look at this more closely later, but this is the step where you 
 should be downloading the Conte69 atlas and using one of its visualization 
 specs instead of this spec out of the distribution directory:
 
 8. select in  fmri_mapping_files  
 Human.Conte69.midthickness_FNIRT_fMRI-MAPPER.LEFT.164k_fs_LR.spec   as you 
 suggested
 
 The Conte69 atlas is here:
 
 http://brainvis.wustl.edu/wiki/index.php/Caret:Atlases:Conte69_Atlas
 
 I wonder if your writing to the fmri_mapping_files that are intended to be 
 read-only might cause trouble.
 
 Also, I would not expect to find a SPHERE or CMW configuration in that 
 fmri_mapping_files spec.
 
 Will look more at this later, but here are some hints as to what looks amiss.
 
 Donna
 
 
 On May 5, 2014, at 10:28 AM, Audrey-Anne Dubé audreyanned...@gmail.com 
 wrote:
 
 Here the attached image
 
 Audrey-Anne Dubé
 Candidate au PhD R/I neuropsychologie
 Université de Montréal
 
 
 On Mon, May 5, 2014 at 11:23 AM, Audrey-Anne Dubé 
 audreyanned...@gmail.com wrote:
 Dear Donna,
 
 It is not a read-only issue since the spec files are modified when I do the 
 mapping. Here is exactly what I did:
 
 1. open CARET v5.65
 2. Attributes  Map volume(s) to surface(s)
 3. chose Metric for Data mapping type
 4. Enable entry of volume threshold
 5. Add volumes from disk -- selected my ALE map, which is a nifti image, 
 in Talairach format
 6. Entered the volume thresholding in the pop-up window (positive: 0.028; 
 negative: 0.00)
 7. Map to spec file with Atlas
 8. select in  fmri_mapping_files  
 Human.Conte69.midthickness_FNIRT_fMRI-MAPPER.LEFT.164k_fs_LR.spec   as you 
 suggested
 9. Space : FNIRT; Atlas: Conte69 Map LEFT midthickness (164k_LR nodes)
 10. idem 8 and 9, but with RIGHT
 11. renamed by data files (L and R)
 12. Mapping algorith: METRIC_ENCLOSING_VOXEL
 13. ok (or next), heard a bip like there was an error, but the summary 
 shows
 14. Close
 15. Open spec file  
 Human.Conte69.midthickness_FNIRT_fMRI-MAPPER.LEFT.164k_fs_LR.spec
 16. check my metric file (please see joint image)  load
 17. Got this error message
 
 Error 
 PDHCEM_Conte69_LEFT_midthickness_164k_fs_LR_5_May_2014_11_13_13.metric: 
 Error 
 PDHCEM_Conte69_LEFT_midthickness_164k_fs_LR_5_May_2014_11_13_13.metric: 
 contains a different number of nodes than 
 Conte69.L.midthickness.164k_fs_LR.coord.gii
 
 Pressed ok
 
 18. The brain image is displayed, but without

Re: [caret-users] mapping ALE on surface: error for number of nodes

2014-05-07 Thread Donna Dierker
We don't have a version of the Conte69 midthickness in talairach space, but we 
do have a PALS version that is in TT space.  Use the AFNI PALS surface.

But this would not cause the mesh error (wrong number of vertices) issue that 
you were having.  If you tried to view a 74k PALS mesh metric on a Conte69 164k 
mesh surface, then you'd get a problem like the one you had.





 From: Audrey-Anne Dubé audreyanned...@gmail.com
To: Caret, SureFit, and SuMS software users caret-users@brainvis.wustl.edu 
Sent: Wednesday, May 7, 2014 1:20 PM
Subject: Re: [caret-users] mapping ALE on surface: error for number of nodes
 


Hi,

My volume is in Tal space. Is this a problem?



Audrey-Anne Dubé
Candidate au PhD R/I neuropsychologie
Université de Montréal



On Wed, May 7, 2014 at 11:30 AM, Donna Dierker do...@brainvis.wustl.edu wrote:

Another thought occurred to me:  If you select a 32k version of the Conte69 
atlas as your spec file, then you'll get an error when you try to open the 164k 
vertex metric file.  I'm sure there is a way to downsample the metric file, and 
there are also mapping features in Workbench's wb_command, if you want to map 
directly to 32k mesh surfaces.  But we don't have a 32k version of Conte69 in 
caret5.



On May 7, 2014, at 9:58 AM, Donna Dierker do...@brainvis.wustl.edu wrote:

 Hi Audrey-Anne,

 Erin and I tried to do that, but you can't see the menu picks very clearly 
 in the video:

 https://www.youtube.com/watch?v=jED8sg9szdU

 But here are the steps:

 Download conte69 atlas:

 http://sumsdb.wustl.edu/sums/archivelist.do?archive_id=8293720&archive_name=Conte69_atlas_v2.LR.164k_fs_LR.c5.zip

 Unpack and cd to the Conte69_164k_fs_LR.c5 directory.

 Launch caret5.

 Cancel when spec file dialog comes up.

 Attributes: Map volume to surface

 Data mapping type: Metric; Next

 Add volumes from disk; select volume in MNI space; next

 Map to spec file with atlas
       Note: If you don't see this, then your caret installation is incomplete
 or confused about its parent directory.  Launch from command line.

 Output spec file: select Conte69_atlas-v2.L.164k_fs_LR.c5.spec
 Space: FNIRT
 Atlas: Conte69 Map LEFT midthickness...

 Next

 Accept default metric filename

 Mapping Algorithm:  Enclosing and interpolated are the most popular

 Next

 Close when Summary appears

 File: Open Spec File: Conte69_atlas-v2.L.164k_fs_LR.c5.spec

 Load scenes

 Double-click first scene (Conte69 midthickness and inflated...)

 Toolbar: Spec: Metric: map_data_0_Conte69...
       Erase all existing columns

 Toolbar: D/C: Page Selection: Overlay/underlay Surface
       Primary overlay
               Data type metric metric

 You should see it.

 Donna


 On May 6, 2014, at 7:36 PM, Audrey-Anne Dubé audreyanned...@gmail.com 
 wrote:

 Hi Donna,

 As you suggested, I tried with the Conte69 atlas (164k and 74k), but same 
 results. The metric file is recorded in the spec file, but the same error 
 shows about different  number of nodes.

 Thanks for paying attention to my problem.

 Would it be possible for you to make an online demonstration, with a screen 
 sharing or a screen cast tool? In this way, we may find a way to solve the 
 issue faster.




 Audrey-Anne Dubé
 Candidate au PhD R/I neuropsychologie
 Université de Montréal


 On Tue, May 6, 2014 at 9:50 AM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 Audrey,

 I will look at this more closely later, but this is the step where you 
 should be downloading the Conte69 atlas and using one of its visualization 
 specs instead of this spec out of the distribution directory:

 8. select in  fmri_mapping_files  
 Human.Conte69.midthickness_FNIRT_fMRI-MAPPER.LEFT.164k_fs_LR.spec   as you 
 suggested

 The Conte69 atlas is here:

 http://brainvis.wustl.edu/wiki/index.php/Caret:Atlases:Conte69_Atlas

 I wonder if your writing to the fmri_mapping_files that are intended to be 
 read-only might cause trouble.

 Also, I would not expect to find a SPHERE or CMW configuration in that 
 fmri_mapping_files spec.

 Will look more at this later, but here are some hints as to what looks 
 amiss.

 Donna


 On May 5, 2014, at 10:28 AM, Audrey-Anne Dubé audreyanned...@gmail.com 
 wrote:

 Here the attached image

 Audrey-Anne Dubé
 Candidate au PhD R/I neuropsychologie
 Université de Montréal


 On Mon, May 5, 2014 at 11:23 AM, Audrey-Anne Dubé 
 audreyanned...@gmail.com wrote:
 Dear Donna,

 It is not a read-only issue since the spec files are modified when I do 
 the mapping. Here is exactly what I did:

 1. open CARET v5.65
 2. Attributes  Map volume(s) to surface(s)
 3. chose Metric for Data mapping type
 4. Enable entry of volume threshold
 5. Add volumes from disk -- selected my ALE map, which is a nifti image, 
 in Talairach format
 6. Entered the volume thresholding in the pop-up window (positive: 0.028; 
 negative: 0.00)
 7. Map to spec file with Atlas
 8. select in  fmri_mapping_files  
 Human.Conte69.midthickness_FNIRT_fMRI

Re: [caret-users] mapping ALE on surface: error for number of nodes

2014-05-06 Thread Donna Dierker
Audrey,

I will look at this more closely later, but this is the step where you should 
be downloading the Conte69 atlas and using one of its visualization specs 
instead of this spec out of the distribution directory:

 8. select in  fmri_mapping_files  
 Human.Conte69.midthickness_FNIRT_fMRI-MAPPER.LEFT.164k_fs_LR.spec   as you 
 suggested

The Conte69 atlas is here:

http://brainvis.wustl.edu/wiki/index.php/Caret:Atlases:Conte69_Atlas

I wonder if your writing to the fmri_mapping_files that are intended to be 
read-only might cause trouble.

Also, I would not expect to find a SPHERE or CMW configuration in that 
fmri_mapping_files spec.

Will look more at this later, but here are some hints as to what looks amiss.

Donna


On May 5, 2014, at 10:28 AM, Audrey-Anne Dubé audreyanned...@gmail.com wrote:

 Here is the attached image
 
 Audrey-Anne Dubé
 Candidate au PhD R/I neuropsychologie
 Université de Montréal
 
 
 On Mon, May 5, 2014 at 11:23 AM, Audrey-Anne Dubé audreyanned...@gmail.com 
 wrote:
 Dear Donna,
 
 It is not a read-only issue since the spec files are modified when I do the 
 mapping. Here is exactly what I did:
 
 1. open CARET v5.65
 2. Attributes  Map volume(s) to surface(s)
 3. chose Metric for Data mapping type
 4. Enable entry of volume threshold
 5. Add volumes from disk -- selected my ALE map, which is a nifti image, in 
 Talairach format
 6. Entered the volume thresholding in the pop-up window (positive: 0.028; 
 negative: 0.00)
 7. Map to spec file with Atlas
 8. select in  fmri_mapping_files  
 Human.Conte69.midthickness_FNIRT_fMRI-MAPPER.LEFT.164k_fs_LR.spec   as you 
 suggested
 9. Space : FNIRT; Atlas: Conte69 Map LEFT midthickness (164k_LR nodes)
 10. idem 8 and 9, but with RIGHT
 11. renamed by data files (L and R)
 12. Mapping algorithm: METRIC_ENCLOSING_VOXEL
 13. OK (or Next); heard a beep like there was an error, but the summary showed
 14. Close
 15. Open spec file  
 Human.Conte69.midthickness_FNIRT_fMRI-MAPPER.LEFT.164k_fs_LR.spec
 16. check my metric file (please see joint image)  load
 17. Got this error message
 
 Error PDHCEM_Conte69_LEFT_midthickness_164k_fs_LR_5_May_2014_11_13_13.metric: 
 Error PDHCEM_Conte69_LEFT_midthickness_164k_fs_LR_5_May_2014_11_13_13.metric: 
 contains a different number of nodes than 
 Conte69.L.midthickness.164k_fs_LR.coord.gii
 
 Pressed ok
 
 18. The brain image is displayed, but without my data on it
 
 Can you see if I did something wrong?
 
 Thanks very much
 
 
 Audrey-Anne Dubé
 Candidate au PhD R/I neuropsychologie
 Université de Montréal
 
 
 On Wed, Apr 30, 2014 at 3:22 PM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 Hi Audrey,
 
 Hmmm.  I wonder if this might be because the spec file you name below is part 
 of the Caret distribution, and it might be read-only.  In fact, it's a good 
 idea for it to be read-only.  The files under data_files are intended to be 
 used by Caret without risk of users writing their analysis files there.  Is 
 this spec file under $CARET_HOME/data_files or has it been copied from there? 
  And is the directory and spec file writeable?
 
 There are better choices for other visualization specs, e.g., like those in 
 this tutorial spec:
 
 http://brainmap.wustl.edu/pub/donna/ATLASES/HUMAN/CARET_TUTORIAL_SEPT06.zip
 login pub
 password download
 
 Also, depending on the constraints of your meta-analysis, you might consider 
 moving from the PALS atlas to the Conte69 atlas:
 
 http://brainvis.wustl.edu/wiki/index.php/Caret:Atlases:Conte69_Atlas
 
 
 Donna
 
 
 On Apr 30, 2014, at 1:19 PM, Audrey-Anne Dubé audreyanned...@gmail.com 
 wrote:
 
  Dear Caret users,
 
  I want to map my meta-analysis results on an PALS surface. So I tried to 
  map my ALE map (nifti format) to a surface atlas using this spec file
  Human.PALS_B12.LR.MULTI-FIDUCIAL_711-2C_fMRI-MAPPER.B1-12.RIGHT.align2.73730.spec
 
  I got this error message:
  Error PDHCEM_Mapped_to_PALS.RIGHT.73730.metric: Error 
  PDHCEM_Mapped_to_PALS.RIGHT.73730.metric: contains a different number of 
  nodes than 
  Human_Buck_Case12.R.M.RegToPALS_B12.LR.FIDUCIAL_711-2C.align2.73730.coord
 
  I cannot figure how to resolve this problem. Help would be greatly 
  appreciated!
 
  Thank!
 
  Audrey-Anne Dubé
  Candidate au PhD R/I neuropsychologie
  Université de Montréal
  ___
  caret-users mailing list
  caret-users@brainvis.wustl.edu
  http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 Screen Shot 2014-05-05 at 11.18.18.jpg
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret

Re: [caret-users] mapping ALE on surface: error for number of nodes

2014-04-30 Thread Donna Dierker
Hi Audrey,

Hmmm.  I wonder if this might be because the spec file you name below is part 
of the Caret distribution, and it might be read-only.  In fact, it's a good 
idea for it to be read-only.  The files under data_files are intended to be 
used by Caret without risk of users writing their analysis files there.  Is 
this spec file under $CARET_HOME/data_files or has it been copied from there?  
And is the directory and spec file writeable?

There are better choices for other visualization specs, e.g., like those in 
this tutorial spec:

http://brainmap.wustl.edu/pub/donna/ATLASES/HUMAN/CARET_TUTORIAL_SEPT06.zip
login pub
password download

Also, depending on the constraints of your meta-analysis, you might consider 
moving from the PALS atlas to the Conte69 atlas:

http://brainvis.wustl.edu/wiki/index.php/Caret:Atlases:Conte69_Atlas


Donna


On Apr 30, 2014, at 1:19 PM, Audrey-Anne Dubé audreyanned...@gmail.com wrote:

 Dear Caret users,
 
 I want to map my meta-analysis results on an PALS surface. So I tried to map 
 my ALE map (nifti format) to a surface atlas using this spec file
 Human.PALS_B12.LR.MULTI-FIDUCIAL_711-2C_fMRI-MAPPER.B1-12.RIGHT.align2.73730.spec
 
 I got this error message:
 Error PDHCEM_Mapped_to_PALS.RIGHT.73730.metric: Error 
 PDHCEM_Mapped_to_PALS.RIGHT.73730.metric: contains a different number of 
 nodes than 
 Human_Buck_Case12.R.M.RegToPALS_B12.LR.FIDUCIAL_711-2C.align2.73730.coord
 
 I cannot figure how to resolve this problem. Help would be greatly 
 appreciated!
 
 Thank!
 
 Audrey-Anne Dubé
 Candidate au PhD R/I neuropsychologie
 Université de Montréal
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] how to specify palette using caret_command?

2014-04-08 Thread Donna Dierker
Hi Daniel,

The palette isn't used in the mapping step (caret_command 
-volume-map-to-surface or  -volume-map-to-surface-pals).  Rather, it is used 
when visualizing the results of mapping (output metric), either interactively 
in caret5 or using a ready-made scene or caret_command -show-scene.  The 
palette can be set on the Metric Settings menu.  There is a drop-down menu, or 
you can make your own palette 
(http://brainvis.wustl.edu/CaretHelpAccount/caret5_help/file_formats/file_formats.html#paletteFile).

Donna

On Apr 8, 2014, at 11:12 AM, Yang, Daniel yung-jui.y...@yale.edu wrote:

 Hi Caret Experts and Users,
 
 I want to change the palette in mapping volume to surface pals. Do you know 
 how I can do that using caret_command?
 
 Thanks!
 Daniel
 
 -- 
 Daniel (Yung-Jui) Yang, Ph.D.
 Postdoctoral Researcher
 Yale Child Study Center
 New Haven, CT
 Tel: (203) 737-5454
 E-mail: yung-jui.y...@yale.edu
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] p-value color bar in the surface mapping?

2014-04-08 Thread Donna Dierker
Daniel,

The only ways I know to do are:

* If your volume is already p-values, map it, but then use Attributes: Metric 
to compute q=1-p and threshold q.
* If your volume is t, f, etc. then using some external software/table compute 
the t/f corresponding to p=.05, and threshold at that level.

Toggle on display color bar, and you will need to photoshop in p<.05 in lieu of 
the q/t/f.
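
If you would rather script the q=1-p step, a rough sketch (this assumes
wb_command and a GIFTI metric of p-values; the filenames are placeholders):

  # q = 1 - p, so thresholding q at 0.95 corresponds to p<.05
  wb_command -metric-math '1 - p' qvals.func.gii -var p pvals.func.gii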

Donna


On Apr 8, 2014, at 2:28 PM, Yang, Daniel yung-jui.y...@yale.edu wrote:

 Hi Donna,
 
 Thank you so much! That helps a lot! By the way, do you know how I can add a 
 p-value bar in the surface mapping with metric?
 
 I want to show something like p < .05 ~ .0001 for example.
 
 Thanks!
 Daniel
 
 -- 
 Daniel (Yung-Jui) Yang, Ph.D.
 Postdoctoral Researcher
 Yale Child Study Center
 New Haven, CT
 Tel: (203) 737-5454
 E-mail: yung-jui.y...@yale.edu
 
 On 4/8/14 2:19 PM, Donna Dierker do...@brainvis.wustl.edu wrote:
 
 Hi Daniel,
 
 The palette isn't used in the mapping step (caret_command 
 -volume-map-to-surface or  -volume-map-to-surface-pals).  Rather, it is used 
 when visualizing the results of mapping (output metric), either interactively 
 in caret5 or using a ready-made scene or caret_command -show-scene.  The 
 palette can be set on the Metric Settings menu.  There is a drop-down menu, 
 or you can make your own palette 
 (http://brainvis.wustl.edu/CaretHelpAccount/caret5_help/file_formats/file_formats.html#paletteFile).
 
 Donna
 
 On Apr 8, 2014, at 11:12 AM, Yang, Daniel yung-jui.y...@yale.edu wrote:
 
 Hi Caret Experts and Users,
 I want to change the palette in mapping volume to surface pals. Do you know 
 how I can do that using caret_command?
 Thanks!
 Daniel
 --
 Daniel (Yung-Jui) Yang, Ph.D.
 Postdoctoral Researcher
 Yale Child Study Center
 New Haven, CT
 Tel: (203) 737-5454
 E-mail: yung-jui.y...@yale.edu
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] mapping fMRI data to surface

2014-03-31 Thread Donna Dierker
I definitely would NOT write to the caret/data_files directory.  You can copy 
from it, but messing with those files is not a good idea, even if the 
permissions allow you to do so.  Caret uses files in that directory to do its 
job, so if you modify them, things will break.

Under Attributes: Metric there are many smoothing options, but caret_command 
has this feature:

  caret_command -metric-smoothing  
 coordinate-file-name
 topology-file-name
 input-metric-file-name
 output-metric-file-name
 smoothing-algorithm
 smoothing-number-of-iterations
 smoothing-strength
  
 [-geo-gauss sigma] 
  
 [-fwhm  desired-full-width-half-maximum] 
  
 [-gauss   spherical-coordinate-file-name
   sigma-norm
   sigma-tang
   norm below cutoff (mm)
   norm above cutoff (mm)
   tang-cutoff (mm)]
 
 [-parallel]
 
 Smooth metric data.
 
 smoothing-algorithm is one of:
AN  Average Neighbors
DILATE  Dilation
FWHMFull-Width Half-Maximum
GAUSS   Gaussian, requires -gauss
GEOGAUSS   Geodesic Gaussian, uses -geo-gauss, default 2.0
WAN Weighted Average Neighbors
 
NOTE: Geodesic Gaussian IGNORES the strength parameter,
   amount of smoothing is controlled solely by sigma and
   iterations.  The intent is to do one iteration of
   smoothing, with the sigma specifying how much smoother
   the metric is desired to be.

I suspect wb_command has even better smoothing (seem to recall Tim C saying 
something along those lines, but I can't recall the details).  This command 
line utility is part of Caret Workbench and needs a different format -- GIFTI 
or CIFTI.  You can convert caret5 metric to GIFTI by doing:

caret_command -file-convert -format-convert XML_BASE64_GZIP my_file.metric

At least I think that is what Tim C. told me. ;-)
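
For what it's worth, a concrete example of each (untested, placeholder
filenames -- the coord/topo/surf files are whatever midthickness you mapped
onto):

  # caret5: one pass of geodesic Gaussian with sigma 4 mm
  # (the strength argument is ignored for GEOGAUSS)
  caret_command -metric-smoothing my.midthickness.coord my.topo \
     my_file.metric my_file_smooth.metric GEOGAUSS 1 1.0 -geo-gauss 4.0

  # workbench, once the metric and surface are in GIFTI format; the third
  # argument is the smoothing kernel in mm (sigma, if I remember right)
  wb_command -metric-smoothing my.midthickness.surf.gii my_file.func.gii 4 \
     my_file_smooth.func.gii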


On Mar 31, 2014, at 10:38 AM, Cheng, Hu huch...@indiana.edu wrote:

 Hi Donna,
 
 I loaded the atlas first, then I was able to see the mapping using Map to 
 Caret. In the software it shows the metric file is added to the spec file if 
 using Map to Spec File With Atlas, maybe it's forbidden to add file to 
 caret/data_files/standard_mesh_atlases/Human.PALS.LEFT?
 Now I have the metric file, could you tell me how to smooth it or is there 
 any way to convert it back to freesurfer which I know how to do smoothing?
 Many thanks!
 
 Regards,
 
 Hu
 
 
 -Original Message-
 From: caret-users-boun...@brainvis.wustl.edu 
 [mailto:caret-users-boun...@brainvis.wustl.edu] On Behalf Of Donna Dierker
 Sent: Monday, March 31, 2014 11:07 AM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] mapping fMRI data to surface
 
 Hi Hu,
 
 No, there's no log file, but if you check Debug Enabled (File: Preferences), 
 all sorts of stuff will scroll to the terminal window from which Caret was 
 launched.
 
 I assume you have made sure your working directory is writable by the user 
 running Caret.  File: Set Working Directory can show you what Caret thinks 
 the working directory is.
 
 Donna
 
 
 On Mar 31, 2014, at 9:54 AM, Cheng, Hu huch...@indiana.edu wrote:
 
 Hi Donna,
 
 Thank you! There is no non-English character set on the mac. I just wonder 
 if there is a log file so that I can see what's wrong.
 
 Regards,
 
 Hu
 
 
 -Original Message-
 From: caret-users-boun...@brainvis.wustl.edu 
 [mailto:caret-users-boun...@brainvis.wustl.edu] On Behalf Of Donna Dierker
 Sent: Friday, March 28, 2014 5:07 PM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] mapping fMRI data to surface
 
 Do you have a non-English character set installed on this computer?  We have 
 had issues with writing metric files when a non-English character set was 
 installed, but the error I recall was worded slightly different.
 
 
 On Mar 28, 2014, at 1:22 PM, Cheng, Hu wrote:
 
 Dear Caret users,
 
 I'm trying to map fMRI data to surface template and do surface based 
 smoothing. The fMRI data I used were already normalized in spm8. I ran 
 attribute Map Volume(s) to Surface(s) according to 
 http://brainvis.wustl.edu/wiki/index.php/Caret:Operations/MapVolumeToSurface,
  I selected Map to Spec File With Atlas using  
 Human.PALS_B12.B1-12.DEPTH_ANALYSES_LEFT.73730 SPEC file and chose SPM5 
 space. The program ran smoothly at the beginning but an error popped out in 
 the end showing :Unable to save metric file  This was done in Windows. 
 I repeated the same process on Mac; the program seemed to terminate 
 without any error except a sound, but there is no metric file created in the 
 working directory.
 Thanks for your help.
 
 Regards,
 
 Hu
 
 
 
 
 ___
 caret

Re: [caret-users] mapping fMRI data to surface

2014-03-31 Thread Donna Dierker
It depends on how you map.  See page 25 of this tutorial:

http://brainvis.wustl.edu/wiki_linked_files/documentation/Caret_Tutorial_Sep22.pdf
In AFM, each node is assigned the value of the voxel in which it resides (or 
an interpolated value, depending on the specific algorithm chosen). In MFM, 
each node takes the average value after mapping the volume to each of the 12 
contributing hemispheres. MFM gives a smoother map and the best estimate of 
spatial localization; AFM gives the most likely peak value.

While the normalization method does affect the smoothness, the AFM vs MFM 
choice swamps those differences.


On Mar 31, 2014, at 12:43 PM, Cheng, Hu huch...@indiana.edu wrote:

 Thank you! It worked very well.
 One more question, is there any process of smoothing in the mapping from 
 volume to surface? The resulting surface map already has a FWHM of 14 mm 
 (maybe it's due to the normalization in SPM).
 
 Regards,
 
 Hu
 
 
 -Original Message-
 From: caret-users-boun...@brainvis.wustl.edu 
 [mailto:caret-users-boun...@brainvis.wustl.edu] On Behalf Of Donna Dierker
 Sent: Monday, March 31, 2014 12:18 PM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] mapping fMRI data to surface
 
 I definitely would NOT write to the caret/data_files directory.  You can copy 
 from it, but messing with those files is not a good idea, even if the 
 permissions allow you to do so.  Caret uses files in that directory to do its 
 job, so if you modify them, things will break.
 
 Under Attributes: Metric there are many smoothing options, but caret_command 
 has this feature:
 
  caret_command -metric-smoothing  
 coordinate-file-name
 topology-file-name
 input-metric-file-name
 output-metric-file-name
 smoothing-algorithm
 smoothing-number-of-iterations
 smoothing-strength
 
 [-geo-gauss sigma] 
 
 [-fwhm  desired-full-width-half-maximum] 
 
 [-gauss   spherical-coordinate-file-name
   sigma-norm
   sigma-tang
   norm below cutoff (mm)
   norm above cutoff (mm)
   tang-cutoff (mm)]
 
 [-parallel]
 
 Smooth metric data.
 
 smoothing-algorithm is one of:
AN  Average Neighbors
DILATE  Dilation
FWHMFull-Width Half-Maximum
GAUSS   Gaussian, requires -gauss
GEOGAUSS   Geodesic Gaussian, uses -geo-gauss, default 2.0
WAN Weighted Average Neighbors
 
NOTE: Geodesic Gaussian IGNORES the strength parameter,
   amount of smoothing is controlled solely by sigma and
   iterations.  The intent is to do one iteration of
   smoothing, with the sigma specifying how much smoother
   the metric is desired to be.
 
 I suspect wb_command has even better smoothing (seem to recall Tim C saying 
 something along those lines, but I can't recall the details).  This command 
 line utility is part of Caret Workbench and needs a different format -- GIFTI 
 or CIFTI.  You can convert caret5 metric to GIFTI by doing:
 
 caret_command -file-convert -format-convert XML_BASE64_GZIP my_file.metric
 
 At least I think that is what Tim C. told me. ;-)
 
 
 On Mar 31, 2014, at 10:38 AM, Cheng, Hu huch...@indiana.edu wrote:
 
 Hi Donna,
 
 I loaded the atlas first, then I was able to see the mapping using Map to 
 Caret. In the software it shows the metric file is added to the spec file 
 if using Map to Spec File With Atlas, maybe it's forbidden to add file to 
 caret/data_files/standard_mesh_atlases/Human.PALS.LEFT?
 Now I have the metric file, could you tell me how to smooth it or is there 
 any way to convert it back to freesurfer which I know how to do smoothing?
 Many thanks!
 
 Regards,
 
 Hu
 
 
 -Original Message-
 From: caret-users-boun...@brainvis.wustl.edu 
 [mailto:caret-users-boun...@brainvis.wustl.edu] On Behalf Of Donna Dierker
 Sent: Monday, March 31, 2014 11:07 AM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] mapping fMRI data to surface
 
 Hi Hu,
 
 No, there's no log file, but if you check Debug Enabled (File: Preferences), 
 all sorts of stuff will scroll to the terminal window from which Caret was 
 launched.
 
 I assume you have made sure your working directory is writable by the user 
 running Caret.  File: Set Working Directory can show you what Caret thinks 
 the working directory is.
 
 Donna
 
 
 On Mar 31, 2014, at 9:54 AM, Cheng, Hu huch...@indiana.edu wrote:
 
 Hi Donna,
 
 Thank you! There is no non-English character set on the mac. I just wonder 
 if there is a log file so that I can see what's wrong.
 
 Regards,
 
 Hu
 
 
 -Original Message-
 From: caret-users-boun...@brainvis.wustl.edu 
 [mailto:caret-users-boun...@brainvis.wustl.edu] On Behalf Of Donna Dierker
 Sent: Friday, March 28, 2014 5:07 PM

Re: [caret-users] mapping fMRI data to surface

2014-03-28 Thread Donna Dierker
Do you have a non-English character set installed on this computer?  We have 
had issues with writing metric files when a non-English character set was 
installed, but the error I recall was worded slightly different.


On Mar 28, 2014, at 1:22 PM, Cheng, Hu wrote:

 Dear Caret users,
 
 I'm trying to map fMRI data to surface template and do surface based 
 smoothing. The fMRI data I used were already normalized in spm8. I ran 
 attribute Map Volume(s) to Surface(s) according to 
 http://brainvis.wustl.edu/wiki/index.php/Caret:Operations/MapVolumeToSurface, 
 I selected Map to Spec File With Atlas using  
 Human.PALS_B12.B1-12.DEPTH_ANALYSES_LEFT.73730 SPEC file and chose SPM5 
 space. The program ran smoothly at the beginning but an error popped out in 
 the end showing :Unable to save metric file  This was done in Windows. I 
 repeated the same process on Mac; the program seemed to terminate without 
 any error except a sound, but there is no metric file created in the working 
 directory.
 Thanks for your help.
 
 Regards,
 
 Hu
 
 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] volume data corresponding to the MNI surface data

2014-03-21 Thread Donna Dierker
Hi Zhuangming,

This option can be handy, but in this context, it's important to keep your 
expectations modest, and be aware of the limitations, because the odds of the 
resulting ROI/label/paint volume aligning nicely with whatever you want it to 
align with (a surface or another anatomical/functional volume) are not as high as I'd like 
them to be.  And there is a good chance there are surface-based methods to do 
your analysis that might be more accurate.

If you still want to project surface-to-volume, then download the template from 
MNI using the link below, and select mni_icbm152_t1_tal_nlin_sym_09a.nii using 
the feature that lets you select the stereotaxic space for your projection by 
selecting a volume.  The capture I sent earlier shows decent alignment between 
this volume and the MNI surface.

If there is an anatomical volume that aligns better with data you want to use 
with the resulting ROI volume, then you could also use that volume's 
stereotaxic space, but first make sure the MNI surface aligns well with it 
using the D/C: Volume outline/contours feature (whichever is the last tab on the 
drop-down menu when you have a volume loaded).
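
If your paint column is (or can be converted to) a GIFTI label file, Workbench
also has a command-line counterpart to this -- roughly (placeholder filenames;
double-check the options against wb_command's own help):

  wb_command -label-to-volume-mapping \
     my_rois.L.label.gii \
     my.L.midthickness.surf.gii \
     mni_icbm152_t1_tal_nlin_sym_09a.nii \
     my_rois.L.nii.gz \
     -nearest-vertex 2

The third argument just defines the output volume space, so the same alignment
caveats above still apply.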

Donna


On Mar 20, 2014, at 8:54 PM, 沈庄明 zhms...@ion.ac.cn wrote:

 Hi Donna,
 
 Thanks for your response. I have paint-type data in MNI space. The data 
 corresponds to the surface with 40962 nodes. Actually, I want to convert the 
 surface-based paint data to volume-based paint data. I found 
 Caret could do the task by Copy Surface Paint Column to Paint Volume, but I 
 need to select the volume space. So I hope you could give me some advice on 
 the selection. 
  Cheers,
 
 Zhuangming Shen
 
 
  -原始邮件-
  发件人: Donna Dierker do...@brainvis.wustl.edu
  发送时间: 2014年3月20日 星期四
  收件人: Caret, SureFit, and SuMS software users 
  caret-users@brainvis.wustl.edu
  抄送: 
  主题: Re: [caret-users] volume data corresponding to the MNI surface data
  
  This is a tough question that suggests that perhaps you are using 
  MNI_surf_reg.RIGHT.40962.coord as a mapping substrate.  It is possible the 
  MNI/CIVET folks have a recommended method for doing this, but I'm afraid I 
  don't know that it exists, much less be able to point you to it.  (Someone 
  else here might, though.)
  
  But if you're looking for something like that surface, in terms of the 
  tighter sulcal/gyral alignment and a larger pool of subjects, then I would 
  point you to Conte69 
  (http://brainvis.wustl.edu/wiki/index.php/Caret:Atlases:Conte69_Atlas), 
  simply because then I can say FNIRT is the right choice, because that is 
  the spatial normalization method used to get those subjects into avg152T1 
  space.
  
  Could you provide more context for how you plan to use this surface?
  
  
  On Mar 20, 2014, at 1:36 AM, 沈庄明 zhms...@ion.ac.cn wrote:
  
   Hi Donna,
   
  Thanks for your prompt reply. By the way, which space (AFNI, FLIRT, 
   FLIRT-222, FNIRT, FNIRT-222, MRITOTAL, SPM, SPM95, SPM96, SPM2, SPM5, 
   T88, 711-2B, 711-2C, 711-2O, 711-2Y) should I choose for 
   mni_icbm152_t1_tal_nlin_sym_09a.nii ? Thanks again!
   
  Cheers,
   
   Zhuangming
   
   
   -原始邮件-
   发件人: Donna Dierker do...@brainvis.wustl.edu
   发送时间: 2014年3月20日 星期四
   收件人: Caret, SureFit, and SuMS software users 
   caret-users@brainvis.wustl.edu
   抄送: 
   主题: Re: [caret-users] volume data corresponding to the MNI surface data
   
   Hi Zhuangming,
   
   The mni_icbm152_t1_tal_nlin_sym_09a.nii file in this MNI template aligns 
   pretty well with the MNI surface:
   
   http://www.bic.mni.mcgill.ca/~vfonov/icbm/2009/mni_icbm152_nlin_sym_09a_nifti.zip
   
   See attached capture.  The underlay is the above T1 volume.  Two surface 
   contours are overlaid:
   
   blue - MNI_surf_reg.RIGHT.40962.coord
   red - 
   PALS_Human.PALS.RIGHT_AVG_B1-12.FIDUCIAL_FLIRT.REG-with-MNI_surf_reg.40962.coord
   
   The MNI surface was generated on the CIVET mesh from a population of 152 
   subjects.  The PALS surface was generated on the 74k_palsb12 mesh from a 
   population of 12 subjects.  Not surprisingly, the MNI surface aligns 
   better with the MNI T1 template.  There is a mean volume for the PALS 
   surface, but I'm guessing you are more interested in the MNI template.
   
   This spec file has both the MNI and PALS mean midthickness surfaces, 
   because it shows the results of registering one atlas to the other.
   
   Donna
   mni_surf_on_icbm152_nlin_sym_09a.png
   
   On Mar 19, 2014, at 2:09 AM, 沈庄明 zhms...@ion.ac.cn wrote:
   
   Hi everyone,
   I found a human surface data in MNI space 
   (MNI_surf_reg.BOTH.REG-with-PLAS.40962.spec) at 
   http://sumsdb.wustl.edu/sums/directory.do?id=6656001dir_name=MNI_surf_reg.
Is the volume data (i.e. the T1-weighted image) corresponding to the 
   human surface data available? Thanks!
   Cheers,
   
   Zhuangming Shen 
   
   
   
   
   
   
   ___
   caret-users mailing list
   caret

Re: [caret-users] volume data corresponding to the MNI surface data

2014-03-20 Thread Donna Dierker
This is a tough question that suggests that perhaps you are using 
MNI_surf_reg.RIGHT.40962.coord as a mapping substrate.  It is possible the 
MNI/CIVET folks have a recommended method for doing this, but I'm afraid I 
don't know that it exists, much less be able to point you to it.  (Someone else 
here might, though.)

But if you're looking for something like that surface, in terms of the tighter 
sulcal/gyral alignment and a larger pool of subjects, then I would point you to 
Conte69 (http://brainvis.wustl.edu/wiki/index.php/Caret:Atlases:Conte69_Atlas), 
simply because then I can say FNIRT is the right choice, because that is the 
spatial normalization method used to get those subjects into avg152T1 space.

Could you provide more context for how you plan to use this surface?


On Mar 20, 2014, at 1:36 AM, 沈庄明 zhms...@ion.ac.cn wrote:

 Hi Donna,
 
Thanks for your prompt reply. By the way, which space (AFNI, FLIRT, 
 FLIRT-222, FNIRT, FNIRT-222, MRITOTAL, SPM, SPM95, SPM96, SPM2, SPM5, T88, 
 711-2B, 711-2C, 711-2O, 711-2Y) should I choose for 
 mni_icbm152_t1_tal_nlin_sym_09a.nii ? Thanks again!
 
Cheers,
 
 Zhuangming
 
 
 -原始邮件-
 发件人: Donna Dierker do...@brainvis.wustl.edu
 发送时间: 2014年3月20日 星期四
 收件人: Caret, SureFit, and SuMS software users 
 caret-users@brainvis.wustl.edu
 抄送: 
 主题: Re: [caret-users] volume data corresponding to the MNI surface data
 
 Hi Zhuangming,
 
 The mni_icbm152_t1_tal_nlin_sym_09a.nii file in this MNI template aligns 
 pretty well with the MNI surface:
 
 http://www.bic.mni.mcgill.ca/~vfonov/icbm/2009/mni_icbm152_nlin_sym_09a_nifti.zip
 
 See attached capture.  The underlay is the above T1 volume.  Two surface 
 contours are overlaid:
 
 blue - MNI_surf_reg.RIGHT.40962.coord
 red - 
 PALS_Human.PALS.RIGHT_AVG_B1-12.FIDUCIAL_FLIRT.REG-with-MNI_surf_reg.40962.coord
 
 The MNI surface was generated on the CIVET mesh from a population of 152 
 subjects.  The PALS surface was generated on the 74k_palsb12 mesh from a 
 population of 12 subjects.  Not surprisingly, the MNI surface aligns better 
 with the MNI T1 template.  There is a mean volume for the PALS surface, but 
 I'm guessing you are more interested in the MNI template.
 
 This spec file has both the MNI and PALS mean midthickness surfaces, because 
 it shows the results of registering one atlas to the other.
 
 Donna
 mni_surf_on_icbm152_nlin_sym_09a.png
 
 On Mar 19, 2014, at 2:09 AM, 沈庄明 zhms...@ion.ac.cn wrote:
 
 Hi everyone,
 I found a human surface data in MNI space 
 (MNI_surf_reg.BOTH.REG-with-PLAS.40962.spec) at 
 http://sumsdb.wustl.edu/sums/directory.do?id=6656001dir_name=MNI_surf_reg. 
 Is the volume data (i.e. the T1-weighted image) corresponding to the human 
 surface data available? Thanks!
 Cheers,
 
 Zhuangming Shen 
 
 
 
 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Distorted COMPMEDWALL

2014-02-18 Thread Donna Dierker
There were two problems in the previous email, one of which appears not to be 
sorted:

* hole in what looked like the medial wall, but which caret thought was ventral 
view
* surface does not appear to be in LPI orientation

Until the surface is in the right orientation, this isn't going to work.  The 
medial wall won't be where caret expects it to be.


On Feb 18, 2014, at 8:26 PM, Ahmad Khan 110ahmadk...@gmail.com wrote:

 Hi Donna,
 
 Thanks for the help. That problem has been sorted out, but now the 
 COMPMEDWALL orientation is like the figure
 given below (front view and back view). Please help!
 
 With Best Regards
 Ahmad
 
 image.pngimage.png
 
  Ahmad Raza Khan
 (Postdoctoral fellow)
 Advanced Imaging Research Center (AIRC),Division of Neuroscience
 Oregon National Primate Research Center (ONPRC)
 Oregon Health and Science University (OSHU),Portland
 Oregon- 97006
 Ph no.503-614-3755
 Mob No.-503-799-7204
 
 
 On Thu, Feb 13, 2014 at 9:02 AM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 Hi Ahmad,
 
 Even if your anatomical volume is in LPI orientation, I don't think your 
 surface is.  Does the segmentation volume used to generate that surface align 
 properly with the anatomical volume (T1/T2)?
 
 The attached capture shows what I see when I set your fiducial surface to 
 medial view (ferret_medial.jpg).  It doesn't look medial to me.
 
 But the source of your compressed medial wall woes is the hole shown in the 
 ventral view (ventral_hole.jpg).  I'm not sure this really is ventral; I 
 suspect it might be the real medial view, but it is closer to what you see 
 when you press the V button on the toolbar.  The fact that the subcortical 
 stuff isn't filled in the segmentation used to generate the surface means 
 your surface is more topologically equivalent to a sheet than a sphere, and 
 Caret needs the latter to do its CMW/registration thing.
 
 So can you fill the subcortical stuff and regenerate the surface, so it's 
 more sphere-like?
 
 Donna
 ferret_medial.jpg
 ventral_hole.jpg
 
 
 
 On Feb 12, 2014, at 4:54 PM, Ahmad Khan 110ahmadk...@gmail.com wrote:
 
 Hi Donna,
  
 I have uploaded the file as zip file name Data_Ferret.
 Thanks for your timely help.
  
 With Regards
 Ahmad
  Ahmad Raza Khan
 (Postdoctoral fellow)
 Advanced Imaging Research Center (AIRC),Division of Neuroscience
 Oregon National Primate Research Center (ONPRC)
 Oregon Health and Science University (OSHU),Portland
 Oregon- 97006
 Ph no.503-614-3755
 Mob No.-503-799-7204
 
 
 On Wed, Feb 12, 2014 at 6:41 AM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 Can you trim your working directory down to a gigabyte zipped and upload it 
 here:
 
 http://pulvinar.wustl.edu/cgi-bin/upload.cgi
 
 The only other thing that comes to mind is that maybe your mesh density is 
 so high that inflation isn't taking out enough folds for the projection to 
 sphere / compressed medial wall thing to happen properly.
 
 Looking at the data seems like the most efficient way to troubleshoot.
 
 
 On Feb 11, 2014, at 6:19 PM, Ahmad Khan 110ahmadk...@gmail.com wrote:
 
  Hi,
 
  I am still getting the same problem and the orientation is
  x axis: increases left to right
  y axis: increases posterior to anterior
  z axis: increases inferior to superior.
  The surface doesn't have any topological defects or other defects.
 
  Please help!
 
 
  image.png
 
   Ahmad Raza Khan
  (Postdoctoral fellow)
  Advanced Imaging Research Center (AIRC),Division of Neuroscience
  Oregon National Primate Research Center (ONPRC)
  Oregon Health and Science University (OSHU),Portland
  Oregon- 97006
  Ph no.503-614-3755
  Mob No.-503-799-7204
 
 
  On Thu, Jan 16, 2014 at 5:42 PM, Donna Dierker 
  donna.dier...@sbcglobal.net wrote:
  When I see something like that, I wonder if your fiducial surface was in 
  the right orientation:
 
  x axis: increases left to right
  y axis: increases posterior to anterior
  z axis: increases inferior to superior
 
 
  On Jan 16, 2014, at 6:45 PM, Ahmad Khan kh...@ohsu.edu wrote:
 
   Hi,
  
   I am wondering why, despite having a good fiducial surface, when I 
   try to make inflated and ellipsoidal surfaces from the fiducial,
   I get this type of COMPMEDWALL surface. Please help me in this regard.
  
  
   Thanks
   Ahmad
  
   image003.png
   ___
   caret-users mailing list
   caret-users@brainvis.wustl.edu
   http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
  ___
  caret-users mailing list
  caret-users@brainvis.wustl.edu
  http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
  ___
  caret-users mailing list
  caret-users@brainvis.wustl.edu
  http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users

Re: [caret-users] Distorted COMPMEDWALL

2014-02-12 Thread Donna Dierker
Can you trim your working directory down to a gigabyte zipped and upload it 
here:

http://pulvinar.wustl.edu/cgi-bin/upload.cgi

The only other thing that comes to mind is that maybe your mesh density is so 
high that inflation isn't taking out enough folds for the projection to sphere 
/ compressed medial wall thing to happen properly.

Looking at the data seems like the most efficient way to troubleshoot.


On Feb 11, 2014, at 6:19 PM, Ahmad Khan 110ahmadk...@gmail.com wrote:

 Hi,
 
 I am still getting the same problem and the orientation is 
 x axis: increases left to right
 y axis: increases posterior to anterior
 z axis: increases inferior to superior. 
 The surface doesn't have any topological defects or other defects.
 
 Please help!
 
 
 image.png
 
  Ahmad Raza Khan
 (Postdoctoral fellow)
 Advanced Imaging Research Center (AIRC),Division of Neuroscience
 Oregon National Primate Research Center (ONPRC)
 Oregon Health and Science University (OSHU),Portland
 Oregon- 97006
 Ph no.503-614-3755
 Mob No.-503-799-7204
 
 
 On Thu, Jan 16, 2014 at 5:42 PM, Donna Dierker donna.dier...@sbcglobal.net 
 wrote:
 When I see something like that, I wonder if your fiducial surface was in the 
 right orientation:
 
 x axis: increases left to right
 y axis: increases posterior to anterior
 z axis: increases inferior to superior
 
 
 On Jan 16, 2014, at 6:45 PM, Ahmad Khan kh...@ohsu.edu wrote:
 
  Hi,
 
  I am wondering why, despite having a good fiducial surface, when I try 
  to make inflated and ellipsoidal surfaces from the fiducial,
  I get this type of COMPMEDWALL surface. Please help me in this regard.
 
 
  Thanks
  Ahmad
 
  image003.png
  ___
  caret-users mailing list
  caret-users@brainvis.wustl.edu
  http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Visualization in Caret

2014-02-08 Thread Donna Dierker
I emailed Eshita off-list yesterday, because the message contained 
pre-publication data.


On Feb 7, 2014, at 4:07 PM, Eshita Shah eshs...@ucla.edu wrote:

 Hi Donna, 
 
 I've uploaded the metric file. I overlaid it using the inflated and very 
 inflated coord files in Conte69 atlas (left hemisphere). 
 
 Thank you, 
 Eshita 
 
 
 On Fri, Feb 7, 2014 at 1:13 PM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 Maybe I am the one who is mistaken, but I thought this is how these columns 
 behaved.  I would be more than happy to look at your *significan*metric if 
 you want to upload it:
 
 http://brainvis.wustl.edu/cgi-bin/upload.cgi
 
 
 Wow, I am jealous of your sample sizes!
 
 If you have only two groups, it is nice to see the polarity of the 
 difference, and now that you have composites (and have slogged through the 
 work of making your JRE work efficiently), it's just a matter of script 
 tweaking to get the t-test going.
 
 
 On Feb 7, 2014, at 2:39 PM, Eshita Shah eshs...@ucla.edu wrote:
 
 Donna,
 
 I may be doing something wrong, but when I change between the P and Q 
 columns in the threshold adjustment section and change the user threshold 
 to 0.95, 0.75, etc. as you suggested, everything remains the same. The 
 cluster sizes are not changing, they are the same as when I put the user 
 threshold to be 0.05. Is there anything else in my settings that may be 
 contributing to this error?
 
 I have 35 controls and 60 treatment subjects. I am looking into running the 
 two-sample t-test instead of anova.
 
 Thanks for your help!
 Eshita
 
 
 On Wed, Feb 5, 2014 at 3:54 PM, Donna Dierker donna.dier...@sbcglobal.net 
 wrote:
 On Feb 5, 2014, at 4:19 PM, Eshita Shah eshs...@ucla.edu wrote:
 
 Hi Donna,
 
 I have tried changing the user threshold in the Metric Settings menu, but 
 nothing seems to change beyond +/- 0.05. There are a few blotches of orange 
 and yellow when it is at 0, and many sub-threshold regions (green) show up 
 when I put it up to 0.05, but nothing else changes as I move beyond 0.05 to 
 higher values (the orange slowly disappears, and it's all green).
 
 At least in Caret5 (less sure about workbench), thresholding won't work 
 properly on the p-value, because thresholding assumes more extreme values -- 
 further from zero -- are the more exceptional ones, whereas the opposite is 
 true with p-values, where the closer to 0, the more rare.  Since q is 1-p it 
 should behave better in caret5 thresholding.  If you threshold at q=.95, you 
 should see less than if you threshold at q=.90.  Like percentiles.
 
 Is the value I'm changing the p or the q value? Or does that depend on what 
 column I have loaded in the Threshold Adjustment section?
 
 I'd display and threshold on both, for now, while you are trying to 
 understand what the data shows.
 
 If I am changing the q value, then does it mean that the regions that are 
 showing up have a p-value greater than 0.95 (since nothing changes after 
 0.05) and thus they're not showing up as significant in my report?
 
 If you threshold at q=.95, you should see vertices colored that have p 
 values of .05 or less, but you know none exist, because nothing survived in 
 your report.  Start at q=0.5.  See some vertices.  Probably lots of them.  
 Then try q=0.75.  You should see the clusters shrink now.  Now 0.90.  
 Anything?
 
 Let me know if I am interpreting this the wrong way.
 
 Also, the coloring somewhat changes depending on the color palette I use. I 
 believe the default is PSYCH, but when I change it to PSYCH-NO-NONE, I see 
 orange blots in more regions than before (and of course what was grey 
 earlier turns dark blue). Why is that? The new orange blots appear in the 
 same positions as the sub-threshold green color does when I change the user 
 threshold to 0.05.
 
 It is how the palette is defined.  There is a region of the color scale that 
 blots out coloring near zero, while the NO-NONE removes that gap.  While my 
 memory fails me as to why, I remember thinking there was something not 
 quite intuitive about the one with the gap. Palettes are a matter of taste 
 to some degree.  Some are better with pos/neg values, while others are 
 better with positive only, which is what you will have with your f-stats.  
 For figures, I don't use p/q-values typically, but rather t- or f-maps.
 
 But for right now, you're doing a post-mortem on your analysis to see how 
 close you were to having differences, so the q-maps will be useful for this 
 purpose.
 
 Lastly, how do I know which group is baseline and treatment? Does TFCE 
 automatically output the control group as the baseline, so the yellow would 
 indicate that the sulci are deeper in the treatment group vs. control? Or 
 the other way around?
 
 You used an ANOVA, which should produce an f-map -- all positive.  There 
 should be no +/- valence to it, unless I'm misunderstanding what you did.
 
 Out of curiosity, how many subjects were in each group?
 
 If you have only two

Re: [caret-users] Errors in Stage-3.FS-to-F99.sh: Expected '' or '/', but got '[0-9]'

2014-02-07 Thread Donna Dierker
I checked, and there was some off-list exchange between me and the poster, but 
it didn't address this error, per se.

The solution for this user was to use mri_convert to reorient rawavg.mgz, which 
was in sphinx orientation, then translate per 
http://brainvis.wustl.edu/pipermail/caret-users/2012-July/005618.html.

It's not at all clear to me how that relates to the scene creation errors 
below, but I'm summarizing the off-list exchange.


On Feb 7, 2014, at 8:23 AM, Caspar M. Schwiedrzik 
cschwie...@mail.rockefeller.edu wrote:

 Hi! 
 
 I am running into some error messages try to align a Freesurfer surface to 
 the F99. 
 The errors arise in the Stage-3.FS-to-F99.sh script and have been described 
 on the list before:
 https://www.mail-archive.com/caret-users@brainvis.wustl.edu/msg02854.html
 
 Specifically, I am getting the following messages:
 
 BrainSet construction error: Expected '' or '/', but got '[0-9]'.
 SURFACE DISTORTION ERROR: unable to find second surface.
 
 and 
 
 SCENE CREATION ERROR: Expected '' or '/', but got '[0-9]'.
 Expected '' or '/', but got '[0-9]'.
 Expected '' or '/', but got '[0-9]'.
 
 Unfortunately, the previous thread did not provide a solution. 
 Is there a known way to get around this? 
 Thanks, Caspar
 
 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Visualization in Caret

2014-02-07 Thread Donna Dierker
Maybe I am the one who is mistaken, but I thought this is how these columns 
behaved.  I would be more than happy to look at your *significan*metric if you 
want to upload it:

 http://brainvis.wustl.edu/cgi-bin/upload.cgi


Wow, I am jealous of your sample sizes!

If you have only two groups, it is nice to see the polarity of the difference, 
and now that you have composites (and have slogged through the work of making 
your JRE work efficiently), it's just a matter of script tweaking to get the 
t-test going.


On Feb 7, 2014, at 2:39 PM, Eshita Shah eshs...@ucla.edu wrote:

 Donna, 
 
 I may be doing something wrong, but when I change between the P and Q columns 
 in the threshold adjustment section and change the user threshold to 0.95, 
 0.75, etc. as you suggested, everything remains the same. The cluster sizes 
 are not changing, they are the same as when I put the user threshold to be 
 0.05. Is there anything else in my settings that may be contributing to this 
 error? 
 
 I have 35 controls and 60 treatment subjects. I am looking into running the 
 two-sample t-test instead of anova.
 
 Thanks for your help!
 Eshita 
 
 
 On Wed, Feb 5, 2014 at 3:54 PM, Donna Dierker donna.dier...@sbcglobal.net 
 wrote:
 On Feb 5, 2014, at 4:19 PM, Eshita Shah eshs...@ucla.edu wrote:
 
  Hi Donna,
 
  I have tried changing the user threshold in the Metric Settings menu, but 
  nothing seems to change beyond +/- 0.05. There are a few blotches of orange 
  and yellow when it is at 0, and many sub-threshold regions (green) show up 
  when I put it up to 0.05, but nothing else changes as I move beyond 0.05 to 
  higher values (the orange slowly disappears, and it's all green).
 
 At least in Caret5 (less sure about workbench), thresholding won't work 
 properly on the p-value, because thresholding assumes more extreme values -- 
 further from zero -- are the more exceptional ones, whereas the opposite is 
 true with p-values, where the closer to 0, the more rare.  Since q is 1-p it 
 should behave better in caret5 thresholding.  If you threshold at q=.95, you 
 should see less than if you threshold at q=.90.  Like percentiles.
 
  Is the value I'm changing the p or the q value? Or does that depend on what 
  column I have loaded in the Threshold Adjustment section?
 
 I'd display and threshold on both, for now, while you are trying to 
 understand what the data shows.
 
  If I am changing the q value, then does it mean that the regions that are 
  showing up have a p-value greater than 0.95 (since nothing changes after 
  0.05) and thus they're not showing up as significant in my report?
 
 If you threshold at q=.95, you should see vertices colored that have p values 
 of .05 or less, but you know none exist, because nothing survived in your 
 report.  Start at q=0.5.  See some vertices.  Probably lots of them.  Then 
 try q=0.75.  You should see the clusters shrink now.  Now 0.90.  Anything?
 
  Let me know if I am interpreting this the wrong way.
 
  Also, the coloring somewhat changes depending on the color palette I use. I 
  believe the default is PSYCH, but when I change it to PSYCH-NO-NONE, I see 
  orange blots in more regions than before (and of course what was grey 
  earlier turns dark blue). Why is that? The new orange blots appear in the 
  same positions as the sub-threshold green color does when I change the user 
  threshold to 0.05.
 
 It is how the palette is defined.  There is a region of the color scale that 
 blots out coloring near zero, while the NO-NONE removes that gap.  While my 
 memory fails me as to why, I remember thinking there was something not quite 
 intuitive about the one with the gap. Palettes are a matter of taste to some 
 degree.  Some are better with pos/neg values, while others are better with 
 positive only, which is what you will have with your f-stats.  For figures, I 
 don't use p/q-values typically, but rather t- or f-maps.
 
 But for right now, you're doing a post-mortem on your analysis to see how 
 close you were to having differences, so the q-maps will be useful for this 
 purpose.
 
  Lastly, how do I know which group is baseline and treatment? Does TFCE 
  automatically output the control group as the baseline, so the yellow would 
  indicate that the sulci are deeper in the treatment group vs. control? Or 
  the other way around?
 
 You used an ANOVA, which should produce an f-map -- all positive.  There 
 should be no +/- valence to it, unless I'm misunderstanding what you did.
 
 Out of curiosity, how many subjects were in each group?
 
 If you have only two groups and want to see where one group is deeper than 
 the other, you can run a t-test instead of an anova.
 
  Thanks for your help,
  Eshita
 
 
 
  On Wed, Feb 5, 2014 at 5:58 AM, Donna Dierker donna.dier...@sbcglobal.net 
  wrote:
  Use the D/C: Metric Settings menu to adjust the threshold to .90, .85, etc. 
  until you start seeing something.  If you see nothing, set it to zero and 
  start

Re: [caret-users] Visualization in Caret

2014-02-05 Thread Donna Dierker
Use the D/C: Metric Settings menu to adjust the threshold to .90, .85, etc. 
until you start seeing something.  If you see nothing, set it to zero and start 
cranking up in larger increments.  Q=1-p.


On Feb 4, 2014, at 8:09 PM, Eshita Shah eshs...@ucla.edu wrote:

 Hi Donna, 
 
 What file specifically outputs the q-values and how far they are from 
 significance? I think I am able to load the Q statistic column from the f-map 
 onto the Conte69 atlas, but where should I be looking if I want to know what 
 to change the threshold to? 
 
 Thank you, 
 Eshita 
 
 
 On Tue, Feb 4, 2014 at 8:32 AM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 Yes, pretty much:  I usually have a study directory into which I copy the 
 Conte69 files.  Then I rename the Conte69 spec to something more 
 study-specific.  I usually use the Conte69 inflated and very inflated for 
 t-map visualization, along with mean group mid thickness (both medial/lateral 
 surface views, but also overlaid as contours on volume slices).
 
 I don't usually use the TFCE column for visualization, and if I recall 
 correctly, there might be p-value and q-value (1-p, which works better with 
 the Caret thresholding) columns.  This can tell you how close to significance 
 you got.
 
 And yes:  You use the D/C Overlay/Underlay surface menu to control what is 
 displayed, which column, etc.
 
 
 On Feb 3, 2014, at 6:10 PM, Eshita Shah eshs...@ucla.edu wrote:
 
  Yes, that's what I was afraid of. I was expecting significant differences 
  between the two groups. But thanks for clarifying.
 
  I am still a bit confused on how exactly to load the metric files on the 
  Conte69 atlas. Do I open up the Conte69 spec and add data files in the 
  menu to open up TFCE files? And then do I overlay it using D/C -- 
  Overlay/Underlay Surfaces -- Primary Overlay, etc.?
 
  Again, thank you for all your help.
 
  Eshita
 
 
  On Mon, Feb 3, 2014 at 3:49 PM, Donna Dierker donna.dier...@sbcglobal.net 
  wrote:
  No, I think the problem is that nothing survived TFCE thresholding.  If it 
  had, you would see an entry (or more) under the column heads (Column, 
  Thresh, Num-Nodes, etc.).  There is no entry, which means nothing survived.
 
  Column  Thresh  Num-Nodes  Area  Area-Corrected  COG-X  COG-Y  COG-Z  P-Value

  TFCE   P
 
  You can try loading your f-map 
  (ANOVA_29-01-14.OCD_CTRL.Depth.LH.fmap.significant.tfce.1.0E.2.0H.73730.metric)
   and switch to the TFCE column, and apply thresholds corresponding to the 
  list of values right under the column heads, so you can see how close/far 
  you were.
 
  I am under the weather right now, so I will have another look at this 
  tomorrow, but I honestly think you are interpreting it correctly.  If you 
  are like me, you probably are disappointed with these results.  (There are 
  exceptions, of course.)
 
 
  On Feb 3, 2014, at 4:37 PM, Eshita Shah eshs...@ucla.edu wrote:
 
   Donna,
  
   Thank you so much for your thorough response. What I'm worried about as 
   of now is the significance.report.txt file. I have uploaded it using the 
   link you provided, please let me know if there is anything unusual. When 
   I ran ANOVA without TFCE, I had rows of information right below the 
   header, as you mentioned. But for the TFCE report, I don't see anything 
   similar. Maybe I am interpreting it incorrectly?
  
   Thank you,
   Eshita
  
  
   On Fri, Jan 31, 2014 at 1:15 PM, Donna Dierker 
   donna.dier...@sbcglobal.net wrote:
   On Jan 31, 2014, at 2:17 PM, Eshita Shah wrote:
  
   Hi Donna,
  
   Yes! I was able to successfully get past the issue of JRE halting-- I 
   just installed the latest JRE as Tim suggested, and added some options 
   for garbage collection so that it would optimize memory use. Thank you 
   for all your help!
  
   I have computed one mean midthickness for all my subjects, but 
   specifically how do I overlay that onto an anatomical template? Would 
   there be any advantage of using the NIFTI volume vs. using an average 
   volume created from my subject pool?
  
   One advantage of using the template used for stereotaxic/volumetric 
   registration, if any was done, is that it is standard.  Reviewers and 
   readers are more familiar with it, and don't have to understand how it 
   was generated.  This is just for display/orientation -- not for analysis.
  
   Another is that you don't have the extra step of computing a mean volume.
  
    If so, how would I be able to generate that average volume?
  
   I usually use AFNI's 3dMean when I need to do this, but FSL, SPM, and 
   other packages have similar features.  Maybe wb_command supports it now.  
   You can probably do it in multiple steps with caret_command, but it's a 
   pain.
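    
    For what it's worth, the AFNI call is just (placeholder filenames):
    
       3dMean -prefix mean_anat.nii.gz \
          subj01_anat.nii.gz subj02_anat.nii.gz subj03_anat.nii.gz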
  
   I am also a bit unclear on how to interpret and draw conclusions from 
   the outputs of TFCE. I understand that TFCE creates many .metric files 
   including one that indicates all the significant differences

Re: [caret-users] display myelin mapping results

2014-02-05 Thread Donna Dierker
Note that after you do File: Open Data File and load MyelinMapping.metric, you 
also have to select D/C: Overlay/Underlay - Surface and change the primary 
overlay to the MyelinMapping column.  This won't change just by loading the 
file.  You may have done this, but just making sure.  I wouldn't expect these 
maps to look alike.

I really don't know about T1wDividedByT2w's orientation.  I'm not as familiar 
with that pipeline as I'd like.  There is a chance it inverts that way on 
purpose, to align with some canonical volume, but it's a stretch.

I will look at this again later with the data I do have and see if I see the 
same thing. 


On Feb 4, 2014, at 3:10 PM, Cheng, Hu huch...@indiana.edu wrote:

 Hi Donna,
 
 I was able to visualize the metric file (e.g. thickness.metric) on the 
 individual's surface, but I think there is a problem with the myelin mapping 
 result. I couldn't see any change after I loaded MyelinMapping.metric. I 
 tried to visualize the T1w and T2w images; they aligned well with each other, 
 but not with T1wDividedByT2w, which is inverted in the AP and SI directions. Do 
 you have any clue as to what went wrong?
 Thanks!
 
 Hu 
 
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [do...@brainvis.wustl.edu]
 Sent: Thursday, January 30, 2014 11:22 AM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] display myelin mapping results
 
 Hmmm.  I inspected a directory here I know has been through myelin mapping, 
 and it has files named like the ones you list below, but it also has surface 
 files.  (Mine has both *surf.gii surfaces and *.coord.gii/*.topo.gii pairs.  
 May have had other processing.)  But I did confirm these surfaces are on 
 native mesh, which means you canNOT view them on the Conte69 surface.  They 
 are not on the same mesh.
 
 Spec files can be created and added to via script (e.g., wb_command or 
 caret_command, depending on whether you're using Caret5 or workbench), if all 
 you need to do is view the maps on the individuals' surfaces.
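 
 For example (just a sketch -- the spec name, structure tag, and filenames are
 placeholders, and a caret5 .metric would first need converting to GIFTI):
 
    wb_command -add-to-spec-file my_subject.wb.spec CORTEX_LEFT my.L.midthickness.surf.gii
    wb_command -add-to-spec-file my_subject.wb.spec CORTEX_LEFT L.MyelinMapping.func.gii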
 
 There is a freesurfer_to_fs_LR script that uses the Freesurfer registration 
 to get your surfaces on 164k_fs_LR standard mesh.  It creates spec files that 
 can be used with Caret5, but they do not work with workbench.
 
 We will be releasing a version of the HCP pipeline to the public -- probably 
 sometime this year, but things are still being finalized, so it's not ready 
 to roll yet.  Those scripts will produce spec files that work with workbench.
 
 It's not clear how important the standard mesh is to you, but if you want to 
 do cross-subject analyses, you'll probably need it.
 
 
 On Jan 30, 2014, at 8:27 AM, Cheng, Hu huch...@indiana.edu wrote:
 
 Thank you Donna,
 
 The result is on individual's surface. I tried that command but got nothing. 
 There is no spec or scene file under the directory. As stated in the 
 document:
 The output files are:
 L.MyelinMapping.metric
 R.MyelinMapping.metric
 T1wDividedByT2w.nii.gz
 T1wDividedByT2w_ribbon.nii.gz
 
 These include a metric file for each hemisphere with these columns: a raw 
 myelin map (with no outlier correction) a corrected myelin map, a smoothed 
 myelin map, and a cortical thickness corrected for surface curvature.  
 Additional outputs are the T1w/T2w volume, and the same volume containing 
 only the voxels of the cortical ribbon.
 
 Regards,
 
 Hu
 
 
 -Original Message-
 From: caret-users-boun...@brainvis.wustl.edu 
 [mailto:caret-users-boun...@brainvis.wustl.edu] On Behalf Of Donna Dierker
 Sent: Wednesday, January 29, 2014 4:51 PM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] display myelin mapping results
 
 You could do that, but assuming the myelin mapping results output individual 
 myelin maps on the 164k_fs_LR standard mesh (or 32k), then you could 
 actually look at them on the Conte69 atlas (e.g., inflated surface).  You 
 could have multiple subjects' maps loaded and toggle from one to the other.
 
 But if you want to click on the maps and ID node spots on the midthickness 
 surface, for example, so you could see what the individual anatomy looks 
 like, and how its contours overlay on the T1/T2, then you're better off 
 using an individual spec file.
 
 To be honest, I'm not familiar with myelin mapping yet, but I am trying to 
 learn more about it.  Assuming you are on a Linux or MacOSX machine, could 
 you do this command:
 
  find /directory/where/my/myelin/mapping/results/are/located | sort > /tmp/myelinoutputfiles.txt
 
 ... then upload the resulting /tmp/myelinoutputfiles.txt here:
 
 http://pulvinar.wustl.edu/cgi-bin/upload.cgi
 
 I'm wondering if spec or scene files already exist, and want to rule it out 
 before you generate your own.
 
 
 On Jan 29, 2014, at 2:59 PM, Cheng, Hu huch...@indiana.edu wrote:
 
 Hi Donna,
 
 I followed the procedures in Myelin_Mapping_Documentation_v2.doc

Re: [caret-users] display myelin mapping results

2014-02-05 Thread Donna Dierker
I'm not sure about the brain extraction.  Seems worth a try.

Unfortunately, I don't have permission to pass that data onto you; however, I 
did check in the structural directory of one of the Human Connectome Project 
(HCP), and looked at both of these files:

SUBJECT/T1w/T1wDividedByT2w.nii.gz
SUBJECT/T1w/T1wDividedByT2w_ribbon.nii.gz

Both appeared to be in the same orientation as the T1/T2 (i.e., x increases 
left to right, y increases posterior to anterior, and z increases inferior to 
superior).  So if your counterparts are not like that, then something may be 
amiss.
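
If you want to double-check a header yourself, FSL's fslhd prints the 
voxel-to-world transforms, and the sign pattern should match between the ratio 
image and the T1w.  A quick sketch (file names are placeholders for whatever 
your volumes are called):

  # Compare the qform/sform rows of the two volumes; the signs should agree
  fslhd T1w.nii.gz | grep -E 'qto_|sto_'
  fslhd T1wDividedByT2w.nii.gz | grep -E 'qto_|sto_'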


On Feb 5, 2014, at 8:52 AM, Cheng, Hu huch...@indiana.edu wrote:

 Thanks Donna. Yes, I did use overlay, I was able to see the thickness.metric.
 I'm not sure if I need to do brain extraction on T1w and T2w first. I didn't 
 do that. Is there anyway I can get one of your original T1w and T2w images so 
 that I can repeat the procedures on your data? 
 
 Hu
 
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [do...@brainvis.wustl.edu]
 Sent: Wednesday, February 05, 2014 9:42 AM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] display myelin mapping results
 
 Note that after you do File: Open Data File and load MyelinMapping.metric, 
 you also have to select D/C: Overlay/Underlay - Surface and change the 
 primary overlay to the MyelinMapping column.  This won't change just by 
 loading the file.  You may have done this, but just making sure.  I wouldn't 
 expect these maps to look alike.
 
 I really don't know about T1wDividedByT2w's orientation.  I'm not as familiar 
 with that pipeline as I'd like.  There is a chance it inverts that way on 
 purpose, to align with some canonical volume, but it's a stretch.
 
 I will look at this again later with the data I do have and see if I see the 
 same thing.
 
 
 On Feb 4, 2014, at 3:10 PM, Cheng, Hu huch...@indiana.edu wrote:
 
 Hi Donna,
 
 I was able to visualize the metric file (e.g. thickness.metric) on the 
 individual's surface, but I think there is a problem with the myelin mapping 
 result. I couldn't see any change after I loaded MyelinMapping.metric. I 
 tried to visualize T1w and T2w images, they aligned well with each other, 
 but not with T1wDividedByT2w, which is inverted in AP and SI directions. Do 
 you have any clues of what went wrong?
 Thanks!
 
 Hu
 
 
 From: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] on behalf of Donna Dierker 
 [do...@brainvis.wustl.edu]
 Sent: Thursday, January 30, 2014 11:22 AM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] display myelin mapping results
 
 Hmmm.  I inspected a directory here I know has been through myelin mapping, 
 and it has files named like the ones you list below, but it also has surface 
 files.  (Mine has both *surf.gii surfaces and *.coord.gii/*.topo.gii pairs.  
 May have had other processing.)  But I did confirm these surfaces are on 
 native mesh, which means you canNOT view them on the Conte69 surface.  They 
 are not on the same mesh.
 
 Spec files can be created and added to via script (e.g., wb_command or 
 caret_command, depending on whether you're using Caret5 or workbench), if 
 all you need to do is view the maps on the individuals' surfaces.
 
 There is a freesurfer_to_fs_LR script that uses the Freesurfer registration 
 to get your surfaces on 164k_fs_LR standard mesh.  It creates spec files 
 that can be used with Caret5, but they do not work with workbench.
 
 We will be releasing a version of the HCP pipeline to the public -- probably 
 sometime this year, but things are still being finalized, so it's not ready 
 to roll yet.  Those scripts will produce spec files that work with workbench.
 
 It's not clear how important the standard mesh is to you, but if you want to 
 do cross-subject analyses, you'll probably need it.
 
 
 On Jan 30, 2014, at 8:27 AM, Cheng, Hu huch...@indiana.edu wrote:
 
 Thank you Donna,
 
 The result is on individual's surface. I tried that command but got 
 nothing. There is no spec or scene file under the directory. As stated in 
 the document:
 The output files are:
 L.MyelinMapping.metric
 R.MyelinMapping.metric
 T1wDividedByT2w.nii.gz
 T1wDividedByT2w_ribbon.nii.gz
 
 These include a metric file for each hemisphere with these columns: a raw 
 myelin map (with no outlier correction) a corrected myelin map, a smoothed 
 myelin map, and a cortical thickness corrected for surface curvature.  
 Additional outputs are the T1w/T2w volume, and the same volume containing 
 only the voxels of the cortical ribbon.
 
 Regards,
 
 Hu
 
 
 -Original Message-
 From: caret-users-boun...@brainvis.wustl.edu 
 [mailto:caret-users-boun...@brainvis.wustl.edu] On Behalf Of Donna Dierker
 Sent: Wednesday, January 29, 2014 4:51 PM
 To: Caret, SureFit, and SuMS

Re: [caret-users] Visualization in Caret

2014-02-05 Thread Donna Dierker
On Feb 5, 2014, at 4:19 PM, Eshita Shah eshs...@ucla.edu wrote:

 Hi Donna, 
 
 I have tried changing the user threshold in the Metric Settings menu, but 
 nothing seems to change beyond +/- 0.05. There are a few blotches of orange 
 and yellow when it is at 0, and many sub-threshold regions (green) show up 
 when I put it up to 0.05, but nothing else changes as I move beyond 0.05 to 
 higher values (the orange slowly disappears, and it's all green). 

At least in Caret5 (I'm less sure about workbench), thresholding won't work 
properly on the p-value, because thresholding assumes that more extreme 
values -- further from zero -- are the more exceptional ones, whereas the 
opposite is true for p-values, where values closer to 0 are the rarer ones.  
Since q is 1-p, it behaves better with Caret5 thresholding: if you threshold 
at q=.95, you should see fewer vertices than if you threshold at q=.90, like 
percentiles.
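
As an aside, if your file only has a p column, a q column is easy to derive.  
One way, assuming you have workbench's wb_command and a GIFTI copy of the 
metric (file names below are just placeholders):

  # q = 1 - p, written out as a new metric
  wb_command -metric-math '1 - p' q.func.gii -var p p.func.gii

You can then threshold on that q column as described above.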

 Is the value I'm changing the p or the q value? Or does that depend on what 
 column I have loaded in the Threshold Adjustment section?

I'd display and threshold on both, for now, while you are trying to understand 
what the data shows.

 If I am changing the q value, then does it mean that the regions that are 
 showing up have a p-value greater than 0.95 (since nothing changes after 
 0.05) and thus they're not showing up as significant in my report?

If you threshold at q=.95, you should see vertices colored that have p values 
of .05 or less, but you know none exist, because nothing survived in your 
report.  Start at q=0.5.  See some vertices.  Probably lots of them.  Then try 
q=0.75.  You should see the clusters shrink now.  Now 0.90.  Anything?

 Let me know if I am interpreting this the wrong way. 
 
 Also, the coloring somewhat changes depending on the color palette I use. I 
 believe the default is PSYCH, but when I change it to PSYCH-NO-NONE, I see 
 orange blots in more regions than before (and of course what was grey earlier 
 turns dark blue). Why is that? The new orange blots appear in the same 
 positions as the sub-threshold green color does when I change the user 
 threshold to 0.05. 

It is how the palette is defined.  There is a region of the color scale that 
blots out coloring near zero, while the NO-NONE variant removes that gap.  My 
memory fails me as to why, but I remember thinking there was something not 
quite intuitive about the one with the gap.  Palettes are a matter of taste to 
some degree.  Some are better with pos/neg values, while others are better 
with positive-only data, which is what you will have with your f-stats.  For 
figures, I typically don't use p/q-values, but rather t- or f-maps.

But for right now, you're doing a post-mortem on your analysis to see how close 
you were to having differences, so the q-maps will be useful for this purpose.

 Lastly, how do I know which group is baseline and treatment? Does TFCE 
 automatically output the control group as the baseline, so the yellow would 
 indicate that the sulci are deeper in the treatment group vs. control? Or the 
 other way around? 

You used an ANOVA, which should produce an f-map -- all positive.  There should 
be no +/- valence to it, unless I'm misunderstanding what you did.

Out of curiosity, how many subjects were in each group?

If you have only two groups and want to see where one group is deeper than the 
other, you can run a t-test instead of an anova.

 Thanks for your help, 
 Eshita 
 
 
 
 On Wed, Feb 5, 2014 at 5:58 AM, Donna Dierker donna.dier...@sbcglobal.net 
 wrote:
 Use the D/C: Metric Settings menu to adjust the threshold to .90, .85, etc. 
 until you start seeing something.  If you see nothing, set it to zero and 
 start cranking up in larger increments.  Q=1-p.
 
 
 On Feb 4, 2014, at 8:09 PM, Eshita Shah eshs...@ucla.edu wrote:
 
  Hi Donna,
 
  What file specifically outputs the q-values and how far they are from 
  significance? I think I am able to load the Q statistic column from the 
  f-map onto the Conte69 atlas, but where should I be looking if I want to 
  know what to change the threshold to?
 
  Thank you,
  Eshita
 
 
  On Tue, Feb 4, 2014 at 8:32 AM, Donna Dierker do...@brainvis.wustl.edu 
  wrote:
  Yes, pretty much:  I usually have a study directory into which I copy the 
  Conte69 files.  Then I rename the Conte69 spec to something more 
  study-specific.  I usually use the Conte69 inflated and very inflated for 
  t-map visualization, along with mean group mid thickness (both 
  medial/lateral surface views, but also overlaid as contours on volume 
  slices).
 
  I don't usually use the TFCE column for visualization, and if I recall 
  correctly, there might be p-value and q-value (1-p, which works better with 
  the Caret thresholding) columns.  This can tell you how close to 
  significance you got.
 
  And yes:  You use the D/C Overlay/Underlay surface menu to control what is 
  displayed, which column, etc.
 
 
  On Feb 3, 2014, at 6:10 PM, Eshita Shah eshs...@ucla.edu wrote

Re: [caret-users] Visualization in Caret

2014-02-03 Thread Donna Dierker
No, I think the problem is that nothing survived TFCE thresholding.  If it had, 
you would see an entry (or more) under the column heads (Column, Thresh, 
Num-Nodes, etc.).  There is no entry, which means nothing survived.

Column  Thresh  Num-Nodes         Area  Area-Corrected    COG-X    COG-Y    COG-Z  P-Value

TFCE   P

You can try loading your f-map 
(ANOVA_29-01-14.OCD_CTRL.Depth.LH.fmap.significant.tfce.1.0E.2.0H.73730.metric) 
and switch to the TFCE column, and apply thresholds corresponding to the list 
of values right under the column heads, so you can see how close/far you were.

I am under the weather right now, so I will have another look at this tomorrow, 
but I honestly think you are interpreting it correctly.  If you are like me, 
you probably are disappointed with these results.  (There are exceptions, of 
course.)


On Feb 3, 2014, at 4:37 PM, Eshita Shah eshs...@ucla.edu wrote:

 Donna,
 
 Thank you so much for your thorough response. What I'm worried about as of 
 now is the significance.report.txt file. I have uploaded it using the link 
 you provided, please let me know if there is anything unusual. When I ran 
 ANOVA without TFCE, I had rows of information right below the header, as you 
 mentioned. But for the TFCE report, I don't see anything similar. Maybe I am 
 interpreting it incorrectly?
 
 Thank you, 
 Eshita 
 
 
 On Fri, Jan 31, 2014 at 1:15 PM, Donna Dierker donna.dier...@sbcglobal.net 
 wrote:
 On Jan 31, 2014, at 2:17 PM, Eshita Shah wrote:
 
 Hi Donna, 
 
 Yes! I was able to successfully get past the issue of JRE halting-- I just 
 installed the latest JRE as Tim suggested, and added some options for 
 garbage collection so that it would optimize memory use. Thank you for all 
 your help! 
 
 I have computed one mean midthickness for all my subjects, but specifically 
 how do I overlay that onto an anatomical template? Would there be any 
 advantage of using the NIFTI volume vs. using an average volume created from 
 my subject pool?
 
 One advantage of using the template used for stereotaxic/volumetric 
 registration, if any was done, is that it is standard.  Reviewers and readers 
 are more familiar with it, and don't have to understand how it was generated. 
  This is just for display/orientation -- not for analysis.
 
 Another is that you don't have the extra step of computing a mean volume.
 
  If so, how would I be able to generate that average volume? 
 
 I usually use AFNI's 3dMean when I need to do this, but FSL, SPM, and other 
 packages have similar features.  Maybe wb_command supports it now.  You can 
 probably do it in multiple steps with caret_command, but it's a pain.
 
 I am also a bit unclear on how to interpret and draw conclusions from the 
 outputs of TFCE. I understand that TFCE creates many .metric files including 
 one that indicates all the significant differences between the two groups. 
 How can I overlay that (along with the .label file) onto a surface in Caret?
 
 I usually generate a border about the cluster in the label.gii file and 
 overlay it on the unthresholded t-map, so that users can see subthreshold 
 diffs.  I display the t-map on the inflated atlas surface (Conte69, if I 
 recall correctly here).  If there are diffs in the insula/operculum, i use 
 the very inflated surface, which shows them more clearly.
 
 (Where does the mean midthickness come into play?)
 
 Sometimes it is evident just by comparing the mean midthickness surfaces that 
 there is a difference.  Other times, you need to look at a slice view of the 
 template with group contours overlaid at a slice that best shows the diffs.  
 Could be coronal, axial, or sagittal.
 
 Also, how do I interpret the results written in the significance.report text 
 file? 
 
 If you upload your report, I can tell you the lines to focus on:
 
 http://brainvis.wustl.edu/cgi-bin/upload.cgi
 
 They should be near the top, just below a header that lists the column, 
 number of nodes, corrected and uncorrected areas, x, y, z, etc.  I'm psyched 
 you got this far!  I was feeling frustrated after you ran into the JRE 
 problem.  I'm glad you got past it.
 
 Thank you so much. 
 
 Sincerely, 
 Eshita 
 
 
 On Thu, Jan 30, 2014 at 5:17 PM, Donna Dierker donna.dier...@sbcglobal.net 
 wrote:
 Wow, does this mean you got past the grind-to-a-halt JRE problem?  Excellent!
 
 Here is a script I used to compute mean midthickness surfaces for two groups:
 
 http://brainmap.wustl.edu/pub/donna/US/UCLA/ESHITA/gen_mean_fiducials.pared.sh
 login pub
 password download
 
 But the main command is this one:
 
 caret_command -surface-average $OUTCOORD $COORD1 $COORD2 … $COORDn $SHAPE
 
 The $SHAPE is a vertex:scalar mapping identical in format to a metric, but 
 it stores the 3D variability for each vertex.
 
 You can visualize multiple mean coord files (e.g., one for each DX group) 
 overlaid on the same anatomical volume (e.g., avg152T1) and click on hot 
 spots on your metric, to see

Re: [caret-users] Freesurfer to F99 without volume?

2014-01-31 Thread Donna Dierker
I assume you mean this tutorial:

FS-to-F99_Tutorial_Sept10.doc
http://sumsdb.wustl.edu/sums/archivelist.do?archive_id=8285962

I confess I'm not as familiar with the details of that one, but my hunch is 
that the volume is being used here not to drive registration directly, in the 
way it would a volumetric registration to some stereotaxic space.  Rather, it 
is being used here to help you pinpoint the murky medial wall borders.  Often 
the dorsal border is clear from the CC or callosal sulcus.  But it can be 
tricky to know where to place the medial wall ventral border.  If you can 
somehow find a principled way to narrow it down from somewhere in the 
neighborhood of the hippocampus, then you can explain that in your methods as 
a deviation from the tutorial, owing to lack of anatomical volume.


On Jan 30, 2014, at 11:18 AM, Caspar M. Schwiedrzik 
cschwie...@mail.rockefeller.edu wrote:

 Hi Caret Experts, 
 I was wondering whether it is possible to bring results from a surface based 
 analysis in Freesurfer (v5.1) over to Caret into F99 space in case there is 
 no volume available. The results I would like to display are from a group 
 analysis that was entirely done in surface space, in particular they were 
 done by mapping the functional data to a custom surface template; there is no 
 equivalent volume available because the surface template was made iteratively 
 from a number of spheres, not from an average volume. 
 In the tutorial, there are a lot of references to the volume, but I am 
 uncertain whether this is used for registration purposes, or merely for 
 display. 
 Thanks!
 Caspar
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Visualization in Caret

2014-01-31 Thread Donna Dierker
On Jan 31, 2014, at 2:17 PM, Eshita Shah wrote:

 Hi Donna, 
 
 Yes! I was able to successfully get past the issue of JRE halting-- I just 
 installed the latest JRE as Tim suggested, and added some options for garbage 
 collection so that it would optimize memory use. Thank you for all your help! 
 
 I have computed one mean midthickness for all my subjects, but specifically 
 how do I overlay that onto an anatomical template? Would there be any 
 advantage of using the NIFTI volume vs. using an average volume created from 
 my subject pool?

One advantage of using the template used for stereotaxic/volumetric 
registration, if any was done, is that it is standard.  Reviewers and readers 
are more familiar with it, and don't have to understand how it was generated.  
This is just for display/orientation -- not for analysis.

Another is that you don't have the extra step of computing a mean volume.

  If so, how would I be able to generate that average volume? 

I usually use AFNI's 3dMean when I need to do this, but FSL, SPM, and other 
packages have similar features.  Maybe wb_command supports it now.  You can 
probably do it in multiple steps with caret_command, but it's a pain.
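
For what it's worth, the AFNI call is a one-liner, and FSL can do the same 
thing in two steps.  File names below are placeholders, and this assumes all 
the volumes have already been resampled to the same space and grid:

  # AFNI: voxelwise mean across subjects
  3dMean -prefix mean_T1.nii.gz subj01_T1.nii.gz subj02_T1.nii.gz subj03_T1.nii.gz

  # FSL: stack into a 4D file, then average over the 4th dimension
  fslmerge -t all_T1.nii.gz subj*_T1.nii.gz
  fslmaths all_T1.nii.gz -Tmean mean_T1.nii.gz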

 I am also a bit unclear on how to interpret and draw conclusions from the 
 outputs of TFCE. I understand that TFCE creates many .metric files including 
 one that indicates all the significant differences between the two groups. 
 How can I overlay that (along with the .label file) onto a surface in Caret?

I usually generate a border around the cluster in the label.gii file and overlay 
it on the unthresholded t-map, so that users can see subthreshold diffs.  I 
display the t-map on the inflated atlas surface (Conte69, if I recall correctly 
here).  If there are diffs in the insula/operculum, I use the very inflated 
surface, which shows them more clearly.
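
The border step can also be scripted in workbench -- I believe the command is 
wb_command -label-to-border, but treat the exact name and arguments as 
something to verify against wb_command -list-commands on your version.  File 
names below are placeholders:

  # Draw a border around each labeled cluster on the atlas midthickness
  wb_command -label-to-border \
      Conte69.L.midthickness.164k_fs_LR.surf.gii \
      clusters.L.label.gii \
      clusters.L.border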

 (Where does the mean midthickness come into play?)

Sometimes it is evident just by comparing the mean midthickness surfaces that 
there is a difference.  Other times, you need to look at a slice view of the 
template with group contours overlaid at a slice that best shows the diffs.  
Could be coronal, axial, or sagittal.

 Also, how do I interpret the results written in the significance.report text 
 file? 

If you upload your report, I can tell you the lines to focus on:

http://brainvis.wustl.edu/cgi-bin/upload.cgi

They should be near the top, just below a header that lists the column, number 
of nodes, corrected and uncorrected areas, x, y, z, etc.  I'm psyched you got 
this far!  I was feeling frustrated after you ran into the JRE problem.  I'm 
glad you got past it.

 Thank you so much. 
 
 Sincerely, 
 Eshita 
 
 
 On Thu, Jan 30, 2014 at 5:17 PM, Donna Dierker donna.dier...@sbcglobal.net 
 wrote:
 Wow, does this mean you got past the grind-to-a-halt JRE problem?  Excellent!
 
 Here is a script I used to compute mean midthickness surfaces for two groups:
 
 http://brainmap.wustl.edu/pub/donna/US/UCLA/ESHITA/gen_mean_fiducials.pared.sh
 login pub
 password download
 
 But the main command is this one:
 
 caret_command -surface-average $OUTCOORD $COORD1 $COORD2 … $COORDn $SHAPE
 
 The $SHAPE is a vertex:scalar mapping identical in format to a metric, but it 
 stores the 3D variability for each vertex.
 
 You can visualize multiple mean coord files (e.g., one for each DX group) 
 overlaid on the same anatomical volume (e.g., avg152T1) and click on hot 
 spots on your metric, to see if the contours diverge there.  You can also 
 compute the distance between the two surfaces directly on the Surface: 
 Measures menu (if I recall correctly).
 
 Sounds like you're making great progress!
 
 
 On Jan 30, 2014, at 5:27 PM, Eshita Shah eshs...@ucla.edu wrote:
 
  Hello,
 
  I have created metric files from my TFCE statistical analysis that I wish 
  to view on my own study-specific generated average coordinate file. How 
  would I go about doing so? I do have the Conte69 Visualization Atlas, but I 
  am not sure how to overlay the metric files generated by TFCE to visualize 
  significant clusters. I would eventually like to do this overlay on my own 
  average file, not the 164k averages.
 
  Thank you,
  Eshita
 
  --
  Eshita Shah
  University of California, Los Angeles | 2014
  B.S. Neuroscience
  eshs...@ucla.edu
 
 
  ___
  caret-users mailing list
  caret-users@brainvis.wustl.edu
  http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 
 -- 
 Eshita Shah
 University of California, Los Angeles | 2014
 B.S. Neuroscience 
 eshs...@ucla.edu 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users

___
caret

Re: [caret-users] display myelin mapping results

2014-01-30 Thread Donna Dierker
Hmmm.  I inspected a directory here I know has been through myelin mapping, and 
it has files named like the ones you list below, but it also has surface files. 
 (Mine has both *surf.gii surfaces and *.coord.gii/*.topo.gii pairs.  May have 
had other processing.)  But I did confirm these surfaces are on native mesh, 
which means you canNOT view them on the Conte69 surface.  They are not on the 
same mesh.

Spec files can be created and added to via script (e.g., wb_command or 
caret_command, depending on whether you're using Caret5 or workbench), if all 
you need to do is view the maps on the individuals' surfaces.
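
For example, with workbench a per-subject spec file can be built up one file 
at a time; a minimal sketch (the spec name and data file names are 
placeholders, and this assumes workbench-readable formats like surf.gii and 
func.gii):

  # Create or extend a spec file, tagging each file with its structure
  wb_command -add-to-spec-file subject01.spec CORTEX_LEFT subject01.L.midthickness.surf.gii
  wb_command -add-to-spec-file subject01.spec CORTEX_LEFT subject01.L.MyelinMap.func.gii

Opening subject01.spec in wb_view then loads both files together.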

There is a freesurfer_to_fs_LR script that uses the Freesurfer registration to 
get your surfaces on 164k_fs_LR standard mesh.  It creates spec files that can 
be used with Caret5, but they do not work with workbench.

We will be releasing a version of the HCP pipeline to the public -- probably 
sometime this year, but things are still being finalized, so it's not ready to 
roll yet.  Those scripts will produce spec files that work with workbench.

It's not clear how important the standard mesh is to you, but if you want to do 
cross-subject analyses, you'll probably need it.


On Jan 30, 2014, at 8:27 AM, Cheng, Hu huch...@indiana.edu wrote:

 Thank you Donna,
 
 The result is on individual's surface. I tried that command but got nothing. 
 There is no spec or scene file under the directory. As stated in the document:
 The output files are: 
 L.MyelinMapping.metric
 R.MyelinMapping.metric
 T1wDividedByT2w.nii.gz 
 T1wDividedByT2w_ribbon.nii.gz
 
 These include a metric file for each hemisphere with these columns: a raw 
 myelin map (with no outlier correction) a corrected myelin map, a smoothed 
 myelin map, and a cortical thickness corrected for surface curvature.  
 Additional outputs are the T1w/T2w volume, and the same volume containing 
 only the voxels of the cortical ribbon.   
 
 Regards,
 
 Hu
 
 
 -Original Message-
 From: caret-users-boun...@brainvis.wustl.edu 
 [mailto:caret-users-boun...@brainvis.wustl.edu] On Behalf Of Donna Dierker
 Sent: Wednesday, January 29, 2014 4:51 PM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] display myelin mapping results
 
 You could do that, but assuming the myelin mapping results output individual 
 myelin maps on the 164k_fs_LR standard mesh (or 32k), then you could actually 
 look at them on the Conte69 atlas (e.g., inflated surface).  You could have 
 multiple subjects' maps loaded and toggle from one to the other.
 
 But if you want to click on the maps and ID node spots on the midthickness 
 surface, for example, so you could see what the individual anatomy looks 
 like, and how its contours overlay on the T1/T2, then you're better off using 
 an individual spec file.
 
 To be honest, I'm not familiar with myelin mapping yet, but I am trying to 
 learn more about it.  Assuming you are on a Linux or MacOSX machine, could 
 you do this command:
 
  find /directory/where/my/myelin/mapping/results/are/located | sort > /tmp/myelinoutputfiles.txt
 
 ... then upload the resulting /tmp/myelinoutputfiles.txt here:
 
 http://pulvinar.wustl.edu/cgi-bin/upload.cgi
 
 I'm wondering if spec or scene files already exist, and want to rule it out 
 before you generate your own.
 
 
 On Jan 29, 2014, at 2:59 PM, Cheng, Hu huch...@indiana.edu wrote:
 
 Hi Donna,
 
 I followed the procedures in Myelin_Mapping_Documentation_v2.doc and 
 finished processing my own data. I just wonder how to display the results 
 just as viewing conte69 results. Should I copy their spec file and replace 
 all the files?
 Thank you very much! 
 
 Regards,
 
 Hu
 
 
 -Original Message-
 From: caret-users-boun...@brainvis.wustl.edu 
 [mailto:caret-users-boun...@brainvis.wustl.edu] On Behalf Of Donna 
 Dierker
 Sent: Monday, January 20, 2014 6:18 PM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] only got one hemisphere from surefit
 
 SureFit only segments one hemisphere at a time.  You need to crop to a left 
 or right hemisphere and run them separately.
 
 
 On Jan 20, 2014, at 3:36 PM, Cheng, Hu huch...@indiana.edu wrote:
 
 Dear Caret User,
 
 I'm running Caret on a 64-bit Windows. I tried to segment an individual's 
 T1w anatomy using surefit. I set the origin at AC and select both in 
 structure.  However, I only got left hemisphere segmented. The inflated 
 surface is only half of the brain. What did I do wrong?
 Thanks for your help!
 
 Hu Cheng, Ph.D., DABMP
 MRI Physicist, Imaging Research Facility Department of Psychological 
 and Brain Sciences Indiana University Bloomington, IN 47405 Tel.
 812-856-2518 Fax. 812-855-4691
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http

Re: [caret-users] Visualization in Caret

2014-01-30 Thread Donna Dierker
Wow, does this mean you got past the grind-to-a-halt JRE problem?  Excellent!

Here is a script I used to compute mean midthickness surfaces for two groups:

http://brainmap.wustl.edu/pub/donna/US/UCLA/ESHITA/gen_mean_fiducials.pared.sh
login pub
password download

But the main command is this one:

caret_command -surface-average $OUTCOORD $COORD1 $COORD2 … $COORDn $SHAPE

The $SHAPE is a vertex:scalar mapping identical in format to a metric, but it 
stores the 3D variability for each vertex.
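
Filled in with placeholder file names (same argument order as the line above), 
that might look like:

  caret_command -surface-average \
      GROUP1.MEAN.MIDTHICKNESS.73730.coord \
      subj01.L.MIDTHICKNESS.73730.coord \
      subj02.L.MIDTHICKNESS.73730.coord \
      subj03.L.MIDTHICKNESS.73730.coord \
      GROUP1.3D_variability.73730.surface_shape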

You can visualize multiple mean coord files (e.g., one for each DX group) 
overlaid on the same anatomical volume (e.g., avg152T1) and click on hot spots 
on your metric, to see if the contours diverge there.  You can also compute the 
distance between the two surfaces directly on the Surface: Measures menu (if I 
recall correctly).

Sounds like you're making great progress!


On Jan 30, 2014, at 5:27 PM, Eshita Shah eshs...@ucla.edu wrote:

 Hello, 
 
 I have created metric files from my TFCE statistical analysis that I wish to 
 view on my own study-specific generated average coordinate file. How would I 
 go about doing so? I do have the Conte69 Visualization Atlas, but I am not 
 sure how to overlay the metric files generated by TFCE to visualize 
 significant clusters. I would eventually like to do this overlay on my own 
 average file, not the 164k averages. 
 
 Thank you,
 Eshita 
 
 -- 
 Eshita Shah
 University of California, Los Angeles | 2014
 B.S. Neuroscience 
 eshs...@ucla.edu 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Create Foci

2014-01-29 Thread Donna Dierker
Can you upload your cdv file here (just one of them):

http://pulvinar.wustl.edu/cgi-bin/upload.cgi

And tell us which atlas/tutorial you were using for mapping the foci (e.g., 
PALS or Conte69)?


On Jan 28, 2014, at 3:43 PM, Lauri lort...@yahoo.com wrote:

 Dear caret-users,
 
 I am trying to create a map with a lot of different foci from many different 
 studies. I have tried to do it with the two different .csv files (the color 
 one and the coordinates one) without any luck; my second option was creating 
 all the foci one by one. I don’t have more than a hundred, so it won’t be a 
 waste of time doing it one by one, but I can’t project them without 
 displaying other foci from the studies previously included. Could you provide 
 me with the instructions to create single foci?
 
 Thank you very much!
 L
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] display myelin mapping results

2014-01-29 Thread Donna Dierker
You could do that, but assuming the myelin mapping results output individual 
myelin maps on the 164k_fs_LR standard mesh (or 32k), then you could actually 
look at them on the Conte69 atlas (e.g., inflated surface).  You could have 
multiple subjects' maps loaded and toggle from one to the other.

But if you want to click on the maps and ID node spots on the midthickness 
surface, for example, so you could see what the individual anatomy looks like, 
and how its contours overlay on the T1/T2, then you're better off using an 
individual spec file.

To be honest, I'm not familiar with myelin mapping yet, but I am trying to 
learn more about it.  Assuming you are on a Linux or MacOSX machine, could you 
do this command:

find /directory/where/my/myelin/mapping/results/are/located | sort > /tmp/myelinoutputfiles.txt

… then upload the resulting /tmp/myelinoutputfiles.txt here:

http://pulvinar.wustl.edu/cgi-bin/upload.cgi

I'm wondering if spec or scene files already exist, and want to rule it out 
before you generate your own.


On Jan 29, 2014, at 2:59 PM, Cheng, Hu huch...@indiana.edu wrote:

 Hi Donna,
 
 I followed the procedures in Myelin_Mapping_Documentation_v2.doc and finished 
 processing my own data. I just wonder how to display the results just as 
 viewing conte69 results. Should I copy their spec file and replace all the 
 files?
 Thank you very much! 
 
 Regards,
 
 Hu
 
 
 -Original Message-
 From: caret-users-boun...@brainvis.wustl.edu 
 [mailto:caret-users-boun...@brainvis.wustl.edu] On Behalf Of Donna Dierker
 Sent: Monday, January 20, 2014 6:18 PM
 To: Caret, SureFit, and SuMS software users
 Subject: Re: [caret-users] only got one hemisphere from surefit
 
 SureFit only segments one hemisphere at a time.  You need to crop to a left 
 or right hemisphere and run them separately.
 
 
 On Jan 20, 2014, at 3:36 PM, Cheng, Hu huch...@indiana.edu wrote:
 
 Dear Caret User,
 
 I'm running Caret on a 64-bit Windows. I tried to segment an individual's 
 T1w anatomy using surefit. I set the origin at AC and select both in 
 structure.  However, I only got left hemisphere segmented. The inflated 
 surface is only half of the brain. What did I do wrong?
 Thanks for your help!
 
 Hu Cheng, Ph.D., DABMP
 MRI Physicist, Imaging Research Facility Department of Psychological 
 and Brain Sciences Indiana University Bloomington, IN 47405 Tel. 
 812-856-2518 Fax. 812-855-4691
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Using caret_stats for TFCE

2014-01-23 Thread Donna Dierker
This sounds very much like a problem I had before switching JRE's.  This 
version has been known to work on Ubuntu 10.10 (Linux 2.6.35-32-generic 
#67-Ubuntu SMP Mon Mar 5 19:39:49 UTC 2012 x86_64 GNU/Linux):

java version 1.6.0_21
Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
Java HotSpot(TM) 64-Bit Server VM (build 17.0-b16, mixed mode)

Tim Coalson believed the problem was somehow related to a cache size: once it 
reached that size, java performance plummeted, as you saw.  Switching JRE's 
fixed the problem, and I don't have a non-java TFCE version I can send you.
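
For what it's worth, pointing the scripts at a specific JRE is usually just a 
matter of putting it first on the PATH, and heap size can be raised through 
the standard JVM options.  The path below is a placeholder for wherever you 
unzip the JRE, and whether a bigger heap helps depends on your machine:

  # Make the unzipped JRE the one that gets picked up, and allow a 4 GB heap
  export PATH=/opt/linux_java/jre1.6.0_21/bin:$PATH
  export _JAVA_OPTIONS="-Xmx4g"
  java -version   # confirm which JRE is actually being used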


On Jan 22, 2014, at 4:27 PM, Eshita Shah eshs...@ucla.edu wrote:

 Yes, it works fine at 1000 iterations. I am trying to run 5000. It seems to 
 get really slow after 1000, but continues on to about 2600 until it crashes. 
 
 
 On Wed, Jan 22, 2014 at 2:13 PM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 How many iterations did you specify?  Have you tried it with 1000 iterations?
 
 
 On Jan 22, 2014, at 3:09 PM, Eshita Shah eshs...@ucla.edu wrote:
 
  Hi Donna,
 
  I have been trying to allocate more memory to caret_stats so it can run 
  java properly without crashing. However, this has not worked. I just 
  downloaded the JRE you provided, and it is actually the same one that I am 
  using.
 
  Do you think there is another source of this problem?
 
  Thank you,
  Eshita
 
 
  On Wed, Jan 15, 2014 at 7:23 PM, Donna Dierker 
  donna.dier...@sbcglobal.net wrote:
  At the end of the TFCE processing, you should get a text file named 
  something like *report.txt, along with a *ignif*metric.  If you don't, then 
  it's not finishing normally.
 
  How many iterations are you using?  If it doesn't finish overnight with 5k 
  iterations, then it might be your java runtime engine.  Before I started 
  using this one, my java runtime engine just hung or grinded to a near halt 
  after a few thousand iterations:
 
  http://brainmap.wustl.edu/pub/donna/US/WVU/linux_java.zip
  login pub
  password download
 
  You can use others; just be aware that the JRE can be an obstacle.
 
 
  On Jan 15, 2014, at 3:32 PM, Eshita Shah eshs...@ucla.edu wrote:
 
   Hi Donna,
  
   I am running a script you sent me a while ago, which uses caret_stats to 
   run ANOVA and then TFCE for significant cluster analysis. I am noticing 
   that the output files generated after running the script do not match up 
   to the expected outputs. I am seeing files such as .metric.data1, 
   .metric.data2, etc. I'm not sure where they are coming from - my hunch 
   is that the script is being aborted at some point (because I am not 
   getting any of the TFCE outputs), but I'm not exactly sure why these 
   files would be generated or what they are.
  
   Please let me know if you have any ideas.
  
   Thank you,
   Eshita
  
   --
   Eshita Shah
   University of California, Los Angeles | 2014
   B.S. Neuroscience
   eshs...@ucla.edu
  
  
   ___
   caret-users mailing list
   caret-users@brainvis.wustl.edu
   http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
  ___
  caret-users mailing list
  caret-users@brainvis.wustl.edu
  http://brainvis.wustl.edu/mailman/listinfo/caret-users
  ___
  caret-users mailing list
  caret-users@brainvis.wustl.edu
  http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 
 -- 
 Eshita Shah
 University of California, Los Angeles | 2014
 B.S. Neuroscience 
 eshs...@ucla.edu 
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Using caret_stats for TFCE

2014-01-22 Thread Donna Dierker
How many iterations did you specify?  Have you tried it with 1000 iterations?


On Jan 22, 2014, at 3:09 PM, Eshita Shah eshs...@ucla.edu wrote:

 Hi Donna, 
 
 I have been trying to allocate more memory to caret_stats so it can run java 
 properly without crashing. However, this has not worked. I just downloaded 
 the JRE you provided, and it is actually the same one that I am using. 
 
 Do you think there is another source of this problem? 
 
 Thank you, 
 Eshita 
 
 
 On Wed, Jan 15, 2014 at 7:23 PM, Donna Dierker donna.dier...@sbcglobal.net 
 wrote:
 At the end of the TFCE processing, you should get a text file named something 
 like *report.txt, along with a *ignif*metric.  If you don't, then it's not 
 finishing normally.
 
 How many iterations are you using?  If it doesn't finish overnight with 5k 
 iterations, then it might be your java runtime engine.  Before I started 
 using this one, my java runtime engine just hung or grinded to a near halt 
 after a few thousand iterations:
 
 http://brainmap.wustl.edu/pub/donna/US/WVU/linux_java.zip
 login pub
 password download
 
 You can use others; just be aware that the JRE can be an obstacle.
 
 
 On Jan 15, 2014, at 3:32 PM, Eshita Shah eshs...@ucla.edu wrote:
 
  Hi Donna,
 
  I am running a script you sent me a while ago, which uses caret_stats to 
  run ANOVA and then TFCE for significant cluster analysis. I am noticing 
  that the output files generated after running the script do not match up to 
  the expected outputs. I am seeing files such as .metric.data1, 
  .metric.data2, etc. I'm not sure where they are coming from - my hunch is 
  that the script is being aborted at some point (because I am not getting 
  any of the TFCE outputs), but I'm not exactly sure why these files would be 
  generated or what they are.
 
  Please let me know if you have any ideas.
 
  Thank you,
  Eshita
 
  --
  Eshita Shah
  University of California, Los Angeles | 2014
  B.S. Neuroscience
  eshs...@ucla.edu
 
 
  ___
  caret-users mailing list
  caret-users@brainvis.wustl.edu
  http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] only got one hemisphere from surefit

2014-01-20 Thread Donna Dierker
SureFit only segments one hemisphere at a time.  You need to crop to a left or 
right hemisphere and run them separately.


On Jan 20, 2014, at 3:36 PM, Cheng, Hu huch...@indiana.edu wrote:

 Dear Caret User,
 
 I’m running Caret on a 64-bit Windows. I tried to segment an individual’s T1w 
 anatomy using surefit. I set the origin at AC and select “both” in structure. 
  However, I only got left hemisphere segmented. The inflated surface is only 
 half of the brain. What did I do wrong?
 Thanks for your help!
 
 Hu Cheng, Ph.D., DABMP
 MRI Physicist, Imaging Research Facility
 Department of Psychological and Brain Sciences
 Indiana University
 Bloomington, IN 47405
 Tel. 812-856-2518
 Fax. 812-855-4691
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] freesurfer to caret piepeline

2013-12-03 Thread Donna Dierker
Ugh -- you're not getting any breaks, are you?

caret5 has some options at launch:

 -style  style-name
Set the user-interface style where style-name is one of:
   Windows
   Motif
   CDE
   Plastique
   GTK+
   Cleanlooks

See if entering caret5 -style Cleanlooks works better.


On Dec 3, 2013, at 1:36 PM, Taosheng Liu ts...@msu.edu wrote:

 Hi Donna,
  The older version of caret_command works fine, in both preborder and 
 postborder, but when we need to adjust the landmarks using caret5's GUI, we 
 got this error:
 
 Gtk-CRITICAL **: IA__gtk_widget_style_get: assertion `GTK_IS_WIDGET (widget)' 
 failed
 
  The computer then pretty much freezes. It seems there's something wrong with 
 X-windows? We're running 64bit Linux (Ubuntu). Do you have any suggestions? I 
 couldn't find any obvious solution by searching the web. The new caret5 GUI 
 works just fine.
  Thank you,
 
 --ts
 
 
 On 11/27/2013 05:18 PM, Donna Dierker wrote:
 Good question.  You could use different versions for the two steps, but 
 you'd have to report that in your methods, which could get messy.  Running 
 preborder.sh with the older caret_command probably wouldn't take that long, 
 but if you tweaked your borders, it could overwrite them, depending on what 
 you used for the borderproj update string (e.g., updated).
 
 To get around that, you could tar/zip just your tweaked borderproj files; 
 re-run preborder.sh; unzip the preserved tweaked borderproj; and run postborder.sh.
 
 
 On Nov 27, 2013, at 2:01 PM, Taosheng Liu ts...@msu.edu wrote:
 
 Thanks for the tip. Just another question, if we switch to the older 
 version, do you think we should re-run the preborder script as well? or we 
 can pick up from where preborder has ended and just run postborder with the 
 older caret_command?
 Thank you,
 
 --ts
 
 On 11/27/2013 02:33 PM, Donna Dierker wrote:
 I agree, the check_reg captures look great to me.  The 144 cycles are 
 troubling.  In your shoes, I'd revert.
 
 
 On Nov 27, 2013, at 10:41 AM, Taosheng Liu ts...@msu.edu wrote:
 
 Hi Donna,
  Thanks for testing this, and for your fast response. Yes we're not 
 getting a core dump and it does finish, although with 144 cycles. Are you 
 saying we should switch back to the old caret for the time being? I mean, 
 the check_reg result seems fine, but maybe there's still something wrong 
 with it?
  Thank you!
 
 --ts
 
 
 
 On 11/27/2013 11:27 AM, Donna Dierker wrote:
 Hi Taosheng,
 
 I tried running postborder.sh on a dataset I have here, and I got a core 
 dump on the offending line:
 
 + caret_command -surface-sphere-multi-morph Human.SAIS_018.L.73730.spec 
 deformed_Human.SAIS_018.L.Midthickness_711-2B.mws.coord 
 Human.SAIS_018.L.SPHERICAL.73730.MWS.coord 
 Human.SAIS_018.L.CLOSED.73730.topo
 ./PALS_B12.LR/postborder.sh: line 50: 27257 Segmentation fault  
 (core dumped) caret_command -surface-sphere-multi-morph $SPEC 
 $FIDUCIAL_MWS $SPHERE_MWS $TOPO
 
 You're not getting a core dump, but something else is clearly going awry 
 with that line.
 
 I tried backtracking to a June 2011 vintage caret, and it completed with 
 no problems.  Here is the version I used:
 
 http://brainvis.wustl.edu/pub/caret/caret_distribution_Linux64.v5.64.zip
 login pub
 password download
 
 Here are the logs for the for the two versions of caret (bad and good):
 
 Problem log:
  http://brainmap.wustl.edu/pub/donna/US/MI/postborder.log
 Good log:
  http://brainvis.wustl.edu/pub/donna/US/MI/postborder.201106.log
 
 I'm not sure when caret_command will be fixed, but I will pass the 
 dataset and details on to the developers.
 
 Donna
 
 
 On Nov 26, 2013, at 11:58 AM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 
 Hi Taosheng,
 
 I am going to try postborder.sh on my new Linux box and see if it does 
 the same thing.
 
 I don't see any candidate culprits in my ~/.caret5_preferences file.
 
 No one else has reported this issue.  More users are moving to the 
 fs_LR pipeline:
 
 http://brainvis.wustl.edu/wiki/index.php/Caret:Operations/Freesurfer_to_fs_LR
 
 There are trade-offs which are discussed here:
 
 http://cercor.oxfordjournals.org/content/early/2011/11/02/cercor.bhr291.full.pdf+html
 
 One big plus is that there is no border tweaking.
 
  I'll let you know how my postborder.sh trial comes out, or if I need 
 sample data.  But I think I have plenty of sample data right here.  I 
 can't imagine what might be unique about our data that would cause this.
 
 Donna
 
 
 On Nov 26, 2013, at 8:22 AM, Taosheng Liu ts...@msu.edu wrote:
 
 Hi Donna,
 Yes I can see that line of code in the script. Have you other others 
 on this list run this command lately? I wonder if you also see this 
 many cycles. If not, I wonder if there is some global setting I should 
 set, or something is off with my data. I can certainly supply the data 
 if needed.
 Thank you,
 
 --Taosheng
 
 
 On 11/25/2013 06:50 PM, Donna Dierker wrote:
 Hmmm.  Your results

Re: [caret-users] Paintfiles in Native Space

2013-12-02 Thread Donna Dierker
If you have opted to run the registration both ways (individual to atlas and 
atlas to individual), you should be able to get the atlas goodies on the native 
mesh.  If they were not in the spec file when you ran the registration, then 
you can apply the deformation map in the native directory to the paint file in 
the atlas directory, specifying the output deformed*paint be written in the 
native directory.
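
If I remember the switch correctly it is something like the line below, but 
please treat the option name, the file-type keyword, and the argument order as 
assumptions and check caret_command's built-in help before relying on it.  
File names are placeholders:

  # Assumed syntax -- verify against caret_command's help first
  caret_command -deformation-map-apply \
      atlas_to_individual.deform_map \
      PAINT \
      atlas_parcellation.paint \
      native_parcellation.paint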


On Dec 2, 2013, at 10:06 AM, Konrad Wagstyl kw...@cam.ac.uk wrote:

 Hi,
 
 I've registered Macaque data using the FS to F99 tutorial in order to use the 
 parcellation schemes.
 I understand this resamples the subject's surface giving a different number 
 and spacing of vertices.
 Is it possible to generate a file with the native space vertices and the 
 atlas region in which each vertex lies?
 
 I have data values for these individual vertices that I would like to compare 
 based on their anatomical region.
 
 Thanks,
 Konrad
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] unable to choose Map To Spec File with Atlas

2013-11-28 Thread Donna Dierker
I have a suspicion:  Somehow, you could have gotten a link, a shortcut, or the 
caret5 binary itself onto your desktop or some other location that is displaced 
from the rest of the caret distribution.  (This happened to me recently, which 
is why it comes to mind.)  In order to enable that option, caret needs to find 
the atlas spec files in $CARET_HOME/data_files/fmri_mapping_files, and it 
deduces $CARET_HOME as the parent directory of the caret5 binary used to start 
caret.
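
A quick way to check from a terminal (assuming caret5 is on your PATH; the 
exact directory layout can vary a little between distributions):

  # Which caret5 binary actually runs?
  which caret5
  # The mapping data should sit one level up from the bin directory
  ls "$(dirname "$(which caret5)")/../data_files/fmri_mapping_files" | head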

Let's hope this sheds some light on your issue.


On Nov 28, 2013, at 6:16 AM, Eva Hilland evahill...@gmail.com wrote:

 Dear Caret users,
 
 I am trying to map SPM8 functional data to the Caret template. In the GUI for 
 Map Volume(s) to Surface(s)  Spec File and Surface Selection - I can only 
 choose Map To Spec File, and not the Map To Spec File with Atlas.
 
 Any idea why this option doesn´t work for me? It worked fine before.
 
 I am greatful for any tips!
 
 Best,
 Eva
 Screen Shot 2013-11-28 at 
 13.02.45.png___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] freesurfer to caret piepeline

2013-11-27 Thread Donna Dierker
Good question.  You could use different versions for the two steps, but you'd 
have to report that in your methods, which could get messy.  Running 
preborder.sh with the older caret_command probably wouldn't take that long, but if 
you tweaked your borders, it could overwrite them, depending on what you used 
for the borderproj update string (e.g., updated).

To get around that, you could tar/zip just your tweaked borderproj files; 
re-run preborder.sh; unzip the preserved tweaked borderproj; and run 
postborder.sh.
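
In shell terms the workaround amounts to something like this (the archive name 
is just a placeholder):

  # Preserve the hand-tweaked border projections
  tar czf tweaked_borderproj.tar.gz *.borderproj
  # Re-run the first stage with the older caret_command on the PATH
  ./preborder.sh
  # Restore the tweaked borders over the regenerated ones, then finish
  tar xzf tweaked_borderproj.tar.gz
  ./postborder.sh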


On Nov 27, 2013, at 2:01 PM, Taosheng Liu ts...@msu.edu wrote:

 Thanks for the tip. Just another question, if we switch to the older version, 
 do you think we should re-run the preborder script as well? or we can pick up 
 from where preborder has ended and just run postborder with the older 
 caret_command?
 Thank you,
 
 --ts
 
 On 11/27/2013 02:33 PM, Donna Dierker wrote:
 I agree, the check_reg captures look great to me.  The 144 cycles are 
 troubling.  In your shoes, I'd revert.
 
 
 On Nov 27, 2013, at 10:41 AM, Taosheng Liu ts...@msu.edu wrote:
 
 Hi Donna,
  Thanks for testing this, and for your fast response. Yes we're not getting 
 a core dump and it does finish, although with 144 cycles. Are you saying we 
 should switch back to the old caret for the time being? I mean, the 
 check_reg result seems fine, but maybe there's still something wrong with 
 it?
  Thank you!
 
 --ts
 
 
 
 On 11/27/2013 11:27 AM, Donna Dierker wrote:
 Hi Taosheng,
 
 I tried running postborder.sh on a dataset I have here, and I got a core 
 dump on the offending line:
 
 + caret_command -surface-sphere-multi-morph Human.SAIS_018.L.73730.spec 
 deformed_Human.SAIS_018.L.Midthickness_711-2B.mws.coord 
 Human.SAIS_018.L.SPHERICAL.73730.MWS.coord 
 Human.SAIS_018.L.CLOSED.73730.topo
 ./PALS_B12.LR/postborder.sh: line 50: 27257 Segmentation fault  (core 
 dumped) caret_command -surface-sphere-multi-morph $SPEC $FIDUCIAL_MWS 
 $SPHERE_MWS $TOPO
 
 You're not getting a core dump, but something else is clearly going awry 
 with that line.
 
 I tried backtracking to a June 2011 vintage caret, and it completed with 
 no problems.  Here is the version I used:
 
 http://brainvis.wustl.edu/pub/caret/caret_distribution_Linux64.v5.64.zip
 login pub
 password download
 
 Here are the logs for the for the two versions of caret (bad and good):
 
 Problem log:
http://brainmap.wustl.edu/pub/donna/US/MI/postborder.log
 Good log:
http://brainvis.wustl.edu/pub/donna/US/MI/postborder.201106.log
 
 I'm not sure when caret_command will be fixed, but I will pass the dataset 
 and details on to the developers.
 
 Donna
 
 
 On Nov 26, 2013, at 11:58 AM, Donna Dierker do...@brainvis.wustl.edu 
 wrote:
 
 Hi Taosheng,
 
 I am going to try postborder.sh on my new Linux box and see if it does 
 the same thing.
 
 I don't see any candidate culprits in my ~/.caret5_preferences file.
 
 No one else has reported this issue.  More users are moving to the fs_LR 
 pipeline:
 
 http://brainvis.wustl.edu/wiki/index.php/Caret:Operations/Freesurfer_to_fs_LR
 
 There are trade-offs which are discussed here:
 
 http://cercor.oxfordjournals.org/content/early/2011/11/02/cercor.bhr291.full.pdf+html
 
 One big plus is that there is no border tweaking.
 
  I'll let you know how my postborder.sh trial comes out, or if I need 
 sample data.  But I think I have plenty of sample data right here.  I 
 can't imagine what might be unique about our data that would cause this.
 
 Donna
 
 
 On Nov 26, 2013, at 8:22 AM, Taosheng Liu ts...@msu.edu wrote:
 
 Hi Donna,
 Yes I can see that line of code in the script. Have you other others on 
 this list run this command lately? I wonder if you also see this many 
 cycles. If not, I wonder if there is some global setting I should set, 
 or something is off with my data. I can certainly supply the data if 
 needed.
 Thank you,
 
 --Taosheng
 
 
 On 11/25/2013 06:50 PM, Donna Dierker wrote:
 Hmmm.  Your results look great, but 144 cycles of spherical morphing is 
 certainly not normal.  I think the line in the script that does this is 
 this one:
 
#MULTIRESOLUTION MORPHING
caret_command -surface-sphere-multi-morph $SPEC $FIDUCIAL_MWS 
 $SPHERE_MWS $TOPO
 
 There are no parameters for this one; they are built in, apparently, so 
 I'm not sure what could be going on.
 
 I'm stumped.
 
 
 On Nov 25, 2013, at 4:27 PM, Taosheng Liu ts...@msu.edu wrote:
 
 Hi Donna,
 I've been using the preborder and postborder scripts to convert files 
 from FreeSurfer to Caret (hope you still remember I was involved in 
 early testing of these scripts). Things  have been working great. I 
 haven't done this for a while, but recently I changed computer and 
 installed the current version of Caret and these scripts, and there 
 seems to be some difference in how the code works. I just want to make 
 sure this is expected behavior.
 Specifically, when we run postborder script, it takes much longer. It 
 seems either

Re: [caret-users] Try #2

2013-11-23 Thread Donna Dierker
It is possible the TFCE test found no significant vertices, while the cluster 
method did.

The TFCE-generated report *does* list significant clusters near the top.  (Note 
that the TFCE test only establishes the enhanced threshold a vertex must meet 
to be significant.  It doesn't assign clusters as significant the way the 
cluster method does, but if any vertices do meet the significance threshold and 
they form clusters, then caret_stats writes a paint/label file and includes a 
list of them, with areas, near the top of the report.)

It could look like this:

Column  Thresh  Num-Nodes         Area  Area-Corrected    COG-X    COG-Y    COG-Z  P-Value
     3   0.975       1997  1286.165283     2093.538574   51.174  -26.128   -1.203
     3   0.975        906   740.481445      937.846191   54.076  -30.397   17.204
     3   0.975        346   373.274231      513.194214   39.922   10.387   10.669
     3   0.975        796   432.754486      459.654327   31.071    8.880  -12.815
     3   0.975        317   145.114990      428.371918   40.161  -50.331   42.466
     3   0.975         13     8.440027       19.833586   50.474  -57.879   33.487

If you just see the column heads, then TFCE results were negative.


On Nov 22, 2013, at 4:18 PM, Eshita Shah eshs...@ucla.edu wrote:

 I am also wondering how I should interpret the results that are generated by 
 the TFCE script. I know the ANOVA test on caret_command generated some text 
 files noting nodes of significance, but from what I'm seeing, the TFCE script 
 does not do so. How would I use the metric file to see the significant 
 differences that were found? 
 
 Thank you,
 Eshita 
 
 
 On Fri, Nov 22, 2013 at 1:42 PM, Eshita Shah eshs...@ucla.edu wrote:
 Should I be using the Depth or the Smoothed Depth column? Are there 
 significant differences between them?
 
 Also, I haven't been able to properly load any of the paint files onto the 
 fiducial for viewing in caret. I'm wondering how to do that, especially if I 
 want to highlight the areas that are found significantly different between 
 the two groups after statistical analysis. 
 
 Thank you, 
 Eshita 
 
 
 On Thu, Nov 21, 2013 at 4:33 PM, Donna Dierker donna.dier...@sbcglobal.net 
 wrote:
 I would use the Conte69 for TFCE/cluster area computation purposes.  Think of 
 it as a neutral, unbiased atlas surface.
 
 The topology files only define neighbor relationships, so on a standard mesh, 
 the same topo will work with a variety of configurations that are on that 
 mesh.  The ones I gave you should be fine.
 
 One thing I don't recall talking about is generating composite files of the 
 depth metric/shape files.  (Metric and shape are identical in data format.  
 Metric was intended more for overlay/functional, while surface_shape is 
 intended more for anatomical measures like depth, curvature, thickness, etc.  
 But the metric menu has more features than the surface_shape menu, so I 
 sometimes purposely use metric.  For this purpose, either is fine.)
 
 The ANOVA test wants composite files for each treatment/group (maybe what you 
 meant by factor level).  So at some point you need to generate composite files 
 to concatenate your subjects into one composite per group.
 
 http://brainmap.wustl.edu/pub/donna/FREESURFER/SCRIPTS/2009_10/SCRIPTS/gen_composite_filcav.sh
 login pub
 password download
 
 In that example, Depth was the second of multiple columns per subject.  I 
 don't recall what it is for the fs_LR stream.  But if you run caret_command 
 -metric-information on one of your surface_shape files, you'll find out which 
 column has Depth.
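 
 For example (the file name below is just a placeholder for one of your 
 subjects' shape files):
 
    # prints the column names, so you can see which column is Depth
    caret_command -metric-information subject01.L.164k_fs_LR.surface_shape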
 
 
 On Nov 21, 2013, at 4:58 PM, Eshita Shah eshs...@ucla.edu wrote:
 
  Hi Donna,
 
  Thank you for the files. I seem to be understanding so far what the sample 
  script is doing but I do have a few questions. For the data file input into 
  ANOVA using caret_stats, I notice it's in a different format than in 
  caret_command ANOVA. I just want to clarify that each data file is still a 
  metric file that contains all of the subjects for one factor level. 
  Secondly, I realize I am using the fs_LR average open topo files you 
  provided earlier, but for the average fiducial coordinate file, should I be 
  also using the Conte69 average? I know you pointed out that my data is less 
  comparable to the fs_LR standard mesh data, so I am curious as to whether I 
  should just generate my own average fiducial file and use that instead.
 
  Let me know if I'm heading the right way.
 
  Thanks for all your help,
  Eshita
 
 
 
  On Tue, Nov 19, 2013 at 2:35 PM, Donna Dierker do...@brainvis.wustl.edu 
  wrote:
  Here are the caret_stats and jre zip files:
 
  http://brainvis.wustl.edu/pub/donna/SCRIPTS/caret6.zip
  http://brainvis.wustl.edu/pub/donna/SCRIPTS/linux_java.zip
  login pub
  password download
 
  A sample script that calls TFCE is here:
 
  http://brainmap.wustl.edu/pub/donna/SCRIPTS/SHAPE

Re: [caret-users] Try #2

2013-11-19 Thread Donna Dierker
Hi Eshita,

You don't need to create an average topo of your subjects, because your data is 
on the 164k fs_LR standard mesh, so the open topology files in the link I 
provided below are all you will need to define the neighbor relationships 
between the vertices.

You do need to make a decision or two, though:  The caret_command 
-metric-anova-one-way feature is a valid test, but it requires a 
cluster-forming threshold (e.g., whatever f-stat corresponds to p=.01 or 
p=.025/hem).  It can make a big difference which cluster-forming threshold you 
use, as is described here:

http://www.jneurosci.org/content/suppl/2010/02/12/30.6.2268.DC1/Supplemental_Material.pdf
page 6 and supplementary material figure 7

Instead, we now use Threshold-Free Cluster Enhancement (TFCE), which 
essentially integrates over the whole range of f-stats:

http://brainvis.wustl.edu/wiki/index.php/Caret:Documentation:Statistics:TFCE_Implementation

Smith SM, Nichols TE., Threshold-free cluster enhancement: addressing problems 
of smoothing, threshold dependence and localisation in cluster inference. 
Neuroimage. 2009 Jan 1;44(1):83-98. PMID: 18501637 
http://www.ncbi.nlm.nih.gov/pubmed/18501637

Using TFCE requires downloading caret_stats and the java runtime engine (JRE) 
that has been shown to work well with it.  (Some JREs hang or get bogged down.)

These features aren't documented in tutorials, but at least two others have 
managed to get it to work.

If you're fine with the caret_command feature, you should be good to go.
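
If you do go the caret_command route, one hedged note from memory: the 
-metric-anova-one-way operation wants the open topo, an average fiducial coord, 
a distortion shape file, your per-group composite metrics, and the 
cluster-forming threshold, but trust the usage text your own build prints over 
my recollection of the argument order:

   # print the usage for the operation before building the real call
   # (most caret_command operations print their parameter list when run bare)
   caret_command -metric-anova-one-way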

Donna


On Nov 19, 2013, at 12:26 PM, Eshita Shah eshs...@ucla.edu wrote:

 Hi Donna, 
 
 The above script was helpful, thanks. My main concern now is to run the ANOVA 
 test (using caret_command -metric-anova-one-way). You stated earlier that I 
 don't need to worry about the open topo file, but as input to the ANOVA, should 
 I be creating an average topo file of all my subjects? 
 
 Please let me know. Thank you for your patience and help. 
 
 Eshita 
 
 
 On Fri, Nov 15, 2013 at 3:50 PM, Donna Dierker donna.dier...@sbcglobal.net 
 wrote:
 Scroll down eshita
 
 From: mailer-dae...@yahoo.com mailer-dae...@yahoo.com; 
 To: donna.dier...@sbcglobal.net; 
 Subject: Failure Notice 
 Sent: Fri, Nov 15, 2013 10:41:08 PM 
 
 Sorry, we were unable to deliver your message to the following address.
 
 caret-users@brainvis.wustl.edu:
 No MX or A records for brainvis.wustl.edu
 
 --- Below this line is a copy of the message.
 
 Date: Fri, 15 Nov 2013 15:15:15 -0600
 From: Donna Dierker donna.dier...@sbcglobal.net
 To: Caret, SureFit, and SuMS software users caret-users@brainvis.wustl.edu
 Subject: Re: [caret-users] Different Node Numbers

[caret-users] Try #2

2013-11-15 Thread Donna Dierker
Scroll down eshita
___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Reconstruct into surface command line

2013-11-14 Thread Donna Dierker
I doubt it's what you want, but it's the closest I could find:

   SYNOPSIS for importing contour files
      -file-convert -ic  CONTOUR_TYPE  input-file-name
         caret-contour-file-name  [caret-contour-cell-file-name]

   DESCRIPTION for Importing Contour Files

      CONTOUR_TYPE    Type of Contour File

         MDPLOT    MDPLOT .mdo contour file
         NEURO     Neurolucida XML contour file

      The caret-contour-cell-file-name is optional.  If
      there are no contour cells, it is not written.

It just imports the contours -- doesn't reconstruct into surface, from my read.
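
For what it's worth, an import call based on that synopsis would look something 
like this (the file names are made up, and NEURO assumes a Neurolucida XML 
export):

   caret_command -file-convert -ic NEURO tracing.xml Case1.contours Case1.contour_cells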


On Nov 13, 2013, at 8:24 PM, Tristan Chaplin tristan.chap...@gmail.com wrote:

 Hi,
 
 Does the caret have a command line function to reconstruct a surface from 
 contours, equivalent to the GUI: Layers-Reconstruct into Surface? I can't 
 seem to find it.
 
 Cheers,
 Tristan
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Different Node Numbers

2013-11-13 Thread Donna Dierker
Hi Eshita,

I always use the open topology for this purpose (i.e., excludes only medial 
wall vertices).  The files here will be helpful:

http://brainmap.wustl.edu/pub/donna/FREESURFER/SCRIPTS/2013_11/TFCE_164k/
login pub
password download

You can get them all in this zip file:

http://brainmap.wustl.edu/pub/donna/FREESURFER/SCRIPTS/2013_11/TFCE_164k.zip
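
If it's easier, the zip can also be pulled from the shell; this assumes the 
same pub/download credentials apply to the zip and that the server accepts 
plain HTTP basic auth (some setups also need wget's --auth-no-challenge):

   wget --user=pub --password=download \
     http://brainmap.wustl.edu/pub/donna/FREESURFER/SCRIPTS/2013_11/TFCE_164k.zip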

Just to explain what is going on, the areas in the TFCE/cluster computations 
are computed on the Conte69 mean mid-thickness surface, with the open topo file 
(excluding the medial wall).  The distortion maps pump up the areal value where 
substantial smoothing occurs as a result of averaging individuals' coordinate 
files (e.g., high 3D variability).  The intent is to make the areas more like 
an individual's area would be in that region.  For folks not attuned to what 
you are doing: this happens during group analysis.

Cheers,

Donna


On Nov 12, 2013, at 3:18 PM, Eshita Shah eshs...@ucla.edu wrote:

 Hello, 
 
 Thanks for your input. I was able to successfully use the freesurfer_to_fs_LR 
 pipeline to import my FreeSurfer files into Caret; however, when I try running 
 ANOVA, it asks for certain files that have not been generated by the 
 pipeline. Specifically, how do I generate the distortion-metric-shape-file 
 that is being asked for? Lastly, is the .topo file that is generated via the 
 pipeline the open topo file or the closed one? Previously I was able to 
 generate the closed topo file, so I'm not sure if the freesurfer_to_fs_LR 
 pipeline does the same. The parameter required for the ANOVA analysis is the 
 open topo file. 
 
 Thank you, 
 Eshita Shah 
 
 
 On Thu, Nov 7, 2013 at 4:12 PM, Rouhollah Abdollahi roohy...@gmail.com 
 wrote:
 Hi
 Actually, the code imports the original data from FreeSurfer to Caret, so you 
 will automatically end up with different node numbers for different subjects 
 and hemispheres. To get the same mesh, you can use the Freesurfer_to_fs_LR 
 pipeline, which is available on the Caret website. It imports all the data 
 onto the same mesh, which here is the fs_LR mesh.
 Hope it helps
 Best
 Rouhi
 
 On Nov 8, 2013 12:06 AM, Eshita Shah eshs...@ucla.edu wrote:
 Hello, 
 
 I have just recently started using Caret, and I am running the 
 freesurfer2caret.sh script in order to import my FreeSurfer files into Caret 
 as well as generate sulcal depth for all subjects. I tried doing a One-Way 
 ANOVA test, but I've realized that the number of nodes in the 
 metric/surface_shape files for the two subjects is different. How is it 
 possible that the same script is creating files with different node numbers? 
 Also, within each subject, sometimes the node numbers for the left and right 
 hemisphere are different as well. How can I resolve this issue so I can 
 successfully run ANOVA on my subjects? 
 
 Any help would be appreciated. 
 
 Thank you, 
 Eshita Shah 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users
 
 
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Hi,

2013-11-08 Thread Donna Dierker
This could happen easily if you are using Caret5 and the same 
visualization specification for both hemispheres, i.e., if you have both 
metric files loaded but the same metric overlay is set to apply to All 
surfaces (see the top of the metric selection page on the D/C menu).


Are you mapping to PALS_B12 or fs_LR?  September 2006 tutorial has 
separate LEFT and RIGHT standard scenes visualization specifications.  I 
confess I still use them, to keep myself un-confused.


If you do have both the LEFT and RIGHT mapping metric columns loaded in 
the same session, try toggling from one to the other.  Are they still 
identical?  (Alternatively, you can press the H button on the D/C metric 
selection menu for each column, and see if they are identical.)  If they 
really are identical, then something went wrong during the mapping 
(target surface selection, possibly).


But let's start with these checks before building a whole tree of 
possibilities. ;-)



On 11/08/2013 10:33 AM, Longfei Su wrote:
   I have generated two-sample t-test results (spmT_0001.hdr/img) and 
tried to map them to the surface. The result was strange: the left and 
the right hemisphere appeared to be activated exactly the same, when in 
fact they were activated differently. How can I get the correct results 
when the left and the right hemisphere are activated differently?


--
Sincerely
Longfei Su
Ph. D. Candidate,
College of Mechatronics and Automation,
National University of Defense Technology,
Add: No.47 Yanwachizheng Street,
Changsha, Hunan, 410073, China, PR


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] thresholding issues in Caret

2013-10-25 Thread Donna Dierker
One quote from Poldrack et al. before I hop off the soapbox:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2287206/
Finally, while thresholds must account for the multiplicity of tests, we do 
encourage authors to make available unthresholded statistic and effect size 
images in order to display the whole range of effects in the data, including 
those that do not reach significance. These maps also make it easier to compare 
effect sizes across studies and increase the options for future meta-analyses.


On Oct 24, 2013, at 7:23 PM, Michael Cohen mcoh...@ucla.edu wrote:

 Thanks for the reply--this is an interesting suggestion, and it does seem 
 like one way to accomplish what I am trying to do.  However, it also seems 
 like a fairly time-consuming solution, especially given that we're just 
 trying to get a good visualization of data that we've already analyzed 
 elsewhere.  

It's not -- especially if you script it, which I could help with if you want at 
some point.  But I get that you want to get your results out sooner rather than 
later.

 So, I'm just curious to see whether you (or anybody else on the list) might 
 have insight into the original questions that I had asked?  Barring any 
 additional guidance, I think we may just use the interpolated voxel algorithm 
 on the cluster-thresholded FSL data, without any additional thresholding in 
 Caret.  But I just would like to make sure that this is a reasonable 
 approach--since if there's a setting or two that we should tweak to get an 
 image that more accurately represents the data, it would be great to know 
 that before we submit these figures for publication.

I think the only strict contraindication for interpolated voxel is cases 
analogous to where you'd use nearest-neighbor algorithms in volume-land (e.g., 
a label/ROI/parcel volume).

I think using interpolated voxel is fine, even with your thresholded image.  
Sure, you'll fade a bit at the edges, but if this is a concern, use enclosing 
voxel.  Whether it's a concern depends on the nature and extent of the mapped 
data.  In most cases, I doubt it will be a concern.

My take anyway.

 
 Thanks,
 Michael
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] thresholding issues in Caret

2013-10-24 Thread Donna Dierker
If I understand you, my inclination would be to map the unthresholded 
volume with interpolated voxel.


Then map the thresholded volume with enclosing voxel.  Then use Surface: 
ROI to select metric = mythresh and draw an areal border around the 
resulting cluster.  Then render that border over your unthresholded 
metric, to delimit the significant regions from the rest.



On 10/24/2013 11:27 AM, Michael Cohen wrote:

Hi,
I was wondering if you might be able to help clarify some uncertainty 
that I've had with thresholding images in Caret.


I am using thresholded volumetric images from FSL, and mapping them 
into Caret. I did see in an old post from the mailing list that this 
is a common approach, which was good to see.  But there are still a 
few things that I'm uncertain about:


-- Is it OK to use the interpolated voxel algorithm with 
pre-thresholded images?  It seems to me that this could be 
problematic, since it's averaging zeros in with the data.  But if I 
use the unthresholded images, I'd have to apply cluster correction in 
Caret, rather than having FSL do it...which introduces other 
complications.


-- Given that the interpolation yields a number of areas in which the 
actual z score in Caret is less than our original z threshold, is it 
advisable to re-threshold the image using Caret display options (under 
Metric Settings) to eliminate all activations under our original 
threshold (in this case, z = 2.3)?  That seems to show much less 
activity on the brain, which could be good or bad, I guess, but the 
more important question is whether it's a more or less faithful 
rendering of the original volumetric image, which I'm not sure about.


-- Is there some trick to setting a volume threshold when you import 
data to Caret using the Map volume to surface option?  I tried 
enabling the volume threshold, and entered a number to use as a 
threshold that was higher than the original threshold used on the 
image file.  But when it imported the image, that option didn't seem 
to do anything at all.  Is there something else that I need to do?


Thanks,
Michael

--
Michael S. Cohen, M.S., C. Phil
Ph.D. Candidate
Department of Psychology
University of California, Los Angeles
Los Angeles, CA 90095


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] The t value in the color bar of the MFM map

2013-10-11 Thread Donna Dierker
MFM, because it averages the intersection of the 12 PALS subjects, will appear 
more smoothed than AFM.  The range is likely to shrink.

Also, if you mean the range of your volume is wider than the range of either 
AFM or MFM, then that could mean some of the extrema in your volume are outside 
the cortical ribbon.

But you say original 2D map -- not sure what you mean.  Doesn't sound like a 
volume.

Just know that on Metric settings, you can adjust the user min/max to whatever 
you want.  That gap at zero is part of the palette you selected.  There is a 
version of that palette without the gap.  You can try other palettes, too -- 
see D/C: Metric Settings.  I often set the scale min/max to be equivalent, 
based on my degrees of freedom, to p=.025/hem or p=.01.  It's your call how to 
set the min/max.
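
If it helps, the critical value for a given p and degrees of freedom is easy to 
compute from the shell; this one-liner assumes python with scipy installed, and 
df=30 is just a placeholder for your own degrees of freedom:

   # one-tailed critical t for p=.025 with 30 degrees of freedom
   python -c "from scipy.stats import t; print(t.ppf(1 - 0.025, 30))"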


On Oct 10, 2013, at 10:31 PM, Yangmei Luo yangmei...@gmail.com wrote:

 Hi CARET experts,
 
 I would like to use CARET to present my two-sample t-test activation results. 
 I mapped the results using multi-fiducial mapping (MFM). However, I found that 
 the t value in the color bar of the 3D map produced by CARET is lower than in 
 my original 2D map. For example, the highest t value in the original 2D map is 
 4.3, but in the 3D MFM map it is 1.6. Why is the t value in CARET so much 
 lower? Also, I cannot figure out what the zero in the middle of the color bar 
 means.
 
 Screen Shot 2013-10-11 at 11.31.09 AM.png
 
 Thank you very much in advance! Any help is highly appreciated. 
 
 All the best,
 Yangmei Luo
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users


___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Missing Quick Start Files

2013-10-09 Thread Donna Dierker
Victor,

First, I want to make sure you want to learn to use the old caret5 program -- 
not the new Caret workbench that is used for Human Connectome Project.  If you 
want to view HCP data, you can download workbench here:

http://www.humanconnectome.org/connectome/connectome-workbench.html

But if you really do want caret5, then you might be better off with the 
September 2006 tutorial anyway:

CARET_TUTORIAL_SEPT06.zip
http://sumsdb.wustl.edu/sums/archivelist.do?archive_id=6595030
Caret_Tutorial_Oct06.pdf
http://sumsdb.wustl.edu/sums/archivelist.do?archive_id=6602379

It takes longer, but you may not need everything.  Section 5.1 covers the most 
popular feature.  I don't think either covers less used features like editing 
segmentations.

As for the quick start tutorial, I get the same result when logged in as donna, 
so Van Essen Lab permissions don't solve the problem.

The spec file says this:

volume_functional_file BURTON_04_VibroTactile_SIGHTED+orig.HEAD 
BURTON_04_VibroTactile_SIGHTED+orig.BRIK.gz
volume_anatomy_file MeanBuckner12_AFNI+tlrc.HEAD .
CLOSEDtopo_file Human.sphere_6.RIGHT_HEM.73730.topo .
OPENtopo_file Human.sphere_6.OPEN.73730.topo .
CUTtopo_file Human.PALS_B12.RIGHT.CUTS.CartSTD.73730.topo .
FIDUCIALcoord_file 
Human.PALS_B12.RIGHT_AVG_B1-12.FIDUCIAL_AFNI.clean.73730.coord .
INFLATEDcoord_file Human.PALS_B12.RIGHT_AVG_B1-12.INFLATED.clean.73730.coord .
VERY_INFLATEDcoord_file 
Human.PALS_B12.RIGHT_AVG_B1-12.VERY_INFLATED.clean.73730.coord .
FLATcoord_file Human.PALS_B12.RIGHT.FLAT.CartSTD.73730.coord .
scene_file Human.PALS-B12.R.MCW.scene .
paint_file Human.PALS_B12.BOTH.COMPOSITE_Quickstart.73730.paint .
area_color_file Human.Cerebral.COMPOSITE_OrbFrontDistinct_WS_MWblack.areacolor .
border_color_file Human.Cerebral.COMPOSITE.bordercolor .
borderproj_file PALS_B12.LR.BOTH.COMPOSITE_AREAS.73730.borderproj .
borderproj_file PALS_B12.LR.RIGHT.MODALITIES.73730.borderproj .
surface_shape_file 
Human.PALS_B12.LR.B_1-12.BOTH-DEPTHnr_AVG_StdDev_3D-Variability.73730.surface_shape
 .
foci_color_file JN_97-03.focicolor .
fociproj_file Human.PALS-B12.BOTH.JN97_03_With-Classes.73730.fociproj .
study_metadata_file Human.PALS-B12.LR.FOCI_JN97-03.study .
vocabulary_file Human.CorticalAreas.vocabulary .
vocabulary_file Human.Geography.vocabulary .

Donna


On Oct 9, 2013, at 10:09 AM, Donna Dierker do...@brainvis.wustl.edu wrote:

 You're right.  When I download this zip file:
 
 CARET_QUICK_START_JUL2011
 http://sumsdb.wustl.edu/sums/directory.do?id=8286536
 
 ... 
 
 These files are in 
 CARET_QUICK-START_PALS_B12.RIGHT.73730/__MACOSX/CARET_QUICK_START_2011/CARET_QUICK-START_PALS_B12.RIGHT.73730:
 
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._BURTON_04_VibroTactile_SIGHTED+orig.BRIK.gz
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._BURTON_04_VibroTactile_SIGHTED+orig.HEAD
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._CARET_QUICK-START_PALS_B12.RIGHT.73730.spec
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._Human.Cerebral.COMPOSITE.bordercolor
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._Human.Cerebral.COMPOSITE_OrbFrontDistinct_WS_MWblack.areacolor
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 ._Human.CorticalAreas.vocabulary
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 ._Human.Geography.vocabulary
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._Human.PALS-B12.BOTH.JN97_03_With-Classes.73730.fociproj
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._Human.PALS-B12.LR.FOCI_JN97-03.study
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 ._Human.PALS-B12.R.MCW.scene
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._Human.PALS_B12.BOTH.COMPOSITE_Quickstart.73730.paint
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._Human.PALS_B12.LR.B_1-12.BOTH-DEPTHnr_AVG_StdDev_3D-Variability.73730.surface_shape
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._Human.PALS_B12.RIGHT.CUTS.CartSTD.73730.topo
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._Human.PALS_B12.RIGHT.FLAT.CartSTD.73730.coord
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._Human.PALS_B12.RIGHT_AVG_B1-12.INFLATED.clean.73730.coord
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._Human.PALS_B12.RIGHT_AVG_B1-12.VERY_INFLATED.clean.73730.coord
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 ._Human.sphere_6.OPEN.73730.topo
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._Human.sphere_6.RIGHT_HEM.73730.topo
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 ._JN_97-03.focicolor
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 ._MeanBuckner12_AFNI+tlrc.BRIK.gz
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 ._MeanBuckner12_AFNI+tlrc.HEAD
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._PALS_B12.LR.BOTH.COMPOSITE_AREAS.73730.borderproj
 -rw-r--r-- 1 donna velab  240 Aug  7  2011 
 ._PALS_B12.LR.RIGHT.MODALITIES.73730.borderproj
 
 But these are not the real files (all file sizes are 240 bytes).  They are the 
 AppleDouble ._ metadata files that Mac OS X adds when creating a zip archive.
 
 Doing a find on the extracted directory tree shows little besides this and 
 nested subdirs.
 
 I wonder if the permissions on these files changed

Re: [caret-users] Voxel editing slowdown?

2013-10-04 Thread Donna Dierker
Another note from Nicky Daniels that might help someone else:  "We want to 
upgrade a couple of our computers and were testing different graphics cards to 
check if they would solve the issue. The Nvidia card described by other users 
worked fine, as well as the AMD model C326 and B304."


On Aug 12, 2013, at 6:57 AM, Schellekens, W. w.schellek...@umcutrecht.nl 
wrote:

 Hi,
 
 Sorry, I should have mentioned this in my previous e-mail, but I am running 
 the most recent drivers of the HD4000 graphics card. Furthermore, I'm also 
 running Ubuntu 13.04 (x64) on my current system (dual boot, so no virtual box 
 or anything), and there I experience the exact same thing (maybe even worse). 
 When using the HD4000 card, Caret v5.65 just hangs as soon as I click on the 
 surface or edit voxels. However, when I switch to the Nvidia card, Caret runs 
 fine again. So in my case, the problem occurs both at windows 7 and Ubuntu 
 13.04. Let me know, if I can be of any further help.
 Cheers,
 
 Wouter Schellekens
 --
 Brain Center Rudolf Magnus
 Neurology  Neurosurgery, Brain Division, UMC Utrecht
 Postal address: Str. 4.205, Postbus 85060, 3508 AB, Utrecht
 Visit: Str. 4.115, Universiteitsweg 100, 3584 CG, Utrecht
 E-mail: w.schellek...@umcutrecht.nl
 
 
 Van: caret-users-boun...@brainvis.wustl.edu 
 [caret-users-boun...@brainvis.wustl.edu] namens Donna Dierker 
 [do...@brainvis.wustl.edu]
 Verzonden: vrijdag 9 augustus 2013 16:58
 Aan: Caret, SureFit, and SuMS software users
 Onderwerp: Re: [caret-users] Voxel editing slowdown?
 
 There have been some off-list messages on this thread, and I wanted to share 
 this:
 
 Tim Coalson wrote:
 ... it seems that the previous generation on-CPU graphics didn't have this 
 problem.  The newer generation having this problem with the same code seems 
 like the driver might be the culprit - ensure you have the latest graphics 
 drivers.  If it still occurs, perhaps we could contact intel and let them 
 know of the problem.  They might just brush it off (caret5 is using an old 
 version of openGL), but you never know.
 
 Jon Schindler wrote:
 As Tim says, one factor is that they are underpowered compared to discrete 
 graphics cards.  However, another factor is that since Intel's attempt to 
 make a GPU is relatively recent, and they aren't trying to support high-end 
 legacy applications (professional OpenGL applications), they may not have 
 optimized the version of OpenGL that we use, which is fairly outdated, and 
 used primarily in expensive legacy OpenGL applications.  Most consumer 
 OpenGL apps, which are Intel's target market, tend not to require extremely 
 fast OpenGL 2.x support (as they've moved on to OpenGL 3.x+), so we may be 
 witnessing some slowdowns due to the version of OpenGL we are using.
 
 
 Make sure your graphics card driver is up-to-date.  It would be nice to know 
 whether that helps any of the three we know experiencing this problem.  
 (Also, whether anyone is seeing it on a platform other than windows.)
 
 
 On Aug 7, 2013, at 11:45 AM, Donna Dierker do...@brainvis.wustl.edu wrote:
 
 I have emailed nicky, the first one who reported this issue, to see what 
 graphics card she was using when she ran into this.
 
 Hopefully Steven is still tuned into this thread.
 
 This information is most helpful, Wouter.  Thank you much for posting these 
 details.
 
 
 On Aug 6, 2013, at 12:51 PM, Schellekens, W. w.schellek...@umcutrecht.nl 
 wrote:
 
 Hi,
 
 I'd like to contribute to this topic, although I'm not sure of what use 
 it'll be...
 
 I'm experiencing the slowdown as well on a new windows 7 64bit machine. But 
 I may have found a reason why.
 
 I used to be able to use Caret v5.65 on my older windows 7 64bit laptop 
 just fine. The main difference between that older laptop and my current one 
 is the processor. The older one was a 2nd gen intel i3, and the current one 
 is a 3rd gen intel i7. Besides having a higher clock rate now, something 
 else has changed as well: the onboard graphics processor went from HD3000 to 
 HD4000. In addition to the onboard intel graphics chip, I also have a 
 low-end Nvidia Geforce 610M graphics chip, whose performance is quite 
 similar to the HD4000.
 
 Now, I still have the exact copy of Caret v5.65 I installed on the 2nd gen 
 i3, and installed that one on my current system. When I'm using the HD4000 
 graphics chip, Caret runs horrendously slow (editing voxels, node IDs, 
 right mouse button, etc.). However, when I change to the Nvidia chip, Caret 
 runs fine. My best guess is that Caret and the HD4000 chip don't play nice 
 together.
 
 It would be interesting to see, if the other guys that experience slowdowns 
 are using the HD4000 chip as well.
 Hopefully, this has helped someone.
 Cheers,
 
 Wouter Schellekens

Re: [caret-users] error: A metric file must be provided for metric mapping

2013-08-31 Thread Donna Dierker
Weird.  I'm curious:  Have you tried loading another, existing metric file 
before mapping (e.g., one that comes with the tutorial, or one you have mapped 
previously)?  It shouldn't matter, but I have seen other features under 
Surface: Measurements fail, if an existing surface shape file was not already 
loaded (e.g., like it needed some sort of priming column to exist).

This is not what the error implies, and it should not be necessary.  But I have 
come across this behavior myself, and I don't understand what triggers it.  I'm 
just suggesting easy things to try, in hopes that a quick-fix is found.

The only other suggestion that comes to mind is trying options similar to the 
one you want.  For example, mapping to Caret, rather than to a spec with atlas, 
may not be as easy, but it might be more reliable.  You can find the SPM target 
surface in /usr/local/caret/data_files/fmri_mapping_files/*AVG*SPM*coord.  Load 
it in caret; copy to your directory; and map directly.  Save the metric.  See 
if you can get it to work that way, because mapping to a spec file writes 
directly to the metric file, while mapping to Caret just stores it in memory; 
you have to explicitly do File: Save As metric to save the results.  More work, 
but perhaps fewer obstacles. ;-)
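
For example, to find the target coord under a default install (the path assumes 
the standard /usr/local/caret location mentioned above):

   ls /usr/local/caret/data_files/fmri_mapping_files/*AVG*SPM*coord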


On Aug 31, 2013, at 8:02 AM, tony han wrote:

 Hi, I'm trying to overlay t maps created by SPM. I've gone through this many 
 times without error. However, today I get the error: A metric file must be 
 provided for metric mapping. It's pretty weird. I searched through the archive 
 and found that the only person who mentioned the same problem solved it by 
 switching to an English character set. Actually, I change the setting to 
 English each time I use Caret, or I can't even load spec files. Now I 
 succeed in loading spec files. And when I try the other option in Map Volumes 
 to Surfaces: Paint (ROI) or Probabilistic Atlas Data, it works. Could you 
 give me any suggestions on a possible solution? Thanks a lot!
 
 Tony
 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users

___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users

