Re: [caret-users] Hi res recon

2014-10-18 Thread Dr. Aditya Tri Hernowo, Ph.D
Dear Donna, Colin, and David,

Thank you all for the input. Turning off automatic error correction helped significantly in 
cutting down processing time. I managed to reconstruct with some additional manual 
labour, and it worked reasonably well. The topological errors were not that many, 
especially since we're only interested in a certain ROI.

Thanks again everyone.

Aditya



[caret-users] Hi res recon

2014-10-14 Thread Dr. Aditya Tri Hernowo, Ph.D
Dear users & experts,

Does anyone have any experience with reconstructing the cortex on 0.5mm 
resolution T1 images? I am still having problems with the very long time it 
takes to perform automatic error correction (more than 3 hours before the 
software finally crashed).

Regards,

Aditya Hernowo



Re: [caret-users] Hi res recon

2014-10-14 Thread Donna Dierker
Hi Aditya,

On monkeys, yes.  Humans, no.  The SureFit algorithm in Caret's 
segmentation feature was designed for 1mm cubic human data.  It worked 
reasonably well on higher-res monkey data, but some of the subroutines will 
likely break on higher-res human data (e.g., disconnecting the eye, skull, and 
hindbrain).

I'd turn all error correction features off and sanity check the initial 
segmentation.  If the skull, eye, or hindbrain is still connected, then 
resolving that issue should precede the error correction steps.  Unfortunately, 
that will likely take some work.

Donna
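
One way to run the sanity check above is to count connected components in the 
initial segmentation volume: a clean single-hemisphere segmentation should be 
dominated by one component of plausible size, while an attached eye, skull, or 
hindbrain shows up as extra bulk fused to it.  A minimal sketch with nibabel 
and scipy (the file name is a placeholder):

    import nibabel as nib
    import numpy as np
    from scipy import ndimage

    # Load the initial segmentation and binarize it (placeholder file name).
    seg = nib.load("Segment_initial.nii.gz")
    mask = np.asarray(seg.get_fdata()) > 0

    # Label 26-connected components and report the largest ones.
    labels, n = ndimage.label(mask, structure=np.ones((3, 3, 3)))
    sizes = np.bincount(labels.ravel())[1:]
    print(n, "components; largest:", sorted(sizes, reverse=True)[:5])

    # One component far larger than a hemisphere's plausible voxel count
    # suggests the eye, skull, or hindbrain is still attached.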




Re: [caret-users] Hi res recon

2014-10-14 Thread David Van Essen
Alternatively, you may find that FreeSurfer will work better for your current 
needs.  Matt Glasser and others have gotten it to work reasonably well on 
macaque structural images.

David



[caret-users] Hi,

2013-11-09 Thread 苏龙飞
How do I make the surface transparent? Any advice on drawing ROIs and
connections on a transparent brain surface would be very helpful. In fact, I
want to plot the kind of brain map used in the paper "Prediction of individual
brain maturity using fMRI"
(http://www.sciencemag.org/content/329/5997/1358.short).
Has anybody done this? Thank you very much.

-- 
Sincerely
LongfeiSu
Ph. D. Candidate,
College of Mechatronics and Automation,
National University of Defense Technology,
Add: No.47 Yanwachizheng Street,
Changsha, Hunan, 410073, China, PR


[caret-users] Hi,

2013-11-08 Thread 苏龙飞
I have generated two-sample t-test results (spmT_0001.hdr/img) and
tried to map them to the surface. The result was strange: the left and the
right hemisphere came out with exactly the same activation, when in fact they
were activated differently. How can I get the correct results when the left
and the right hemisphere are activated differently?

-- 
Sincerely
LongfeiSu
Ph. D. Candidate,
College of Mechatronics and Automation,
National University of Defense Technology,
Add: No.47 Yanwachizheng Street,
Changsha, Hunan, 410073, China, PR


Re: [caret-users] Hi,

2013-11-08 Thread Donna Dierker
This could happen easily if you are using Caret5 and the same 
visualization specification for both hemispheres: you have both 
metric files loaded, but the same metric overlay is set to apply to All 
surfaces (see the top of the metric selection page on the D/C menu).


Are you mapping to PALS_B12 or fs_LR?  The September 2006 tutorial has 
separate LEFT and RIGHT standard scenes visualization specifications.  I 
confess I still use them, to keep myself un-confused.


If you do have both the LEFT and RIGHT mapping metric columns loaded in 
the same session, try toggling from one to the other.  Are they still 
identical?  (Alternatively, you can press the H button on the D/C metric 
selection menu for each column, and see if they are identical.)  If they 
really are identical, then something went wrong during the mapping 
(target surface selection, possibly).


But let's start with these checks before building a whole tree of 
possibilities. ;-)
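
A quick way to rule out the volume itself before chasing the overlay settings: 
load the spmT image and test whether it is actually mirror-symmetric.  A rough 
sketch with nibabel (it assumes the first voxel axis is left-right, which you 
should verify against the header orientation):

    import nibabel as nib
    import numpy as np

    # nibabel reads the Analyze .hdr/.img pair from either file name.
    img = nib.load("spmT_0001.img")
    data = np.asarray(img.get_fdata())

    # Flip along the first axis and compare: if the stats volume itself
    # were mirror-symmetric, no mapping could recover a hemispheric
    # difference, and the problem is upstream of Caret.
    flipped = data[::-1, :, :]
    print("mirror-symmetric:", np.allclose(data, flipped))
    print("max |left - right| difference:", np.abs(data - flipped).max())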





[caret-users] hi res macaque

2011-10-25 Thread Colin Reveley
Hello.

I have macaque data that is 0.25mm. I like that. I can do things with it
that are more than cosmetic.

The data was taken with a fancy Bruker, and the contrast from the sequence used
is very good. So good that I wonder if it's a problem (it's a FLASH_MTR; it
correlates to T1 really closely, but the GM-WM contrast is clearer, and
there may be differences).
Thus far, I've been downsampling to 0.5 to make CARET surfaces.

I'm beginning to suspect that, for my project, there is profit in a surface
made at 0.25, with many nodes.

What I am interested in is the really quite small region (in absolute terms)
that was the subject of the paper by Lewis and Van Essen in 2000.

Even though the F99 atlas does not have 300,000 nodes, the paint, border, and
metric data are scalable, and my own data would indeed support a hi-res
surface, and benefit from it.

I've got RAM.

But I never managed to get far with 0.25.

Segmentation fails at the hindbrain step at any voxel size below 0.5.

I didn't mind. But now I think (I really do) I have a good reason to seek
surface construction directly from my structural data at 0.25mm.

So: is it possible? caret_command ... -res=X ?

My data is ex-vivo, and probably no more than 1% of non-GM or non-WM voxels
are nonzero. No ventricles, nothing; I did that myself. A mistake, maybe.

If I segment at 0.5, upsample to 0.25, and generate a surface with my data, it
works. CARET can make the surface.

But segmentation at 0.25 does not work.

I'd appreciate any help.

Colin
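
The upsampling step in the workaround above (segment at 0.5, upsample the mask 
to 0.25, then build the surface) can be scripted outside Caret.  A sketch with 
nibabel and scipy; the file names are hypothetical, nearest-neighbour 
interpolation keeps the mask binary, and the half-voxel origin shift is 
ignored:

    import nibabel as nib
    import numpy as np
    from scipy.ndimage import zoom

    seg = nib.load("segmentation_0p5mm.nii.gz")   # hypothetical file name
    up = zoom(seg.get_fdata(), 2, order=0)        # order=0: nearest neighbour

    # Halve the voxel-size part of the affine: 0.5 mm edges become 0.25 mm.
    affine = seg.affine.copy()
    affine[:3, :3] /= 2.0
    nib.save(nib.Nifti1Image(up.astype(np.uint8), affine),
             "segmentation_0p25mm.nii.gz")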


Re: [caret-users] hi res macaque

2011-10-25 Thread wolf zinke

Hi,

I had a similar question a while ago that relates to your issues. 
At that time there was no 64-bit version of Caret available, and hence I 
ran into memory trouble with a high-resolution monkey file (do you use 
the 64-bit version of Caret?).


However, Donna Dierker gave me some pointers on why it would be 
problematic to use a voxel size different from 0.5 mm. I am not sure if 
this is still true, but that might be the reason for your problem. I 
also had the impression that a resolution of 0.25 mm would be very 
beneficial for the segmentation.


I hope that this information helps,
wolf


Hi,

Thanks for the clarification. Currently I am running Caret at 0.5 
mm resolution, and overall it gives good results, but it fails in 
occipital regions. However, with your information I have a good reason 
to stick to the 0.5 mm resolution - it makes the manual correction faster 
anyway.


thanks for the reply,
wolf



On 01/28/2010 04:03 PM, Donna Dierker wrote:
Setting aside the memory/64-bit question, there are assumptions built 
into the SureFit algorithm that expect monkey voxel dimensions around 
0.5-0.75mm cubed.  For example, there are routines in the hindbrain 
removal that are based on the number of *slices* from the AC, and if you 
double the resolution, those will be off by a factor of two.  In 
short, it will fail.


Try downsampling to 0.5 and making sure you crop to left and right 
hems.  If the problems persist, upload your anatomical volume here:


http://pulvinar.wustl.edu/cgi-bin/upload.cgi
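
The downsampling can be scripted with the same tools, just in the other 
direction; trilinear interpolation is the usual choice for intensity data 
(file names are placeholders, and the half-voxel origin shift is again 
ignored):

    import nibabel as nib
    from scipy.ndimage import zoom

    src = nib.load("macaque_0p25mm.nii.gz")     # placeholder file name
    down = zoom(src.get_fdata(), 0.5, order=1)  # order=1: trilinear

    # Double the voxel-size part of the affine: 0.25 mm edges become 0.5 mm.
    affine = src.affine.copy()
    affine[:3, :3] *= 2.0
    nib.save(nib.Nifti1Image(down, affine), "macaque_0p5mm.nii.gz")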

On 01/28/2010 07:30 AM, wolf zinke wrote:

Hi,

Sorry that I did not reply directly to this thread, but I did not 
find an option to do so.


Is there a reason why Caret is not built for 64-bit systems? I tried 
to run a segmentation on macaque data with 0.25 mm voxel size, hoping 
to get better results due to the resolution. However, Caret threw an 
error about insufficient memory, which first puzzled me since the PC 
has 32GB. But then I realized that, being a 32-bit application, Caret 
cannot address more than 4GB of the RAM, right?


cheers,
wolf
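
Wolf's diagnosis in numbers: a 32-bit process can address at most 4 GiB no 
matter how much RAM is installed, and halving the voxel size multiplies the 
voxel count by eight.  A back-of-envelope check (the matrix size is 
illustrative, not taken from the actual data):

    # Illustrative whole-head matrix at 0.25 mm; not the real acquisition.
    dims = (320, 320, 256)
    voxels = dims[0] * dims[1] * dims[2]

    one_copy_mib = voxels * 4 / 2**20          # one float32 working copy
    print(f"0.25 mm, one copy:  {one_copy_mib:.0f} MiB")
    print(f"0.50 mm, one copy:  {one_copy_mib / 8:.0f} MiB")

    # A pipeline holding many intermediate volumes approaches the 4 GiB
    # ceiling of a 32-bit address space long before 32 GB of RAM matters.
    print(f"0.25 mm, 30 copies: {30 * one_copy_mib / 2**10:.1f} GiB")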



On 25/10/11 08:30, Colin Reveley wrote:
I wonder if the bits of spine and the affine to the (deskulled, 
upsampled) F99 are an issue.


I don't recall, but it's hugely likely I tried with rigid body too, 
i.e. without the bit cut off at bottom.




Re: [caret-users] hi res macaque

2011-10-25 Thread Donna Dierker
Sorry I don't have time to read this more carefully, but I'm swamped this week.

But the quick scan leads me to believe this is an issue with you trying to 
segment monkey data at a resolution finer than 0.5mm.

The problem is that some of the routines (especially eye/skull/hindbrain 
removal) depend on how many slices away from the AC something is.  If the 
number of slices is twice what a routine expects, it won't work.  You can turn 
off eye/skull removal, but as you already know, de-checking hindbrain just 
makes it fail.
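
Donna's point in code form: a routine that hard-codes a slice count implicitly 
assumes a voxel size, so the same anatomical distance spans twice as many 
slices at 0.25 mm as at 0.5 mm.  The 12 mm offset below is made up for 
illustration:

    # Convert an anatomical offset (mm from the AC) into a slice count.
    def mm_to_slices(offset_mm, voxel_mm):
        return round(offset_mm / voxel_mm)

    offset_mm = 12.0                      # hypothetical removal offset
    print(mm_to_slices(offset_mm, 0.5))   # 24 slices at 0.5 mm
    print(mm_to_slices(offset_mm, 0.25))  # 48 slices at 0.25 mm: 2x off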

Caret's segmentation has its limits, and this is now where our development 
effort is focused these days.  Sorry.

I know some people have gotten Freesurfer to segment monkey data, but I suspect 
there are tricks/tweaks, and I do not know them.  I don't know how the 
talairach.xfm stuff (and that which depends on it) works, for example.  
Obviously MNI305 won't work as a target.



Re: [caret-users] hi res macaque

2011-10-25 Thread Donna Dierker
"this is now where our development effort is focused these days" should have 
read "this is NOT where our development effort is focused these days".
