[caret-users] Reconstruct into surface command line

2013-11-13 Thread Tristan Chaplin
Hi,

Does Caret have a command-line function to reconstruct a surface from
contours, equivalent to the GUI Layers - Reconstruct into Surface? I can't
seem to find it.

Cheers,
Tristan
___
caret-users mailing list
caret-users@brainvis.wustl.edu
http://brainvis.wustl.edu/mailman/listinfo/caret-users


Re: [caret-users] Volume segmentation error: RadialPositionMap

2013-07-29 Thread Tristan Chaplin
Hi Donna,

Thanks, but that -volume-segment command is what I tried in the first email,
and it produces the error I mentioned. It appears there is no command-line
equivalent for the GUI Volume - Segmentation - Reconstruct into surface;
I'll just have to work around it for now.

Cheers,
Tristan

On Mon, Jul 29, 2013 at 11:58 PM, Donna Dierker do...@brainvis.wustl.edu wrote:

 Yes, I think it does require a surface as input.  You can generate it in
 the GUI, like you cite below, or you can see if you can get this to work,
 toggling off 1-6 and toggling on what you need in 7-17.  I never used this
 much, and I'm not sure it will work in your case, but the fact that it
 takes an input segmentation volume is encouraging.

   caret_command -volume-segment
  input-anatomy-volume-file-name
  input-segmentation-volume-file-name
  spec-file-name
  operation-code
  gray-peak
  white-peak
  padding-code
  structure
  error-correction-method
  write-volume-type

  Perform segmentation operations.

Operation_Code characters
   Specify each with either a Y or N.
   All characters must be specified.
   Character   Operation Description
   -   -
   1   Disconnect Eye and Skull
   2   Disconnect Hindbrain
   3   Use High Threshold for Hindbrain disconnection
   4   Cut Corpus Callossum
   5   Generate Segmentation
   6   Fill Ventricles
   7   Generate Raw and Fiducial Surfaces
   8   Reduce polygons in surfaces
   9   Correct topological errors in surfaces
  10   Generate Inflated Surface
  11   Generate Very Inflated Surface
  12   Generate Ellipsoid Surface (For Flattening)
  13   Generate Spherical Surface
  14   Generate Comp Med Wall Surface
  15   Generate Hull Surface
  16   Generate Curvature, Depth, and Paint Attributes
  17   Generate Registration and Flattening Landmark Borders

gray-peak  specifies the intensity of the gray matter peak in the
   anatomy volume.

white-peak  specifies the intensity of the white matter peak in the
   anatomy volume.

padding-code
   Specify padding for any cut faces when segmenting a partial hemisphere.
   Specify each with either a Y for padding or N for no padding.
   All characters must be specified.
   Character   Padding Description
   -   ---
   1   Pad Negative X
   2   Pad Positive X
   3   Pad Posterior Y
   4   Pad Anterior Y
   5   Pad Inferior Z
   6   Pad Superior Z

structure  Specifies the brain structure.
Acceptable values are RIGHT or LEFT

spec-file-name  Name of specification file.

input-anatomy-volume-file-name
   If there is not an anatomy volume file, leave this
   item blank (two consecutive double quotes).

input-segmentation-volume-file-name
   If there is not a segmentation volume file, leave this
   item blank (two consecutive double quotes).

error-correction-method
   NONE
   GRAPH
   SUREFIT
   SUREFIT_THEN_GRAPH
   GRAPH_THEN_SUREFIT

write-volume-type   Type of volume files to write.
   Specifies the type of the volume files that will be written
   during the segmentation process.  Valid values are:
  AFNI
  NIFTI
  NIFTI_GZIP   (RECOMMENDED)
  SPM
  WUNIL

All input volumes must be in a Left-Posterior-Inferior orientation
and their stereotaxic coordinates must be set so that the origin is
at the anterior commissure.
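A sketch of the invocation Donna describes, with operations 1-6 toggled off and 7-17 on. The file names, peak intensities, and error-correction method below are placeholders rather than values from this thread, and the command is echoed instead of executed so it can be reviewed before running:

```shell
# Operation code: 17 characters, one per operation in the table above.
# Characters 1-6 (segmentation steps) are N; 7-17 (surface generation) are Y.
opcode=NNNNNNYYYYYYYYYYY
padding=NNNNNN      # full hemisphere: no cut faces, so no padding
graypeak=100        # placeholder gray-matter peak intensity
whitepeak=150       # placeholder white-matter peak intensity

# The first argument is the anatomy volume; per the help text, pass two
# consecutive double quotes when there is none.  Echoed, not executed.
echo caret_command -volume-segment '""' seg.nii Other.Case.L.spec \
    "$opcode" "$graypeak" "$whitepeak" "$padding" \
    LEFT SUREFIT_THEN_GRAPH NIFTI_GZIP
```

The peak intensities matter only when a segmentation is being generated (operation 5), so with 1-6 off they are effectively ignored here.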


 On Jul 28, 2013, at 10:31 PM, Tristan Chaplin tristan.chap...@gmail.com
 wrote:

  But doesn't this require that you already have a surface? I'm trying to
 create a surface from a segmentation volume, in the same way as the GUI
 operation Volume - Segmentation - Reconstruct into surface.
 
  On Fri, Jul 26, 2013 at 2:00 AM, Donna Dierker 
 donna.dier

Re: [caret-users] Volume segmentation error: RadialPositionMap

2013-07-28 Thread Tristan Chaplin
But doesn't this require that you already have a surface? I'm trying to
create a surface from a segmentation volume, in the same way as the GUI
operation Volume - Segmentation - Reconstruct into surface.

On Fri, Jul 26, 2013 at 2:00 AM, Donna Dierker
donna.dier...@sbcglobal.net wrote:

 Sounds like you have a segmentation already, so don't use -volume-segment.

 Try something like this instead:

   caret_command -surface-identify-sulci $SPECFNAME $HEM $SEGVOL $TOPO
 $FIDUCIAL $FIDUCIAL

 ---
   caret_command -surface-identify-sulci
  spec-file-name
  structure
  segmentation-volume-file-name
  closed-topology-file-name
  raw-coordinate-file-name
  fiducial-coordinate-file-name
  volume-write-type

  Identify Sulci with shape and paint.

  Create a surface shape file containing depth and curvature measurements,
  a paint file identifying the sulci, and an area color file.  If there
  is no raw coordinate file, specify the fiducial coordinate file instead.

  NOTE: This command MUST be run in the directory containing the files.

  structure  Specifies the brain structure.
 Acceptable values are RIGHT or LEFT

  write-volume-type   Type of volume files to write.
 Specifies the type of the volume files that will be written
  during the segmentation process.  Valid values are:
 AFNI
 NIFTI
 NIFTI_GZIP (RECOMMENDED)
 SPM
 WUNIL
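Filling in the variables from Donna's one-liner above with hypothetical file names: the topo and coord names are assumptions, and the trailing write-volume-type argument comes from the help text (her example omits it). Echoed rather than run, since per the NOTE it must be executed in the directory containing the files:

```shell
# Hypothetical stand-ins for Donna's shell variables; substitute your own.
SPECFNAME=Other.Case.L.spec
HEM=LEFT
SEGVOL=seg.nii
TOPO=Case.L.closed.topo           # closed-topology-file-name (assumed name)
FIDUCIAL=Case.L.fiducial.coord    # passed twice: with no raw coordinate
                                  # file, the fiducial file stands in for both

echo caret_command -surface-identify-sulci "$SPECFNAME" "$HEM" "$SEGVOL" \
    "$TOPO" "$FIDUCIAL" "$FIDUCIAL" NIFTI_GZIP
```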


 On Jul 24, 2013, at 9:14 PM, Tristan Chaplin tristan.chap...@gmail.com
 wrote:

  Hi,
 
  I am trying to create a surface from a segmentation volume using the
 command line. The anatomy volume was not segmented with Caret. When I do
 this:
 
  caret_command -volume-segment  seg.nii Other.Case.L.spec
 NNYYN 1 0 NN LEFT SUREFIT_THEN_GRAPH NIFTI_GZIP
 
  I get:
 
  VOLUME SEGMENTATION ERROR: Unable to find volume file
 RadialPositionMap+orig.*
 
  I also get the same error when using the GUI, Volume - Segmentation
 Operations (Surefit) ...
 
  But if I use the GUI and use Volume - Segmentation - Reconstruct into
 surface, it works.
 
  I'm using Caret 5.65 on OSX 10.8.4.  Any ideas why this is happening?
 
  Cheers,
  Tristan
 
 


[caret-users] Volume segmentation error: RadialPositionMap

2013-07-24 Thread Tristan Chaplin
Hi,

I am trying to create a surface from a segmentation volume using the
command line. The anatomy volume was not segmented with Caret. When I do
this:

caret_command -volume-segment  seg.nii Other.Case.L.spec
NNYYN 1 0 NN LEFT SUREFIT_THEN_GRAPH NIFTI_GZIP

I get:

VOLUME SEGMENTATION ERROR: Unable to find volume file
RadialPositionMap+orig.*

I also get the same error when using the GUI, Volume - Segmentation
Operations (Surefit) ...

But if I use the GUI and use Volume - Segmentation - Reconstruct into
surface, it works.

I'm using Caret 5.65 on OSX 10.8.4.  Any ideas why this is happening?

Cheers,
Tristan


Re: [caret-users] Body weight of the F99UA1 case

2013-06-23 Thread Tristan Chaplin
Thanks David and Colin, I'll look into the references you provided.

Regards,
Tristan

On Sun, Jun 23, 2013 at 2:41 PM, David Van Essen vanes...@wustl.edu wrote:

 Tristan,

 I was unable to find the weight from Nikos Logothetis, from whose lab it
 was scanned.  However, you can get information on F99 brain size relative
 to the brain sizes of the McLaren et al. 112RM-SL atlas in Table 1 from:
 Cortical parcellations of the macaque monkey analyzed on surface-based
 atlases. http://www.ncbi.nlm.nih.gov/pubmed/22052704
 *Van Essen DC*, Glasser MF, Dierker DL, Harwell J.
 Cereb Cortex. 2012 Oct;22(10):2227-40. doi: 10.1093/cercor/bhr290. Epub
 2011 Nov 2.
  PMID:  22052704

 David

 On Jun 17, 2013, at 7:34 PM, Tristan Chaplin wrote:

 Hi,

 Does anyone know the weight of the F99UA1 macaque? I can't seem to find it
 anywhere.

 Thanks,
 Tristan


Re: [caret-users] Mirroring hemispheres and Landmark Vector Difference registration

2012-03-26 Thread Tristan Chaplin
Hi David,

Thanks, I didn't realise the spherical mesh also had to be in LPI
orientation; I've fixed that.
The borders are now roughly in the correct regions after registration, but
they are nowhere near as good as with LPR.  It almost looks as if the borders
haven't actually been morphed, as if they've just been projected from one
sphere to another (which would explain my original problem).  Is there a
chance I've done something that has stopped the LVD morphing from working?
 Something that only gives me a straight sphere-to-sphere registration?

I've tried changing the parameters but it doesn't seem to affect anything
(number of morphing cycles, forces, vector displacement).

I've just re-downloaded the current version of Caret (64 bit version) and
it's the same.  I'm using OS X 10.6.

Any help or suggestions about what is going on here would be greatly
appreciated.

Cheers,
Tristan



On Sat, Mar 24, 2012 at 02:04, David Van Essen vanes...@wustl.edu wrote:

 Tristan,

 Registering from a left hem source to a right hem target (or vice versa)
 should work for the LVD as well as the LPR algorithm.  I've done it myself
 on multiple occasions.

 Importantly, your source and target spec files must have the correct
 hemisphere assignment.  Check that 'Structure' is correct when you open
 each spec file, and change it if necessary.  If the hemispheres differ,
 this leads to the requisite mirror-flipping, as is now explained in:


 http://brainvis.wustl.edu/wiki/index.php/Caret:Operations/SurfaceBasedRegistration#Algorithm_for_All_Registration_Operations
 * If the structure (left or right) differs in the source and target
 spec files, mirror-flip the source landmarks.


 Another item is to verify that the left and right spheres are correctly
 oriented in a dorsal view (medial wall is on the correct side, that is, if
 viewing the right sphere, the medial wall should be on the left in a dorsal
 view.  For a left hem, it should be on the right.).

 It's puzzling that you had a problem only for LVD and not LPR, but I'm
 guessing that one of the above will take care of the problem. If not, let
 me know offline, as we might need to take a look at your dataset.

 David

 On Mar 23, 2012, at 1:15 AM, Tristan Chaplin wrote:

 Hi,

 I've been registering left hemispheres to right hemispheres using the
 landmark pinned relaxation algorithm and getting reasonable results, so I
 presume the process automatically mirrors the hemispheres so they match.

 I wanted to try the new landmark vector difference algorithm.  Using
 similar settings the registration completes, but the borders are not
 matched and end up in completely the wrong spot - e.g. the medial wall is
 on the lateral side and appears to be upside down.  Is there a chance it's
 not mirroring? Or have I just not got the settings right?

 Thanks,
 Tristan


[caret-users] Mirroring hemispheres and Landmark Vector Difference registration

2012-03-23 Thread Tristan Chaplin
Hi,

I've been registering left hemispheres to right hemispheres using the
landmark pinned relaxation algorithm and getting reasonable results, so I
presume the process automatically mirrors the hemispheres so they match.

I wanted to try the new landmark vector difference algorithm.  Using
similar settings the registration completes, but the borders are not
matched and end up in completely the wrong spot - e.g. the medial wall is
on the lateral side and appears to be upside down.  Is there a chance it's
not mirroring? Or have I just not got the settings right?

Thanks,
Tristan


[caret-users] Open vs closed borders for registration

2012-03-19 Thread Tristan Chaplin
Hi,

I was just wondering: is there an advantage or disadvantage to using open or
closed borders in registration?  E.g. why is the medial wall two borders and
not a single closed border?  I had heard a while back that originally borders
could only be drawn on flat surfaces, so this must have been necessary then,
but now you can draw borders on the 3D surface.

Is there something simpler and thus better about using open borders for
registration?

I was wondering: if you were placing a border on a cortical area that has a
topographic organization, e.g. V1, is it better to use two borders, one for
the upper field boundary and one for the lower field boundary, to give it
some indication of the topography?  I figure the closed border has a
specific start point, end point and order, so if you do it the same in both
source and target then it should match up in the same way.

I have been reading Van Essen et al. (2011), Cortical Parcellations of the
Macaque Monkey Analyzed on Surface-Based Atlases, but I can't seem to work
it out from the description of the algorithm there.  It seems like it would
make no difference whether closed or open borders are used.

Any insight or practical experience would be greatly appreciated.

Cheers,
Tristan


Re: [caret-users] marmoset

2012-03-18 Thread Tristan Chaplin
Colin, the person working on the marmoset atlas was me.  My boss, Marcello
Rosa, and his colleagues Paxinos, Watson, Petrides and Tokuno recently
published an atlas in book form:

http://www.amazon.com/The-Marmoset-Brain-Stereotaxic-Coordinates/dp/0124158188/ref=sr_1_4?s=booksie=UTF8qid=1332126409sr=1-4

I've made surface models from this data; I'll send you a private email with
more info.  If anyone else is interested, let me know.

Cheers,
Tristan


Re: [caret-users] Interspecies comparisons - creating a new atlas for a different primate species

2012-02-22 Thread Tristan Chaplin
Ah, I think that change resolution command is what I want. It made a nice
fiducial mesh for me.  I must admit I hadn't paid much attention to the
command-line Caret features; I see now there are a lot of cool things in
there.  If I run into any problems later with the actual registration I'll
post a new topic.

One quick question though - does flat morphing work better if the fiducial
mesh is first standardised?

On Wed, Feb 22, 2012 at 05:47, Timothy Coalson tsc...@mst.edu wrote:

 The way I understand it, if you already have an atlas sphere, you can (and
 probably should) register the native mesh to the spherical atlas directly,
 and the result is that (among other things) you get a new surface that
 aligns with your native surface, but has topology and node spacing similar
 to the atlas sphere.

 However, if you do not yet have an atlas sphere, the way I understand it,
 we made our atlas sphere without any dependence on the native mesh, by
 taking the borders on the subject sphere and unprojecting them to not rely
 on a mesh (then averaging them across subjects, but since you have only one
 subject, this doesn't apply), and then projecting them onto the sphere we
 wanted to use, giving us our atlas sphere with landmarks.  Again though, I
 haven't done this, it is just my understanding of how it was described to
 me.

 If what you want is a surface with more regular spacing than what you have
 in the subject native surface, you can do that and then register to the new
 atlas sphere, or maybe just use the subject sphere and a new sphere to
 generate a deformation map, and deform the subject native surfaces to the
 new mesh (this is approximately what spec file change resolution does).

 Tim


 On Mon, Feb 20, 2012 at 6:55 PM, Tristan Chaplin 
 tristan.chap...@gmail.com wrote:

  Sorry, I should have specified this before: for our atlas we have only a
  single individual with cytoarchitecture.  We have a native mesh for this and
  the associated morphed spherical and flat meshes, as well as
  the cytoarchitecture as a paint file.

  I thought that to register this atlas to another individual, or for
  interspecies registration (which is what we really want to do), it
  was necessary to make a standard fiducial mesh with evenly spaced nodes,
  rather than the native mesh created after reconstruction from contours.

  I think I understand how to make a standard spherical mesh, but do I need
  to make a standard fiducial mesh to allow intra- and interspecies
  registration?

 Cheers,
 Tristan


 On Sat, Feb 18, 2012 at 05:31, Timothy Coalson tsc...@mst.edu wrote:

 The new atlases we are making (I think they may be included in the 5.65
 release, but I am not sure, the fs_LR atlases are the ones I mean) use this
 new kind of sphere.  If you want to take a look at node spacing regularity,
 there is an option in caret to generate the node areas of a surface under
 Surface-Region Of Interest Operations...

  Select all the nodes (clicking select with the default settings should
  do this), click next, select Assign metric with node areas, and click the
  Assign Metric Node Areas button.  Of course, the node regularity on the
  sphere doesn't translate directly to node regularity on subject surfaces;
  there is distortion inherent to registering on a sphere, since the brain
  isn't a sphere, but it should help.

 The new sphere code is only used in a few commands, so I would have to
 know more about what commands generate the surfaces in your current methods
 to hazard a guess at whether you would need to do something different to
 get a new sphere.

 Tim


  On Fri, Feb 17, 2012 at 11:02 AM, Colin Reveley cm...@sussex.ac.uk wrote:

 Tim - what you say is interesting.

 I have actually wondered about node spacing in fiducial surfaces
 registered to F99 via macaque.sphere6.

  It's not always 100% straightforward to register (without lots
  of crossovers and issues).  I'm fairly pleased with what I have; the matches
  are quite good.

  However, for my purposes, a node spacing that is as regular as possible
  in the context just of registering my surface to F99 has real advantages,
  because I use nodes as tractography seeds and I'd like their spacing to be
  roughly even.

  Might I benefit from trying your new approach? How hard would it be?
  F99 is still 73730, as are all the atlas files.  DVE's most recent
  FreeSurfer macaque-to-F99 tutorial still very much uses 73730.

  My surfaces are from FS and look pretty evenly spaced.  So maybe
  register F99 onto my mesh, and make a deform_map for the F99 data,
  essentially following the menu-driven landmark pinned registration?

  Other than fiducials (WM, GM, mean) the topos and other surfaces are
  made with Caret operations.  I'm guessing if I repeat those operations with
  Caret 5.65, it will follow the new scheme of things in terms of how node
  spacing is decided?

 Colin Reveley, sussex.


Re: [caret-users] Interspecies comparisons - creating a new atlas for a different primate species

2012-02-17 Thread Tristan Chaplin
Thanks for the information, but I must confess I don't understand why you
create the sphere first.  I thought the procedure for atlases was to make a
surface, then resample it as a standard mesh, then do spherical morphing,
etc.  Is the idea instead to create the fiducial surface, do spherical
morphing, then align the sphere to one of these standard spheres?

FYI, we've already made a fiducial surface with cytoarchitecture as paint.

On Fri, Feb 17, 2012 at 16:08, Timothy Coalson tsc...@mst.edu wrote:

 We have moved away from the 73730 mesh, we are now using a new method to
 generate meshes which results in much more regular node spacing.  Making a
 sphere is actually relatively easy, especially with the new release of
 caret.  The hard part is making it into an atlas, which I defer to someone
 else.  The command:

 caret_command -surface-create-spheres

 Will generate a pair of matched left/right spheres (mirror node
 correspondence, topologies with normals oriented out).  I think that
 command made it into the 5.65 release, if not you can use spec file change
 resolution, and grab just the new sphere, and ditch the rest.  The odd bit
 about spec file change resolution, though, is if you give it an old node
 count, like 73730, it will give you the old sphere (this is in case someone
 is relying on its old behavior).  However, ask it for 73731 nodes, and you
 will get a new highly regular sphere instead (though it won't have 73730
 nodes, because the 73730 node mesh wasn't a regularly divided geodesic
 sphere, but it will give you something close).  If all else fails, there
 are a few spheres in the caret data directory.

 Tim


 On Thu, Feb 16, 2012 at 6:56 PM, Tristan Chaplin 
 tristan.chap...@gmail.com wrote:

 Hi,

  A while back I asked about creating a standard mesh of 73,730 nodes,
  similar to what is used for the PALS atlas.  I never got a chance to follow
  it up then, but I'd like to give it a go now.  It seemed at the time that
  the knowledge for creating such meshes was limited to a select few, so if
  anyone has any experience with this or has the contact details of someone
  who does, I would greatly appreciate hearing from them.

  The reason for creating this mesh is to make an atlas for the marmoset
  monkey.  We are very interested in registering this atlas to the macaque
  monkey and doing analyses similar to Hill et al. (2010).

 Thanks,
 Tristan Chaplin

 On Mon, Feb 7, 2011 at 16:04, Tristan Chaplin 
  tristan.chap...@gmail.com wrote:

 Ok thanks for the information.


  On Fri, Feb 4, 2011 at 03:25, Donna Dierker do...@brainvis.wustl.edu wrote:

 On 02/01/2011 07:31 PM, Tristan Chaplin wrote:
  Hi,
 
  I've been reading about the creation of your atlases, and I see that
  PALS and the macaque atlases have standard size mesh of 73,730 nodes.
   I was wondering, is this the same across species to allow
  interspecies registration?  i.e. is it still possible to do
  interspecies comparisons of other species with different size meshes?
 Possible, but more difficult.  Not to say that achieving vertex
 correspondence across species is trivial.  Interspecies comparisons are
 really hard.  I think David Van Essen is the only one in our lab that is
 doing them, although Matt Glasser might also be doing some.
 
   I was also wondering how the standard mesh was actually made.  The
  PALS paper refers to the Saad 2004 paper, which I think uses SUMA.
   SUMA has a program called MapIcosahedron to create standard meshes.
   Is this still how you would recommend making a standard mesh?
 Tim Coalson (a student who works summers here) also developed a utility
 that creates meshes of specified resolution.

 Making a standard mesh is not something I ever do.  You do it with a
 specific motivation -- typically some other important data is already
 available on that mesh.  And the way you usually get your data on that
 mesh is to register it to an atlas target already on that mesh.

 If you are talking about creating, say, a sparser mesh for mice/rats,
 then you're out of my orbit.
 
  Thanks,
  Tristan
 
 
 


[caret-users] Problems after flattening - clear temp files?

2011-06-08 Thread Tristan Chaplin
Hi,

I've been using Caret for a while but just started having some trouble with
flattening; I'm hoping someone has a quick technical fix.  I'm using OSX
10.6.7.

After running the morphing, it displays the crossover and errors screen, but
you can't click close; you have to force-quit the process.  At this point
Caret is using 100% CPU.  The popup also claims there are thousands of
crossovers, when there actually aren't any.

Sometimes a restart fixes this, but not always.

I think I've managed to get Caret into a bad state and I need to clear some
cache or temp files.  I tried installing a fresh copy of the current version,
but it has the same problems (it still remembers all my recent spec files; I
was hoping it would be completely fresh).

Thanks,
Tristan


Re: [caret-users] Standard mesh of 73,730 nodes.

2011-02-07 Thread Tristan Chaplin
Ok thanks for the information.

On Fri, Feb 4, 2011 at 03:25, Donna Dierker do...@brainvis.wustl.edu wrote:

 On 02/01/2011 07:31 PM, Tristan Chaplin wrote:
  Hi,
 
  I've been reading about the creation of your atlases, and I see that
  PALS and the macaque atlases have standard size mesh of 73,730 nodes.
   I was wondering, is this the same across species to allow
  interspecies registration?  i.e. is it still possible to do
  interspecies comparisons of other species with different size meshes?
 Possible, but more difficult.  Not to say that achieving vertex
 correspondence across species is trivial.  Interspecies comparisons are
 really hard.  I think David Van Essen is the only one in our lab that is
 doing them, although Matt Glasser might also be doing some.
 
   I was also wondering how the standard mesh was actually made.  The
  PALS paper refers to the Saad 2004 paper, which I think uses SUMA.
   SUMA has a program called MapIcosahedron to create standard meshes.
   Is this still how you would recommend making a standard mesh?
 Tim Coalson (a student who works summers here) also developed a utility
 that creates meshes of specified resolution.

 Making a standard mesh is not something I ever do.  You do it with a
 specific motivation -- typically some other important data is already
 available on that mesh.  And the way you usually get your data on that
 mesh is to register it to an atlas target already on that mesh.

 If you are talking about creating, say, a sparser mesh for mice/rats,
 then you're out of my orbit.
 
  Thanks,
  Tristan
  
 


[caret-users] Sulcal depth from histological slices without an MRI/atlas

2011-01-24 Thread Tristan Chaplin
Hi everyone,

I've made a few models from histological slices and I was wondering if there
is any way of calculating sulcal depth without an MRI volume or atlas.
There aren't any MRIs for my animals and there aren't any atlases for this
species.

I understand that I need a cerebral hull, which is usually generated by
dilating and eroding the mid-thickness of the MRI volume.  I tried to
simulate this by slightly inflating the fiducial surface, saving it as a
VTK file and using it as the cerebral hull.  It works OK, but I was wondering
if anyone knows a better way.

Thanks,
Tristan


Re: [caret-users] Calculating mid-thickness and accurately drawing borders

2011-01-20 Thread Tristan Chaplin
Hi David,

Thanks for the information.

We had actually been using layer 4 as the surface, but I thought that since
atlases are based on MRI mid-thickness it would be better to use
mid-thickness.  But I take your point about histology quality and alignment
accuracy being a much bigger source of error.

Thanks for giving me Erin's contact details, I may need to ask her a few
things.

If you have C++ code that can do it, let us know if you're willing to share
it.


Unfortunately, the cutting code we have is just some ad hoc python scripts
at this stage.  It would be interesting to see if we could clean it up and
redo it in C++ for Caret, but like you we have other priorities.

Thanks,
Tristan

On Tue, Jan 18, 2011 at 16:24, David Van Essen vanes...@wustl.edu wrote:

 Tristan,

 On Jan 17, 2011, at 6:56 AM, Tristan Chaplin wrote:

 Hi,

  I'm working on constructing brain models from histological sections with
  cortical areas demarcated.  Until now we've been writing our own programs to
  achieve this.  I'd like to start using Caret for more of this work, so that
  non-programmers can do it and our data is more compatible with other
  datasets, but I've got a few questions.

 Firstly, with regards to drawing a contour of the cortical mid-thickness,
 the tutorial suggests that you just estimate this visually when drawing the
 contours.  I was wondering, is there a standard procedure or set of tools
 for calculating the mid-thickness given the pial and GM/WM boundaries?

 Not for histological contours.  Unless the mapping between white and pial
 surfaces has already been determined (e.g., via Freesurfer automated
 segmentation) computing the midthickness is a very hard problem.

 In practice, we generally achieve a reasonable outcome by drawing a contour
 along the estimated midpoint between the white matter and pial contours in a
 given section, modulated by the fact that histological layer 4 tends towards
 the pial side in gyri and near the white matter side near sulcal fundi.

 Unless your histological sections are exceptionally well aligned
 (overcoming tissue distortions during histology), the inaccuracies
 associated with imperfect midthickness determination are likely to be small
 on average relative to the typical between-section misalignment.

 Finally on this front, Erin Reid in my lab has been doing contour-based
 reconstructions on a couple of recent projects and has been helping to
 update the old tutorial.  If you have specific technical questions, you can
 contact her:
 Erin Reid e...@brainvis.wustl.edu


  Secondly, with regards to marking cortical areas, normally we would
  represent the area boundaries as points in space (cells in Caret, I
  believe) and, using our own programs, project them to the nearest mesh node.
   Then, for a given cortical area, we would use the Dijkstra algorithm to
  cut out the cortical area by finding the shortest path between its
  boundary points.


 Sounds nice.

 However it seems that in Caret the method is to view the projected cells on
 a flat map, and draw a border around it to create a paint file.

 Yes, except that Caret now permits borders to be drawn on inflated or other
 closed surfaces, thereby obviating the problems along the cuts in flat maps.

 Is there a more precise way, perhaps similar to the one I described, of
 doing it in Caret?


 Not at present.  It would be nice to have this as an automated
 process/algorithm, but given other priorities it is not something we are
 likely to undertake in the near future.  If you have C++ code that can do
 it, let us know if you're willing to share it.
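For readers curious what the approach Tristan describes might look like, here is a minimal sketch (not Caret code, and not the code offered above): Dijkstra's algorithm over the mesh edge graph, weighted by edge length, chaining shortest paths between successive projected boundary nodes to trace a closed areal border. The mesh representation (coordinate list plus adjacency list) is an assumption for illustration.

```python
import heapq
from math import dist

def dijkstra_path(coords, neighbors, start, goal):
    """Shortest path between two mesh nodes, weighted by Euclidean edge length.

    coords    : list of (x, y, z) node coordinates.
    neighbors : adjacency; neighbors[i] yields node indices adjacent to i.
    """
    best = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > best.get(node, float("inf")):
            continue  # stale heap entry
        for nb in neighbors[node]:
            nd = d + dist(coords[node], coords[nb])
            if nd < best.get(nb, float("inf")):
                best[nb] = nd
                prev[nb] = node
                heapq.heappush(heap, (nd, nb))
    # Reconstruct the path goal -> start, then reverse it.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

def trace_border(coords, neighbors, boundary_nodes):
    """Chain shortest paths through successive boundary nodes (closed loop)."""
    border = []
    n = len(boundary_nodes)
    for i in range(n):
        seg = dijkstra_path(coords, neighbors, boundary_nodes[i],
                            boundary_nodes[(i + 1) % n])
        border.extend(seg[:-1])  # drop segment endpoint to avoid duplicates
    return border
```

The resulting node loop could then serve as the border of a cortical area, in place of hand-drawing on a flat map.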

 David VE


 Thanks,
 Tristan

 ___
 caret-users mailing list
 caret-users@brainvis.wustl.edu
 http://brainvis.wustl.edu/mailman/listinfo/caret-users





[caret-users] Calculating mid-thickness and accurately drawing borders

2011-01-17 Thread Tristan Chaplin
Hi,

I'm working on constructing brain models from histological sections with
cortical areas demarcated.  Until now we've been writing our own programs to
achieve this.  I'd like to start to use Caret for more of this work so
non-programmers can do it and our data is more compatible with other
datasets, but I've got a few questions.

Firstly, with regards to drawing a contour of the cortical mid-thickness,
the tutorial suggests that you just estimate this visually when drawing the
contours.  I was wondering, is there a standard procedure or set of tools
for calculating the mid-thickness given the pial and GM/WM boundaries?

Secondly, with regards to marking cortical areas, normally we would
represent the area boundaries as points in space (cells in Caret I
believe) and using our own programs, project to the nearest mesh node.
 Then, for a given cortical area, we would use the Dijkstra algorithm to
cut out the cortical area by finding the shortest path between its
boundary points.  However it seems that in Caret the method is to view the
projected cells on a flat map, and draw a border around it to create a paint
file.  Is there a more precise way, perhaps similar to the one I described,
of doing it in Caret?

Thanks,
Tristan


Re: [caret-users] Importing VTK polygon CELL_DATA in Caret

2010-09-28 Thread Tristan Chaplin
Thanks I'll check that out.

I can write a program to convert between the two data formats if Caret has an
appropriate file format - something where you can assign data to the mesh
polygons, so you can then, say, colour the surface in Caret with that data, as
you would if it were mean curvature etc.  I was hoping it was the cell file?

FYI the problem is I need to cut and then flatten some surface models that
have data associated with the polygons.  The thing is, if I import to Caret,
that polygon data is not kept anywhere.  Then when I cut the surface and
flatten, there are fewer polygons than there used to be, so when I save it
back as a VTK file I can't even cross-reference it to the original VTK file
to look up the polygon data.  Therefore I need to hang on to this data
whilst cutting and flattening.
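One possible workaround for carrying per-polygon values through Caret, sketched below as an assumption rather than a documented Caret workflow: average the CELL_DATA scalars onto the vertices, since per-vertex values map naturally onto a Caret metric file (one value per node) and so travel with the nodes. The function name and input layout are illustrative; parsing the values out of the legacy ASCII VTK file is a separate step.

```python
def cell_data_to_point_data(n_points, polygons, cell_values):
    """Average per-polygon (CELL_DATA) scalars onto vertices (POINT_DATA).

    n_points    : number of vertices in the surface.
    polygons    : list of vertex-index tuples, one per polygon.
    cell_values : one scalar per polygon, in the same order as `polygons`.
    """
    sums = [0.0] * n_points
    counts = [0] * n_points
    for poly, value in zip(polygons, cell_values):
        for v in poly:
            sums[v] += value
            counts[v] += 1
    # Vertices used by no polygon keep a value of 0.0.
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Two triangles sharing an edge, with one scalar per triangle.
values = cell_data_to_point_data(4, [(0, 1, 2), (1, 2, 3)], [1.0, 3.0])
```

Whether per-node values are preserved across Caret's cutting and flattening would need to be checked, but per-node data is at least a representation Caret understands.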

Thanks,
Tristan

On Tue, Sep 28, 2010 at 23:12, Donna Dierker do...@brainvis.wustl.eduwrote:

 It sounds like you have two kinds of VTK data:  The surfaces, and the cell
 data.  You can certainly import VTK surfaces into Caret, using either the
 File: Open Data File option, or caret_command (command line utility).

 I don't know about VTK cell data.

 Our whole lab is in all-day meetings today and tomorrow, so it might be
 Thursday before we can check on this.

 Enter "caret_command -help-full > /tmp/caret_command.txt" at a terminal
 window and use a text editor to read /tmp/caret_command.txt and search for
 file-convert.  (If you're on Windows, you'll need to adjust this command
 accordingly.)

  Hi,
 
  I'm pretty new to Caret, so far I've just been using it for
 reconstruction
  from histological slices and flattening.
 
  I have some pre-existing VTK surface models in which the polygons have
  some
  data associated with them, e.g. number of tracer-labelled cells, visuotopic
  information etc.  In the VTK text file format, it's called CELL_DATA.
 
  When I load these surfaces into Caret, it seems this data is not
 preserved
  anywhere.  However Caret does seem to support this VTK feature when
  exporting - if I colour the surface with an overlay of the mean curvature
  and
  save a VTK file, I see that it writes out the colouring as CELL_DATA in
  the
  VTK file.
 
  My question is, how do I import this polygon data into Caret surfaces?
  I'm
  thinking I need to import it as a different Caret file, such as a Cell
  file.
   Is that correct?  Should I generate a separate file using the
 information
  at
 
 http://brainvis.wustl.edu/CaretHelpAccount/caret5_help/file_formats/file_formats.html
   ?
 
  Thanks,
  Tristan


[caret-users] Importing VTK polygon CELL_DATA in Caret

2010-09-27 Thread Tristan Chaplin
Hi,

I'm pretty new to Caret, so far I've just been using it for reconstruction
from histological slices and flattening.

I have some pre-existing VTK surface models in which the polygons have some
data associated with them, e.g. number of tracer-labelled cells, visuotopic
information etc.  In the VTK text file format, it's called CELL_DATA.

When I load these surfaces into Caret, it seems this data is not preserved
anywhere.  However Caret does seem to support this VTK feature when
exporting - if I colour the surface with an overlay of the mean curvature and
save a VTK file, I see that it writes out the colouring as CELL_DATA in the
VTK file.

My question is, how do I import this polygon data into Caret surfaces?  I'm
thinking I need to import it as a different Caret file, such as a Cell file.
 Is that correct?  Should I generate a separate file using the information
at
http://brainvis.wustl.edu/CaretHelpAccount/caret5_help/file_formats/file_formats.html
 ?

Thanks,
Tristan