Dear all,

My apologies in case this is not the right list for my question.

I recently attended a presentation by David about functional connectivity
analysis performed on the surface. That made a lot of sense to me and caught
my interest. I downloaded the Connectome Workbench application and the HCP
tutorial dataset [1]. Thanks for making these freely available; the
application looks very promising.

The HCP tutorial dataset contains a functional GIFTI file
"rfMRI_REST1_LR_s2.atlasroi.L.32k_fs_LR.func.gii" with 32492 vertices and
1200 time points.
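
For reference, this is roughly how I look at the file on my end (a minimal
sketch using nibabel and numpy; nothing about your pipeline is assumed here):

  import nibabel as nib
  import numpy as np

  # Load the left-hemisphere resting-state time series (GIFTI functional file).
  img = nib.load('rfMRI_REST1_LR_s2.atlasroi.L.32k_fs_LR.func.gii')

  # Each data array holds one time point over all vertices; stacking them
  # gives a (vertices x time points) matrix, here expected to be 32492 x 1200.
  data = np.column_stack([d.data for d in img.darrays])
  print(data.shape)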

This is where I started wondering how you guys managed to reduce the number
of vertices to such a low number (32k per hemisphere). In my pipeline, which
uses FreeSurfer [2], I end up with about 127k vertices, which makes the
analysis computationally much more demanding.
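
Just to quantify the "much more demanding" part: a dense vertex-by-vertex
connectivity (correlation) matrix grows quadratically with the vertex count,
so a rough back-of-the-envelope estimate of the single-precision memory
footprint per hemisphere would be:

  # Memory for a dense n x n float32 correlation matrix.
  for n in (32492, 127000):
      print(n, "vertices ->", round(n * n * 4 / 2**30, 1), "GiB")
  # ~3.9 GiB at 32k vertices versus ~60.1 GiB at 127k.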

I assume that the analysis scripts are not freely available yet, but it
would already help me to get an idea of whether the processing contains any
downsampling of the data (e.g., in FreeSurfer it is possible to decimate the
mesh in order to reduce the number of vertices).
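
For concreteness, the kind of data downsampling I have in mind would be
something along these lines (a purely hypothetical nearest-vertex resampling
sketch using scipy/numpy; I am not suggesting this is how the HCP data were
actually processed):

  import numpy as np
  from scipy.spatial import cKDTree

  def downsample_timeseries(dense_coords, dense_data, coarse_coords):
      """Map per-vertex time series from a dense mesh onto a coarser one by
      nearest-vertex lookup (a crude stand-in for proper surface resampling)."""
      # For every vertex of the coarse mesh, find the nearest dense vertex ...
      _, nearest = cKDTree(dense_coords).query(coarse_coords)
      # ... and carry over its time series (dense_data is n_dense x n_timepoints).
      return dense_data[nearest]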

Any pointer in that direction would be of great help and much appreciated.

Related to that, I noticed that the provided time-series data contains some
very "ugly"-looking spikes in the second half (> volume 600) [3]. But I
assume you guys are already aware of that.
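
(In case it is useful: one quick way to look for them is to plot the mean
signal across vertices over time, e.g. reusing the data matrix from the
sketch above.)

  import matplotlib.pyplot as plt

  # Mean signal across all vertices at each time point; spikes like the
  # ones in [3] would show up as sharp deviations in this trace.
  plt.plot(data.mean(axis=0))
  plt.xlabel('volume')
  plt.ylabel('mean signal across vertices')
  plt.show()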

regards,
 Matthias


[1] http://humanconnectome.org/connectome/get-connectome-workbench.html
[2] This basically involves surface reconstruction of the T1 (recon-all),
followed by projection of the functional data onto the white-matter mesh
(mri_surf2surf).
[3] https://dl.dropboxusercontent.com/u/38470419/spikes.png