Thank you for the help!

To clarify, by "native space" I had meant "without spatial normalization 
to MNI space". But on reflection that's probably not the proper way to 
think about or describe this data. As I understand it, when we download 
the preprocessed functional data we get two versions of the same 
information: volumetric (e.g. tfMRI_MOTOR_LR.nii.gz) and surface (by 
grayordinates, e.g. tfMRI_MOTOR_LR_Atlas.dtseries.nii).

The two correspond in their timepoints: the 4th dimension of the .nii.gz 
has 284 images, the dtseries.nii time course chart (in Workbench) has 
284 points along the x-axis, and the text file created by the code you 
provided has 284 lines. But there won't be a one-to-one mapping between 
the timeseries in the .nii.gz and the dtseries.nii, because the first is 
summarized into voxels while the second is surface-based.
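
As a quick sanity check on that correspondence, this is how I have been 
confirming the dimensions (a minimal sketch, assuming nibabel; 91282 is 
the standard HCP dense grayordinate count, and the spatial dimensions 
are whatever the volume reports):

import nibabel as nib

vol = nib.load("tfMRI_MOTOR_LR.nii.gz")
cii = nib.load("tfMRI_MOTOR_LR_Atlas.dtseries.nii")

print(vol.shape)  # (x, y, z, 284) - the 4th dimension is time
print(cii.shape)  # (284, 91282) - time by grayordinates

Both report 284 timepoints, matching the text file.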

On to my next question: is it possible to extract not the timeseries 
averaged over all ROI grayordinates (-cifti-roi-average), but the 
timeseries of each individual grayordinate specified in the ROI mask? 
If so, how is the adjacency specified? (e.g. grayordinate 1 shares an 
edge with grayordinates 2, 3, and 4).
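
In case it clarifies what I'm after: I imagine wb_command 
-cifti-convert -to-text could dump the full dense matrix to text, or the 
ROI columns could be pulled out directly, e.g. with nibabel in Python. 
A sketch of the latter (the ROI file name "motor_roi.dscalar.nii" is 
made up, and I assume it is a binary mask over the same grayordinates):

import numpy as np
import nibabel as nib

dtseries = nib.load("tfMRI_MOTOR_LR_Atlas.dtseries.nii")
data = dtseries.get_fdata()   # (timepoints, grayordinates), here (284, 91282)

roi = nib.load("motor_roi.dscalar.nii")   # hypothetical binary ROI
mask = roi.get_fdata()[0] > 0             # boolean vector over grayordinates

roi_ts = data[:, mask]   # timepoints x (grayordinates inside the ROI)
np.savetxt("roi_grayordinate_timeseries.txt", roi_ts)

That would give one column per grayordinate rather than a single 
averaged timeseries, but it doesn't answer the adjacency part of the 
question.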

For context, I wish to perform an ROI- or searchlight-type MVPA on data 
from the HCP. To do this with 4D volumetric images, I first make 3D 
masks of the voxels to include, then extract the timeseries for those 
voxels (e.g. as a text matrix), and use the voxel coordinates to derive 
adjacency information. While I can already use the .nii.gz images for 
this analysis, I would like to perform it surface-wise as well.
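
On the volumetric side the adjacency falls out of the voxel i/j/k 
coordinates; on the surface I assume it would instead come from the mesh 
triangles, along these lines (a sketch, assuming nibabel and a subject 
midthickness surface; the file name is just an example, and I realize 
the CIFTI grayordinates exclude the medial wall, so their indices won't 
map one-to-one onto surface vertex indices):

import nibabel as nib
from collections import defaultdict

surf = nib.load("L.midthickness.32k_fs_LR.surf.gii")  # example file name
coords = surf.darrays[0].data     # (n_vertices, 3) positions; handy for
                                  # searchlight radii, not adjacency itself
triangles = surf.darrays[1].data  # (n_triangles, 3) vertex indices

# two vertices are neighbors if they share a triangle edge
neighbors = defaultdict(set)
for a, b, c in triangles:
    neighbors[a].update((b, c))
    neighbors[b].update((a, c))
    neighbors[c].update((a, b))

print(sorted(neighbors[0]))   # e.g. the neighbors of vertex 0

Is something along these lines reasonable, or does the Workbench already 
provide this information?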

thanks again,
Jo