Re: [HCP-Users] Statistical comparison of whole brain (surface "voxels" + subcortical / cerebellar voxels) connectivity between two explicitly defined voxels

2017-07-17 Thread Timothy Coalson
On Mon, Jul 17, 2017 at 6:21 PM, Regner, Michael <
michael.reg...@ucdenver.edu> wrote:

> Hello Matt and HCP Community,
>
>
>
> Thank you for the helpful e-mail and words of encouragement. We are very
> encouraged by the data / results we can view in the Connectome Workbench;
> however, statistical analysis has proved challenging.
>
>
>
> Just to reiterate what we are attempting to do:  we are hoping to compare
> whole-brain (all ~91k brainordinates) connectivity between two predefined
> brainordinates to determine the areas of the brain in which there is a
> statistically significant difference in connectivity.  We intend to
> eventually extend this analysis to compare two different ROIs (sets or
> masks of brainordinates); however, for simplicity’s sake (and to prove to
> our PI that we can do this) we would like to perform this comparison first
> between two voxels / vertices (henceforth brainordinates “A” and “B”).
>
>
>
> The general pipeline (using the 33 GB resting state dense connectome CIFTI
> file as our starting point) is as follows step-by-step:
>
> 1. Reduce the resting state dense connectivity CIFTI file (91k x 91k)
> into two separate CIFTI files, which are NOT dense.
>
They would in fact still be dense, just on one dimension of the matrix,
rather than both.  Read this if you haven't yet (and let me know if it
doesn't explain it well enough):

http://www.humanconnectome.org/software/workbench-command/-cifti-help

> These CIFTI files (“A” and “B”) would contain the whole-brain
> connectivity data from two a priori specified brainordinates (“A” and
> “B”).  By size, these intuitively should be around 91k in number of
> brainordinates.  We are not exactly sure how to best achieve this in
> wb_command… would “CIFTI-parcellate” do the trick?
>
Honestly, you might as well skip straight to using ROIs, especially if they
are from an existing parcellation, as it may actually be easier (and more
closely relates to the commands you would need to use).  When you want to
compare connectivity of two ROIs, it is highly advisable to get each ROI's
average timeseries first, and then compute a fresh correlation against the
dense timeseries - this removes a large amount of noise-based variance from
the denominator
(among other things).

So, if you are in fact using an existing parcellation in cifti dlabel
format, you would want to do -cifti-parcellate on a dtseries file, and then
you can do -cifti-cross-correlation with the dtseries file and this new
parcellated file (ptseries).  If you want to view the per-parcel dense maps
in workbench, you should have the ptseries file as the first input, and
name the output ending in ".pdconn.nii".  To arrange them similarly to a
dscalar file, which is probably what PALM expects, you can use
-cifti-transpose to turn it into a .dpconn.nii file.
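
For concreteness, a minimal sketch of that sequence (file names here are
hypothetical; check each command's help for the full options):

# parcel-average timeseries from the dense timeseries
wb_command -cifti-parcellate subj.dtseries.nii parcels.dlabel.nii COLUMN subj.ptseries.nii
# correlate each parcel's mean timeseries with every brainordinate's timeseries
wb_command -cifti-cross-correlation subj.ptseries.nii subj.dtseries.nii subj.pdconn.nii
# rearrange so the dense maps run along columns, dscalar-like, for PALM
wb_command -cifti-transpose subj.pdconn.nii subj.dpconn.nii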

If you want to use arbitrary ROI files (which can overlap), then you
instead need to use -cifti-average-roi-correlation on the dtseries file.
Note that this also performs a Fisher small-z transform on the correlation
(as it was written to also average across results from multiple files).
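
Something like the following (hypothetical file names; see the command's help
text for the per-structure ROI options and exact argument order):

wb_command -cifti-average-roi-correlation subj_roiA_conn.dscalar.nii \
  -cifti-roi roiA.dscalar.nii \
  -cifti subj.dtseries.nii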

>   Or, can we read the CIFTI data into MATLAB and simply select an
> individual row corresponding to the brainordinate of interest?  If you have
> any thoughts as to the most direct way to do this, it would be appreciated.
>
> 2. Apply “CIFTI-separate” to both CIFTI files A and B.  This should
> result in two 2D surface maps (for the left and right hemispheres) in NII
> format for both brainordinates “A” and “B”, along with their corresponding
> GIFTI files to register them in space.
>
I think PALM can accept cifti files as-is, so you may not want to do this.
You may need to either extract or concatenate the cifti maps of interest to
satisfy PALM's input syntax, which you can do with -cifti-merge.
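
For example, to stack one map per subject into a single file for PALM (a
sketch, with hypothetical file names):

wb_command -cifti-merge group_roiA_conn.dscalar.nii \
  -cifti subj01_roiA_conn.dscalar.nii \
  -cifti subj02_roiA_conn.dscalar.nii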

The single-hemisphere-only surface format is GIFTI, ending in .gii
(.func.gii for data values).  The surface geometry is not inside the CIFTI
files (it is in .surf.gii files).  See
http://www.humanconnectome.org/software/workbench-command/-gifti-help .

> A single 3D volume map of the subcortical / cerebellar brainordinates
> should also result from this for both brainordinates “A” and “B”.  So, we
> end up with 6 NIFTI maps and 4 GIFTI files.
>
For each ROI, two .func.gii files and one volume .nii.gz file.  The surface
files exist elsewhere in the subject directory (and could be useful for the
ROI averaging or parcellating step, to account for differences in vertex
sizes).
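
As a sketch of that separation step (hypothetical file names):

wb_command -cifti-separate subj_roiA_conn.dscalar.nii COLUMN \
  -volume-all subj_roiA_subcortical.nii.gz \
  -metric CORTEX_LEFT subj_roiA.L.func.gii \
  -metric CORTEX_RIGHT subj_roiA.R.func.gii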

> 3. Apply FSL PALM with TFCE correction (syntax similar to randomise)
> to each of the three pairs of maps produced in #2 (left cortex, right
> cortex, and subcortical) to compare “A” versus “B.”  We will need to
> include the midthickness file as an argument for the 2D surface data, in
> order for PALM to appropriately correct for volume.  This should result in
> three “A > B” contrast maps (left cortex, right cortex, subcortical).
>
In order to do permutation-based statistics (which 
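
A rough sketch of such a PALM call on merged cifti data (the design and
contrast files here are hypothetical, and the exact options for cifti input
and surface-aware TFCE should be verified against PALM's documentation):

palm -i group_roiA_minus_roiB.dscalar.nii \
     -d design.mat -t design.con \
     -o palm_out -n 5000 -T -logp
# surface structures may additionally need the midthickness passed via -s
# (see the PALM user guide for cifti-specific requirements)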

Re: [HCP-Users] Statistical comparison of whole brain (surface "voxels" + subcortical / cerebellar voxels) connectivity between two explicitly defined voxels

2017-07-17 Thread Regner, Michael
Hello Matt and HCP Community,

Thank you for the helpful e-mail and words of encouragement. We are very 
encouraged by the data / results we can view in the Connectome Workbench; 
however, statistical analysis has proved challenging.

Just to reiterate what we are attempting to do:  we are hoping to compare 
whole-brain (all ~91k brainordinates) connectivity between two predefined 
brainordinates to determine the areas of the brain in which there is a 
statistically significant difference in connectivity.  We intend to eventually 
extend this analysis to compare two different ROIs (sets or masks of 
brainordinates); however, for simplicity's sake (and to prove to our PI that we 
can do this) we would like to perform this comparison first between two voxels 
/ vertices (henceforth brainordinates "A" and "B").

The general pipeline (using the 33 GB resting state dense connectome CIFTI file 
as our starting point) is as follows step-by-step:

1. Reduce the resting state dense connectivity CIFTI file (91k x 91k) into 
two separate CIFTI files, which are NOT dense.  These CIFTI files ("A" and "B") 
would contain the whole-brain connectivity data from two a priori specified 
brainordinates ("A" and "B").  By size, these intuitively should be around 91k 
in number of brainordinates.  We are not exactly sure how to best achieve this 
in wb_command... would "CIFTI-parcellate" do the trick?  Or, can we read the 
CIFTI data into MATLAB and simply select an individual row corresponding to the 
brainordinate of interest?  If you have any thoughts as to the most direct way 
to do this, it would be appreciated.

2. Apply "CIFTI-separate" to both CIFTI files A and B.  This should result 
in two 2D surface maps (for the left and right hemispheres) in NII format for 
both brainordinates "A" and "B", along with their corresponding GIFTI files to 
register them in space.  A single 3D volume map of the subcortical / cerebellar 
brainordinates should also result from this for both brainordinates "A" and 
"B".  So, we end up with 6 NIFTI maps and 4 GIFTI files.

3. Apply FSL PALM with TFCE correction (syntax similar to randomise) to each 
of the three pairs of maps produced in #2 (left cortex, right cortex, and 
subcortical) to compare "A" versus "B."  We will need to include the 
midthickness file as an argument for the 2D surface data, in order for PALM to 
appropriately correct for volume.  This should result in three "A > B" contrast 
maps (left cortex, right cortex, subcortical).

4. Use wb_command to reconstruct whole brain CIFTI files from the three 
contrast maps produced in #3, which can then be viewed in Connectome Workbench 
for inspection.  We hope that this will reveal the areas of the brain in which 
connectivity to brainordinates "A" and "B" differs significantly, with the 
appropriate correction for multiple comparisons from #3.

Any comments / suggestions / heckling would be sincerely appreciated.  Thank 
you in advance!

Mike

Michael F. Regner, M.D.
Departments of Radiology and Bioengineering
University of Colorado - Denver
E-mail: michael.reg...@ucdenver.edu

From: Glasser, Matthew [mailto:glass...@wustl.edu]
Sent: Monday, July 10, 2017 8:15 PM
To: Regner, Michael ; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Statistical comparison of whole brain (surface 
"voxels" + subcortical / cerebellar voxels) connectivity between two explicitly 
defined voxels

Hi Michael,

I'm happy to hear you are making good progress with CIFTI and Connectome 
Workbench.  You can indeed use the PALM software to do statistical inference on 
CIFTI data, and that is the tool we recommend.
Peace,

Matt.

From: "Regner, Michael"
Date: Monday, July 10, 2017 at 2:40 AM
To: "hcp-users@humanconnectome.org"
Subject: [HCP-Users] Statistical comparison of whole brain (surface "voxels" + 
subcortical / cerebellar voxels) connectivity between two explicitly defined 
voxels

Dear HCP Community,

I am relatively new to the HCP data and Connectome Workbench.  Our 
neuroradiology laboratory at the University of Colorado is beginning to use it. 
 It has already proved to be extremely helpful in aiding the interpretation of 
existing resting state results.

My question is:  does the HCP Connectome Workbench provide an internal 
mechanism for the generation of statistical comparisons / contrasts within or 
between CIFTI files?  I am exploring the dense resting state connectivity data. 
 My goal is to construct a contrast map for statistical inference.  I would 
like to compare whole brain (surface "voxels" + subcortical / cerebellar 
voxels) connectivity between two explicitly defined 

Re: [HCP-Users] mapping HCP data into 7 functional networks (using Thomas Yeo parcellation)

2017-07-17 Thread Timothy Coalson
As Matt says, 32k is used to roughly match the acquisition resolution.  The
7T data at 1.6mm is on a 59k mesh for the same reasons.  If you want to
resample them, see the -cifti-resample command (or -metric-resample for a
simpler command that does the same surface operation on a different format).
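
For example, a per-hemisphere sketch (hypothetical file names; the two sphere
files must already be in register with each other, e.g. the fs_LR and
fsaverage standard spheres):

wb_command -metric-resample data.L.32k_fs_LR.func.gii \
  L.sphere.32k_fs_LR.surf.gii L.sphere.164k.surf.gii \
  ADAP_BARY_AREA data.L.164k.func.gii \
  -area-surfs L.midthickness.32k_fs_LR.surf.gii L.midthickness.164k.surf.gii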

Side note though, in order to get the data onto 32k or 59k, we do in fact
first map the volume data to the native freesurfer mesh for the subject
(~140k if memory serves), in order to use the surfaces that contain the
original folding detail (even though 32k surfaces still have very good
folding detail).  After that, we downsample them on the surface to the more
sensible resolution.

Tim


On Mon, Jul 17, 2017 at 6:44 PM, Glasser, Matthew 
wrote:

> No that would be a massive oversampling of the data.  The data are
> acquired at 2mm isotropic, which is roughly the vertex spacing of the 32k
> mesh.  164k files would be much larger for no benefit.  If you want to
> upsample particular maps to 164k (or downsample something to 32k) that is
> trivial to do.
>
> Peace,
>
> Matt.
>
> From:  on behalf of David Hartman <
> dhartman1...@gmail.com>
> Date: Monday, July 17, 2017 at 6:24 PM
> To: Timothy Coalson 
> Cc: "hcp-users@humanconnectome.org" 
> Subject: Re: [HCP-Users] mapping HCP data into 7 functional networks
> (using Thomas Yeo parcellation)
>
> Yes it does seem the first of four columns has 8 distinct numbers.
>
> As a side note, is there any released HCP resting state data on a 164k
> mesh or are they all 32k mesh?
>
> Thank you,
> David Hartman
>
> On Mon, Jul 17, 2017 at 6:19 PM, Timothy Coalson  wrote:
>
>> You can get a text file containing the label names and keys for a map by
>> using -cifti-label-export-table.  It appears that this will have extraneous
>> labels in it due to how the file was generated, but you can ignore the key
>> values that aren't used.
>>
>> If you count the number of unique values in one column of the cifti file
>> you opened, you will probably get the number you are expecting.  It is
>> simply that the values are not 1-7, but an arbitrary set of integers.
>>
>> The -cifti-parcellate command will automatically use the first map in the
>> label file to compute within-label averages for its labels.  It is probably
>> not what you want to use to simply examine the label map.  I'm not sure why
>> it ended up with 8 rather than 7 parcels, though.  As a side effect, you
>> could use "wb_command -cifti-parcel-mapping-to-label" to generate a
>> label file that should be much less cluttered, but we may have better
>> advice on this soon...
>>
>> Tim
>>
>>
>> On Mon, Jul 17, 2017 at 4:51 PM, David Hartman 
>> wrote:
>>
>>> Since calling in MATLAB: “ciftiopen('RSN-networks.32k_fs_LR.dlabel.nii',
>>> 'workbench\bin_windows64\wb_command.exe')” gives me a matrix with 4
>>> columns whose range is beyond the 17 or 7 corresponding to their functional
>>> grouping, I have resorted to using wb-command hoping to get right
>>> labelling.
>>>
>>>
>>>
>>> *Question:* Which wb-command should I use to read the labels and see
>>> for the 32k mesh which networks the nodes belong to (ie. 1 to 7)?
>>>
>>>
>>>
>>> I tried using wb_command -cifti-parcellate
>>> rfMRI_REST1_LR_Atlas_MSMAll.dtseries.nii  RSN-networks.32k_fs_LR.dlabel.nii
>>> COLUMN out.ptseries.nii, but out.ptseries.nii returns a matrix 8×1200.
>>> But I am hoping for something  32k×1 for the correct labelling, where
>>> each row is a number between 1 and 7 or 17 corresponding to the group the
>>> node belongs to.
>>>
>>>
>>>
>>>
>>>
>>> Hope it is not too confusing.
>>>
>>>
>>>
>>> Thank you,
>>>
>>> David Hartman
>>>
>>> On Mon, Jul 17, 2017 at 3:09 PM, Harms, Michael 
>>> wrote:
>>>

 Hi,
 There are actually 4 different maps in that file.  If you load it into
 Workbench, the name associated with each map tells you what each map is.

 cheers,
 -MH

 --
 Michael Harms, Ph.D.
 ---
 Conte Center for the Neuroscience of Mental Disorders
 Washington University School of Medicine
 Department of Psychiatry, Box 8134
 660 South Euclid Ave.    Tel: 314-747-6173
 St. Louis, MO  63110Email: mha...@wustl.edu

 From:  on behalf of David
 Hartman 
 Date: Monday, July 17, 2017 at 12:21 PM
 To: "hcp-users@humanconnectome.org" 
 Subject: Re: [HCP-Users] mapping HCP data into 7 functional networks
 (using Thomas Yeo parcellation)

 Hi,



 *Background:*

 Regarding the labels file, “RSN-networks.32k_fs_LR.dlabel.nii” which I
 thought should contain the 7 and 17 network parcellation, this file has a
 matrix of 

Re: [HCP-Users] mapping HCP data into 7 functional networks (using Thomas Yeo parcellation)

2017-07-17 Thread David Hartman
Yes it does seem the first of four columns has 8 distinct numbers.

As a side note, is there any released HCP resting state data on a 164k mesh
or are they all 32k mesh?

Thank you,
David Hartman

On Mon, Jul 17, 2017 at 6:19 PM, Timothy Coalson  wrote:

> You can get a text file containing the label names and keys for a map by
> using -cifti-label-export-table.  It appears that this will have extraneous
> labels in it due to how the file was generated, but you can ignore the key
> values that aren't used.
>
> If you count the number of unique values in one column of the cifti file
> you opened, you will probably get the number you are expecting.  It is
> simply that the values are not 1-7, but an arbitrary set of integers.
>
> The -cifti-parcellate command will automatically use the first map in the
> label file to compute within-label averages for its labels.  It is probably
> not what you want to use to simply examine the label map.  I'm not sure why
> it ended up with 8 rather than 7 parcels, though.  As a side effect, you
> could use "wb_command -cifti-parcel-mapping-to-label" to generate a label
> file that should be much less cluttered, but we may have better advice on
> this soon...
>
> Tim
>
>
> On Mon, Jul 17, 2017 at 4:51 PM, David Hartman 
> wrote:
>
>> Since calling in MATLAB: “ciftiopen('RSN-networks.32k_fs_LR.dlabel.nii',
>> 'workbench\bin_windows64\wb_command.exe')” gives me a matrix with 4
>> columns whose range is beyond the 17 or 7 corresponding to their functional
>> grouping, I have resorted to using wb-command hoping to get right
>> labelling.
>>
>>
>>
>> *Question:* Which wb-command should I use to read the labels and see for
>> the 32k mesh which networks the nodes belong to (ie. 1 to 7)?
>>
>>
>>
>> I tried using wb_command -cifti-parcellate
>> rfMRI_REST1_LR_Atlas_MSMAll.dtseries.nii RSN-networks.32k_fs_LR.dlabel.nii
>> COLUMN out.ptseries.nii,
>> but out.ptseries.nii returns a matrix 8×1200. But I am hoping for
>> something  32k×1 for the correct labelling, where each row is a number
>> between 1 and 7 or 17 corresponding to the group the node belongs to.
>>
>>
>>
>>
>>
>> Hope it is not too confusing.
>>
>>
>>
>> Thank you,
>>
>> David Hartman
>>
>> On Mon, Jul 17, 2017 at 3:09 PM, Harms, Michael  wrote:
>>
>>>
>>> Hi,
>>> There are actually 4 different maps in that file.  If you load it into
>>> Workbench, the name associated with each map tells you what each map is.
>>>
>>> cheers,
>>> -MH
>>>
>>> --
>>> Michael Harms, Ph.D.
>>> ---
>>> Conte Center for the Neuroscience of Mental Disorders
>>> Washington University School of Medicine
>>> Department of Psychiatry, Box 8134
>>> 660 South Euclid Ave. Tel: 314-747-6173
>>> St. Louis, MO  63110 Email: mha...@wustl.edu
>>>
>>> From:  on behalf of David
>>> Hartman 
>>> Date: Monday, July 17, 2017 at 12:21 PM
>>> To: "hcp-users@humanconnectome.org" 
>>> Subject: Re: [HCP-Users] mapping HCP data into 7 functional networks
>>> (using Thomas Yeo parcellation)
>>>
>>> Hi,
>>>
>>>
>>>
>>> *Background:*
>>>
>>> Regarding the labels file, “RSN-networks.32k_fs_LR.dlabel.nii” which I
>>> thought should contain the 7 and 17 network parcellation, this file has a
>>> matrix of size 64984×4. What do the numbers in the 4 columns represent (ie.
>>> 1st column has a max of 44 and 4th column a max of 26). I was expecting
>>> a single column that took values from 1 to 17 or 1 to 7 mapping each vertex
>>> to its grouping in the functional networks.
>>>
>>>
>>>
>>>
>>>
>>> *Question:*
>>>
>>> How should I understand these 4 columns and their connection to
>>> functional network parcellation?
>>>
>>>
>>>
>>> Thank you,
>>>
>>> David Hartman
>>>
>>> On Fri, Jul 14, 2017 at 2:27 PM, David Hartman 
>>> wrote:
>>>
 Hi,



 *Background:*

 Regarding the parcellation of the cortex into functional networks (“The
 organization of the human cerebral cortex estimated by intrinsic functional
 connectivity,” Yeo et al.) Yeo breaks up the cortex into 7 networks.
 However, his cortical data has 163842 vertices, while the HCP data only has
 59412 vertices.



 *Question:*

 I am looking to map the HCP data into these 7 networks, but I don’t see
 a way to get the data into the same format as Yeo’s data (ie. 163842
 vertices) to use his mapping.

 1.  Does anyone know of a way to convert HCP data into the same format
 as Yeo’s data to use his mapping or a direct way to map the HCP data to 7
 networks?



 Any help would be much appreciated.



 Thank you,

 David Hartman



Re: [HCP-Users] mapping HCP data into 7 functional networks (using Thomas Yeo parcellation)

2017-07-17 Thread David Hartman
Since calling in MATLAB: “ciftiopen('RSN-networks.32k_fs_LR.dlabel.nii',
'workbench\bin_windows64\wb_command.exe')” gives me a matrix with 4 columns
whose values range beyond the 7 or 17 corresponding to the functional
groupings, I have resorted to using wb_command, hoping to get the right
labelling.



*Question:* Which wb_command operation should I use to read the labels and
see, for the 32k mesh, which networks the nodes belong to (i.e. 1 to 7)?



I tried using wb_command -cifti-parcellate
rfMRI_REST1_LR_Atlas_MSMAll.dtseries.nii  RSN-networks.32k_fs_LR.dlabel.nii
COLUMN out.ptseries.nii, but out.ptseries.nii returns an 8×1200 matrix.  I am
hoping for something 32k×1 with the correct labelling, where each row is a
number between 1 and 7 (or 1 and 17) corresponding to the group the node
belongs to.





Hope it is not too confusing.



Thank you,

David Hartman

On Mon, Jul 17, 2017 at 3:09 PM, Harms, Michael  wrote:

>
> Hi,
> There are actually 4 different maps in that file.  If you load it into
> Workbench, the name associated with each map tells you what each map is.
>
> cheers,
> -MH
>
> --
> Michael Harms, Ph.D.
> ---
> Conte Center for the Neuroscience of Mental Disorders
> Washington University School of Medicine
> Department of Psychiatry, Box 8134
> 660 South Euclid Ave. Tel: 314-747-6173
> St. Louis, MO  63110 Email: mha...@wustl.edu
>
> From:  on behalf of David Hartman <
> dhartman1...@gmail.com>
> Date: Monday, July 17, 2017 at 12:21 PM
> To: "hcp-users@humanconnectome.org" 
> Subject: Re: [HCP-Users] mapping HCP data into 7 functional networks
> (using Thomas Yeo parcellation)
>
> Hi,
>
>
>
> *Background:*
>
> Regarding the labels file, “RSN-networks.32k_fs_LR.dlabel.nii” which I
> thought should contain the 7 and 17 network parcellation, this file has a
> matrix of size 64984×4. What do the numbers in the 4 columns represent (ie.
> 1st column has a max of 44 and 4th column a max of 26). I was expecting a
> single column that took values from 1 to 17 or 1 to 7 mapping each vertex
> to its grouping in the functional networks.
>
>
>
>
>
> *Question:*
>
> How should I understand these 4 columns and their connection to functional
> network parcellation?
>
>
>
> Thank you,
>
> David Hartman
>
> On Fri, Jul 14, 2017 at 2:27 PM, David Hartman 
> wrote:
>
>> Hi,
>>
>>
>>
>> *Background:*
>>
>> Regarding the parcellation of the cortex into functional networks (“The
>> organization of the human cerebral cortex estimated by intrinsic functional
>> connectivity,” Yeo et al.) Yeo breaks up the cortex into 7 networks.
>> However, his cortical data has 163842 vertices, while the HCP data only has
>> 59412 vertices.
>>
>>
>>
>> *Question:*
>>
>> I am looking to map the HCP data into these 7 networks, but I don’t see a
>> way to get the data into the same format as Yeo’s data (ie. 163842
>> vertices) to use his mapping.
>>
>> 1.  Does anyone know of a way to convert HCP data into the same format
>> as Yeo’s data to use his mapping or a direct way to map the HCP data to 7
>> networks?
>>
>>
>>
>> Any help would be much appreciated.
>>
>>
>>
>> Thank you,
>>
>> David Hartman
>>
>>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] mapping HCP data into 7 functional networks (using Thomas Yeo parcellation)

2017-07-17 Thread Harms, Michael

Hi,
There are actually 4 different maps in that file.  If you load it into 
Workbench, the name associated with each map tells you what each map is.
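
A quick way to see those map names without opening the GUI is the file
information command, e.g. (using the file name from this thread):

wb_command -file-information RSN-networks.32k_fs_LR.dlabel.nii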

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: David Hartman
Date: Monday, July 17, 2017 at 12:21 PM
To: "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] mapping HCP data into 7 functional networks (using 
Thomas Yeo parcellation)


Hi,



Background:

Regarding the labels file, “RSN-networks.32k_fs_LR.dlabel.nii” which I thought 
should contain the 7 and 17 network parcellation, this file has a matrix of 
size 64984×4. What do the numbers in the 4 columns represent (ie. 1st column 
has a max of 44 and 4th column a max of 26). I was expecting a single column 
that took values from 1 to 17 or 1 to 7 mapping each vertex to its grouping in 
the functional networks.





Question:

How should I understand these 4 columns and their connection to functional 
network parcellation?



Thank you,

David Hartman

On Fri, Jul 14, 2017 at 2:27 PM, David Hartman wrote:

Hi,



Background:

Regarding the parcellation of the cortex into functional networks (“The 
organization of the human cerebral cortex estimated by intrinsic functional 
connectivity,” Yeo et al.) Yeo breaks up the cortex into 7 networks. However, 
his cortical data has 163842 vertices, while the HCP data only has 59412 
vertices.



Question:

I am looking to map the HCP data into these 7 networks, but I don’t see a way 
to get the data into the same format as Yeo’s data (ie. 163842 vertices) to use 
his mapping.

1.  Does anyone know of a way to convert HCP data into the same format as Yeo’s 
data to use his mapping or a direct way to map the HCP data to 7 networks?



Any help would be much appreciated.



Thank you,

David Hartman




___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] diffusion data merge pipeline

2017-07-17 Thread Harms, Michael

Hi,
Is there a particular reason that you can’t provide all the dMRI scans at once, 
and let the pipeline handle the merging for you?
If you process each dMRI run separately, then the individual runs will not be 
in optimal alignment.  (You would be relying on the registration of each run to 
the T1, rather than registering the dMRI directly to each other as part of 
‘eddy’).
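
If your pipeline version supports it, the runs can be passed as @-separated
lists in a single call, along these lines (a sketch; file names are
hypothetical, and which of AP/PA counts as "positive" for --PEdir=2 should be
verified against the pipeline documentation):

${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh \
  --posData="dMRI_dir98_PA.nii.gz@dMRI_dir99_PA.nii.gz" \
  --negData="dMRI_dir98_AP.nii.gz@dMRI_dir99_AP.nii.gz" \
  --path="${StudyFolder}" \
  --subject="${Subject}" \
  --echospacing="${EchoSpacing}" \
  --PEdir=2 \
  --gdcoeffs="NONE" \
  --dwiname="Diffusion" \
  --printcom=""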

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: Yeun Kim
Date: Monday, July 17, 2017 at 1:32 PM
To: "Glasser, Matthew"
Cc: "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] diffusion data merge pipeline

I am using the following command, looped over the unique sets of gradient
tables (i.e. it runs twice, once for dir98 and once for dir99):
${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh  \
  --posData="{posData}" \
  --negData="{negData}"  \
  --path="{path}" \
  --subject="{subject}"  \
  --echospacing="{echospacing}"  \
  --PEdir={PEdir}  \
  --gdcoeffs="NONE"  \
  --dwiname="{dwiname}"  \
  --printcom=""

Where:
$posData = diffusion data in the positive direction
$negData = diffusion data in the negative direction
$path = output directory path
$echospacing = echospacing
$PEdir = 2
$dwiname = e.g. Diffusion_dir-98_run-01


FYI: I'm using HCPPipelines v3.17.

-

Technical details:

I run ${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh in a Docker 
container with the following python code. It is looped through the pairs of 
unique sets of gradient tables (i.e. loops twice for dir99 and dir98) and set 
to process in parallel:

dwi_stage_dict = OrderedDict([("DiffusionPreprocessing",
    partial(run_diffusion_processsing,
            posData=pos,
            negData=neg,
            path=args.output_dir,
            subject="sub-%s" % subject_label,
            echospacing=echospacing,
            PEdir=PEdir,
            gdcoeffs="NONE",
            dwiname=dwiname,
            n_cpus=args.n_cpus))])
for stage, stage_func in dwi_stage_dict.iteritems():
    if stage in args.stages:
        Process(target=stage_func).start()

On Mon, Jul 17, 2017 at 11:15 AM, Glasser, Matthew wrote:
The pipeline is capable of doing the merge for you if you want.  Can you post 
how you called the diffusion pipeline?

Peace,

Matt.

From: Yeun Kim
Date: Monday, July 17, 2017 at 1:12 PM
To: Matt Glasser
Cc: "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] diffusion data merge pipeline

When I run DiffusionPreprocessing, I make the --dwiname=DWIName specific to the
diffusion scan (e.g. DWIName=Diffusion_dir-98_run-01) to prevent files from
being overwritten.
I end up with:
${StudyFolder}/${Subject}/T1w/Diffusion_dir-98_run-01/data.nii.gz
${StudyFolder}/${Subject}/T1w/Diffusion_dir-99_run-01/data.nii.gz

I would like to combine the two data.nii.gz files.

On Mon, Jul 17, 2017 at 10:58 AM, Glasser, Matthew wrote:
Look for the ${StudyFolder}/${Subject}/T1w/Diffusion/data.nii.gz file.

Peace,

Matt.

From: Yeun Kim
Date: Monday, July 17, 2017 at 12:56 PM
To: 

Re: [HCP-Users] diffusion data merge pipeline

2017-07-17 Thread Yeun Kim
I am using the following command, looped over the unique sets of gradient
tables (i.e. it runs twice, once for dir98 and once for dir99):
${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh  \
  --posData="{posData}" \
  --negData="{negData}"  \
  --path="{path}" \
  --subject="{subject}"  \
  --echospacing="{echospacing}"  \
  --PEdir={PEdir}  \
  --gdcoeffs="NONE"  \
  --dwiname="{dwiname}"  \
  --printcom=""

Where:
$posData = diffusion data in the positive direction
$negData = diffusion data in the negative direction
$path = output directory path
$echospacing = echospacing
$PEdir = 2
$dwiname = e.g. Diffusion_dir-98_run-01


FYI: I'm using HCPPipelines v3.17.

-

Technical details:

I run ${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh in a
Docker container with the following python code. It is looped through the
pairs of unique sets of gradient tables (i.e. loops twice for dir99 and
dir98) and set to process in parallel:

dwi_stage_dict = OrderedDict([("DiffusionPreprocessing",
    partial(run_diffusion_processsing,
            posData=pos,
            negData=neg,
            path=args.output_dir,
            subject="sub-%s" % subject_label,
            echospacing=echospacing,
            PEdir=PEdir,
            gdcoeffs="NONE",
            dwiname=dwiname,
            n_cpus=args.n_cpus))])
for stage, stage_func in dwi_stage_dict.iteritems():
    if stage in args.stages:
        Process(target=stage_func).start()

On Mon, Jul 17, 2017 at 11:15 AM, Glasser, Matthew 
wrote:

> The pipeline is capable of doing the merge for you if you want.  Can you
> post how you called the diffusion pipeline?
>
> Peace,
>
> Matt.
>
> From: Yeun Kim 
> Date: Monday, July 17, 2017 at 1:12 PM
> To: Matt Glasser 
> Cc: "hcp-users@humanconnectome.org" 
> Subject: Re: [HCP-Users] diffusion data merge pipeline
>
> When I run DiffusionPreprocessing, I make the --dwiname=DWIName specific
> to the diffusion scan (i.e. DWIName= Diffusion_dir-98_run-01) to prevent
> files from being overwritten.
> I end up with:
> ${StudyFolder}/${Subject}/T1w/Diffusion_dir-98_run-01/data.nii.gz
> ${StudyFolder}/${Subject}/T1w/Diffusion_dir-99_run-01/data.nii.gz
>
> I would like to combine the two data.nii.gz files.
>
> On Mon, Jul 17, 2017 at 10:58 AM, Glasser, Matthew 
> wrote:
>
>> Look for the ${StudyFolder}/${Subject}/T1w/Diffusion/data.nii.gz file.
>>
>> Peace,
>>
>> Matt.
>>
>> From:  on behalf of Yeun Kim <
>> yeun...@gmail.com>
>> Date: Monday, July 17, 2017 at 12:56 PM
>> To: "hcp-users@humanconnectome.org" 
>> Subject: [HCP-Users] diffusion data merge pipeline
>>
>> Hi,
>>
>> We have the diffusion scans:
>> dMRI_dir98_AP, dMRI_dir98_PA
>> dMRI_dir99_AP, dMRI_dir99_PA
>>
>> in which there is a pair of phase encoding directions (AP,PA) and two
>> sets of different diffusion weighting directions (dir98 and dir99).
>>
>> After running the DiffusionPreprocessing module of the HCP minimal
>> preprocessing pipeline, I would like to merge the processed dMRI_dir98 and
>> dMRI_dir99 data. Do you have any suggestions on how to perform this step?
>> Also, are there any workflows developed by HCP for
>> post-DiffusionPreprocessing?
>>
>> Thank you,
>> Yeun
>>
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] diffusion data merge pipeline

2017-07-17 Thread Yeun Kim
When I run DiffusionPreprocessing, I make the --dwiname=DWIName specific to
the diffusion scan (e.g. DWIName=Diffusion_dir-98_run-01) to prevent files
from being overwritten.
I end up with:
${StudyFolder}/${Subject}/T1w/Diffusion_dir-98_run-01/data.nii.gz
${StudyFolder}/${Subject}/T1w/Diffusion_dir-99_run-01/data.nii.gz

I would like to combine the two data.nii.gz files.
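
If you still need to combine the two already-processed runs (keeping in mind
the alignment caveat raised elsewhere in this thread - rerunning the pipeline
with all runs at once is preferable), a rough sketch with FSL tools would be
(paths abbreviated, combined output directory assumed to exist):

fslmerge -t Diffusion_combined/data.nii.gz \
  Diffusion_dir-98_run-01/data.nii.gz \
  Diffusion_dir-99_run-01/data.nii.gz
# concatenate the gradient tables in the same order as the image volumes
paste -d' ' Diffusion_dir-98_run-01/bvals Diffusion_dir-99_run-01/bvals > Diffusion_combined/bvals
paste -d' ' Diffusion_dir-98_run-01/bvecs Diffusion_dir-99_run-01/bvecs > Diffusion_combined/bvecs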

On Mon, Jul 17, 2017 at 10:58 AM, Glasser, Matthew 
wrote:

> Look for the ${StudyFolder}/${Subject}/T1w/Diffusion/data.nii.gz file.
>
> Peace,
>
> Matt.
>
> From:  on behalf of Yeun Kim <
> yeun...@gmail.com>
> Date: Monday, July 17, 2017 at 12:56 PM
> To: "hcp-users@humanconnectome.org" 
> Subject: [HCP-Users] diffusion data merge pipeline
>
> Hi,
>
> We have the diffusion scans:
> dMRI_dir98_AP, dMRI_dir98_PA
> dMRI_dir99_AP, dMRI_dir99_PA
>
> in which there is a pair of phase encoding directions (AP,PA) and two sets
> of different diffusion weighting directions (dir98 and dir99).
>
> After running the DiffusionPreprocessing module of the HCP minimal
> preprocessing pipeline, I would like to merge the processed dMRI_dir98 and
> dMRI_dir99 data. Do you have any suggestions on how to perform this step?
> Also, are there any workflows developed by HCP for
> post-DiffusionPreprocessing?
>
> Thank you,
> Yeun
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] diffusion data merge pipeline

2017-07-17 Thread Glasser, Matthew
Look for the ${StudyFolder}/${Subject}/T1w/Diffusion/data.nii.gz file.

Peace,

Matt.

From: Yeun Kim
Date: Monday, July 17, 2017 at 12:56 PM
To: "hcp-users@humanconnectome.org"
Subject: [HCP-Users] diffusion data merge pipeline

Hi,

We have the diffusion scans:
dMRI_dir98_AP, dMRI_dir98_PA
dMRI_dir99_AP, dMRI_dir99_PA

in which there is a pair of phase encoding directions (AP,PA) and two sets of 
different diffusion weighting directions (dir98 and dir99).

After running the DiffusionPreprocessing module of the HCP minimal 
preprocessing pipeline, I would like to merge the processed dMRI_dir98 and 
dMRI_dir99 data. Do you have any suggestions on how to perform this step? Also, 
are there any workflows developed by HCP for post-DiffusionPreprocessing?

Thank you,
Yeun

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



Re: [HCP-Users] Questions about reconstruction speed of Multi-band EPI sequence in LSCMRR

2017-07-17 Thread Harms, Michael

Hi,
You want the high-end recon computer with the GPU.

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: HMZ
Date: Monday, July 17, 2017 at 8:20 AM
To: Michael Harms
Cc: "Glasser, Matthew", "hcp-users@humanconnectome.org", 葛鉴桥, Jia-Hong Gao, 门卫伟
Subject: Re: [HCP-Users] Questions about reconstruction speed of Multi-band EPI 
sequence in LSCMRR

Dear HCP team,

Thanks a lot for your suggestion and help.

I have tested the reconstruction speed with both head coils. Our protocol is 
similar to "LSCMRR_3T_printout_2014.08.15" (similar matrix, voxel size, slice 
count, and TR). The reconstruction speed is 1.1 s per volume with the 64-channel 
coil and 0.9 s with the 32-channel coil, both slower than the TR (0.73 s).

We just have the chance to upgrade our Siemens Prisma from VD13D to VE11C, as 
recommended by Siemens, and we will upgrade the reconstruction computer at the 
same time. Siemens offers us two kinds of MR workplace (image reconstruction 
computer), standard and high-end, described below. I would like to know which 
kind of reconstruction workplace you are using. If neither, did you modify the 
reconstruction computer yourselves?

I would be very grateful for more detailed information about your Prisma 
reconstruction specifications, so that we can compare and decide which MR 
workplace is suitable for running an HCP-style protocol.

A. The standard Computer HW upgrade kit for the syngo MR Workplace includes:
1. a syngo MR Workplace with 1 x Intel Xeon Quad Core CPU / 3.6 GHz, 8 GB 
RAM,
2. one 300 GB system hard disk,
3. one 300 GB hard disk for image data and
4. one CD-R/DVD-R drive for image storage.


B. The high-end image reconstruction computer has the following specifications:

≥ 2x Intel W5690 (hexacore) processors 3.46 GHz

≥ 128 GB Main Memory (RAM)

≥ 750 GB Hard disk for raw data

≥ 100 GB Hard disk for system software

Tesla GPGPU

By the way, the following is the acquisition workplace offered by Siemens. I 
would like to know whether it is adequate for HCP-style acquisition. Thanks a lot!
The Computer HW upgrade kit for the syngo Acquisition Workplace includes:
1. a syngo Acquisition Workplace with 1 x Intel Xeon Quad Core CPU / 3.6 
GHz, 32 GB RAM,
2. one 300 GB system hard disk,
3. one 300 GB hard disk for database,
4. one 300 GB hard disk for image data and
5. one CD-R/DVD-R drive for image storage,

Any relevant information and idea would help a lot. Thank you very much!

Looking forward to your reply.

--
Meizhen Han
PhD Candidate
Center for MRI Research
Peking University
Beijing, China

At 2017-07-01 00:03:22, "Harms, Michael" wrote:

We have a more recent (and importable) protocol available here that I would 
suggest you use as a starting point:
http://protocols.humanconnectome.org/CCF/

Your Prisma should already come with GPUs.  Are you using the 64 channel coil?  
That will be a slower recon than the 32 ch coil, but I believe the recon still 
shouldn’t get too far behind even with the 64 ch coil.

Try importing the VD13D version of the protocol available above, and compare 
the recon times for both the 32 and 64 ch coils, if you have both available.

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: "Glasser, Matthew"
Date: Friday, June 30, 2017 at 5:52 AM
To: HMZ, "hcp-users@humanconnectome.org"
Cc: 葛鉴桥, 周思中
Subject: Re: [HCP-Users] Questions about reconstruction speed of Multi-band EPI 
sequence in LSCMRR

I believe we use GPUs to speed this up.

Peace,

Matt.

From: HMZ