Re: [HCP-Users] mapping HCP data into 7 functional networks (using Thomas Yeo parcellation)

2017-07-18 Thread Glasser, Matthew
There are folks working on this.  The Yeo parcellation was made using a
different surface registration, and it is also a “winner-take-all” clustering
parcellation rather than a gradient-based parcellation.  This means that
relatively subtle differences in functional connectivity can occur on either
side of some borders.  The parcellation does, however, agree with the resting
state gradients found in HCP data (i.e. parcels do not cross regions of
strongly differing functional connectivity).

Peace,

Matt.

From: David Hartman
Date: Tuesday, July 18, 2017 at 6:45 PM
To: Matt Glasser
Cc: "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] mapping HCP data into 7 functional networks (using 
Thomas Yeo parcellation)

I noticed that the parcellation of the nodes into 360 ROIs (Glasser 2016) does 
not "fit" well into networks (i.e., the Yeo parcellation): there are nodes in a 
given ROI that belong to multiple networks. Are you aware of any work that has 
tried to bridge your ROI parcellation with some of the network parcellations, 
or is this something any of the folks at HCP have explored?

Thank you,
David Hartman

On Mon, Jul 17, 2017 at 7:55 PM, Timothy Coalson wrote:
As Matt says, 32k is used to roughly match the acquisition resolution.  The 7T 
data at 1.6mm is on a 59k mesh for the same reasons.  If you want to resample 
them, see the -cifti-resample command (or -metric-resample for a simpler 
command that does the same surface operation on a different format).

Side note though, in order to get the data onto 32k or 59k, we do in fact first 
map the volume data to the native freesurfer mesh for the subject (~140k if 
memory serves), in order to use the surfaces that contain the original folding 
detail (even though 32k surfaces still have very good folding detail).  After 
that, we downsample them on the surface to the more sensible resolution.

Tim


On Mon, Jul 17, 2017 at 6:44 PM, Glasser, Matthew wrote:
No that would be a massive oversampling of the data.  The data are acquired at 
2mm isotropic, which is roughly the vertex spacing of the 32k mesh.  164k files 
would be much larger for no benefit.  If you want to upsample particular maps 
to 164k (or downsample something to 32k) that is trivial to do.

Peace,

Matt.

From:  on behalf of David Hartman
Date: Monday, July 17, 2017 at 6:24 PM
To: Timothy Coalson
Cc: "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] mapping HCP data into 7 functional networks (using 
Thomas Yeo parcellation)

Yes, it does seem the first of the four columns has 8 distinct numbers.

As a side note, is there any released HCP resting state data on a 164k mesh or 
are they all 32k mesh?

Thank you,
David Hartman

On Mon, Jul 17, 2017 at 6:19 PM, Timothy Coalson wrote:
You can get a text file containing the label names and keys for a map by using 
-cifti-label-export-table.  It appears that this will have extraneous labels in 
it due to how the file was generated, but you can ignore the key values that 
aren't used.

If you count the number of unique values in one column of the cifti file you 
opened, you will probably get the number you are expecting.  It is simply that 
the values are not 1-7, but an arbitrary set of integers.

The -cifti-parcellate command will automatically use the first map in the label 
file to compute within-label averages for its labels.  It is probably not what 
you want to use to simply examine the label map.  I'm not sure why it ended up 
with 8 rather than 7 parcels, though.  As a side effect, you could use 
"wb_command -cifti-parcel-mapping-to-label" to generate a label file that 
should be much less cluttered, but we may have better advice on this soon...

Tim
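Tim's point about arbitrary integer keys can be sketched as follows. This is a
toy Python example with made-up keys and an excerpt-style label table; the real
key-to-name mapping comes from `wb_command -cifti-label-export-table`:

```python
# Sketch (hypothetical keys/names): map the arbitrary integer keys of a
# dlabel map to 1..7 network indices, using the key -> name table that
# "wb_command -cifti-label-export-table" writes out.
label_table = {0: "FreeSurfer_Defined_Medial_Wall",
               3: "7Networks_1", 12: "7Networks_2"}  # excerpt, made-up keys
key_to_net = {key: int(name.split("_")[-1])
              for key, name in label_table.items()
              if name.startswith("7Networks_")}
column = [3, 3, 12, 0, 12]                     # hypothetical per-vertex keys
nets = [key_to_net.get(k, 0) for k in column]  # 0 = medial wall / unlabeled
print(nets)  # [1, 1, 2, 0, 2]
```

This yields the 32k-length vector of 1-to-7 assignments David is after, once
the real exported table is substituted for the toy one.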


On Mon, Jul 17, 2017 at 4:51 PM, David Hartman wrote:

Since calling in MATLAB:
“ciftiopen('RSN-networks.32k_fs_LR.dlabel.nii', 'workbench\bin_windows64\wb_command.exe')”
gives me a matrix with 4 columns whose range is beyond the 17 or 7 
corresponding to their functional grouping, I have resorted to using 
wb_command, hoping to get the right labelling.

Question: Which wb_command should I use to read the labels and see, for the 32k 
mesh, which networks the nodes belong to (i.e., 1 to 7)?

I tried using wb_command -cifti-parcellate 
rfMRI_REST1_LR_Atlas_MSMAll.dtseries.nii  RSN-networks.32k_fs_LR.dlabel.nii 
COLUMN 

Re: [HCP-Users] mapping HCP data into 7 functional networks (using Thomas Yeo parcellation)

2017-07-18 Thread David Hartman
I noticed that the parcellation of the nodes into 360 ROIs (Glasser 2016)
does not "fit" well into networks (ie, Yeo parcellation). There are nodes
in a given ROI that belong to multiple networks. Are you aware of any work
that has tried to bridge your ROI parcellation with some of the network
parcellations or is this something any of the folks at HCP have explored?

Thank you,
David Hartman

>>> On Mon, Jul 17, 2017 at 4:51 PM, David Hartman wrote:
>>>
 Since calling in MATLAB:
 “ciftiopen('RSN-networks.32k_fs_LR.dlabel.nii', 'workbench\bin_windows64\wb_command.exe')”
 gives me a matrix with 4 columns whose range is beyond the 17 or 7
 corresponding to their functional grouping, I have resorted to using
 wb_command, hoping to get the right labelling.



 *Question:* Which wb_command should I use to read the labels and see,
 for the 32k mesh, which networks the nodes belong to (i.e., 1 to 7)?



 I tried using wb_command -cifti-parcellate
 rfMRI_REST1_LR_Atlas_MSMAll.dtseries.nii
 RSN-networks.32k_fs_LR.dlabel.nii COLUMN out.ptseries.nii, but
 out.ptseries.nii returns an 8×1200 matrix. I am hoping for something
 32k×1 for the correct labelling, where each row is a number between 1
 and 7 (or 17) corresponding to the group the node belongs to.





 Hope it is not too confusing.



 Thank you,

 David Hartman

 On Mon, Jul 17, 2017 at 3:09 PM, Harms, Michael wrote:

>
> Hi,
> There are actually 4 different maps in that file.  If you load it into
> Workbench, the name associated with each map tells you what each map is.
>
> cheers,
> -MH
>
> --
> Michael Harms, Ph.D.
> ---
> Conte Center for the Neuroscience of Mental Disorders
> Washington University School of Medicine
> Department of Psychiatry, Box 8134
> 660 South Euclid Ave.  Tel: 314-747-6173
> St. Louis, 

Re: [HCP-Users] diffusion data merge pipeline

2017-07-18 Thread Glasser, Matthew
We really don’t recommend that you do that.  I would ask about eddy_cuda on the 
FSL or NeuroDebian lists.

Peace,

Matt.

From: Yeun Kim
Date: Tuesday, July 18, 2017 at 4:11 PM
To: "Harms, Michael"
Cc: Matt Glasser, "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] diffusion data merge pipeline

Thanks for the reply.
I would like to reduce the computing time for DiffusionPreprocessing; feeding 
in both sets of dMRIs takes about 24.16 hours to run, while parallel processing 
takes 10.72 hours.
I've been trying to retrieve eddy_cuda, but I can't find it in the NeuroDebian 
packages. Where can I find pre-compiled binaries of eddy_cuda?
The platform I am using in my Docker image is Ubuntu 14.04 LTS.

Thank you again,
Yeun

On Mon, Jul 17, 2017 at 11:57 AM, Harms, Michael wrote:

Hi,
Is there a particular reason that you can’t provide all the dMRI scans at once, 
and let the pipeline handle the merging for you?
If you process each dMRI run separately, then the individual runs will not be 
in optimal alignment.  (You would be relying on the registration of each run to 
the T1, rather than registering the dMRI directly to each other as part of 
‘eddy’).

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.  Tel: 314-747-6173
St. Louis, MO  63110  Email: mha...@wustl.edu
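Michael's suggestion of providing all the dMRI scans at once might look like
the sketch below. The filenames are hypothetical, and the use of "@" as the
list separator for --posData/--negData is my assumption about the HCP Pipelines
interface, to be checked against the pipeline documentation:

```shell
# Hypothetical filenames; '@' as the list separator is an assumption.
posData="dMRI_dir98_AP.nii.gz@dMRI_dir99_AP.nii.gz"
negData="dMRI_dir98_PA.nii.gz@dMRI_dir99_PA.nii.gz"
# ${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh \
#   --posData="${posData}" --negData="${negData}" ...  (other flags as before)
echo "${posData}"
```

With a single call like this, 'eddy' sees all runs together, so the runs are
registered directly to each other rather than only via the T1.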

From:  on behalf of Yeun Kim
Date: Monday, July 17, 2017 at 1:32 PM
To: "Glasser, Matthew"
Cc: "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] diffusion data merge pipeline

I am using the following command (it is looped through the pairs of unique 
sets of gradient tables, i.e. it loops twice, once for dir99 and once for dir98):
${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh \
  --posData="{posData}" \
  --negData="{negData}" \
  --path="{path}" \
  --subject="{subject}" \
  --echospacing="{echospacing}" \
  --PEdir={PEdir} \
  --gdcoeffs="NONE" \
  --dwiname="{dwiname}" \
  --printcom=""

Where:
$posData = diffusion data in the positive direction
$negData = diffusion data in the negative direction
$path = output directory path
$echospacing = echospacing
$PEdir = 2
$dwiname = e.g. Diffusion_dir-98_run-01


FYI: I'm using HCPPipelines v3.17.

-

Technical details:

I run ${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh in a Docker 
container with the following python code. It is looped through the pairs of 
unique sets of gradient tables (i.e. loops twice for dir99 and dir98) and set 
to process in parallel:

dwi_stage_dict = OrderedDict([("DiffusionPreprocessing",
                               partial(run_diffusion_processsing,
                                       posData=pos,
                                       negData=neg,
                                       path=args.output_dir,
                                       subject="sub-%s" % subject_label,
                                       echospacing=echospacing,
                                       PEdir=PEdir,
                                       gdcoeffs="NONE",
                                       dwiname=dwiname,
                                       n_cpus=args.n_cpus))])
for stage, stage_func in dwi_stage_dict.iteritems():
    if stage in args.stages:
        Process(target=stage_func).start()
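For reference, a minimal self-contained version of the stage-dispatch pattern
above (the function and stage names are simplified stand-ins; note that Python
2's dict.iteritems() becomes dict.items() in Python 3):

```python
from collections import OrderedDict
from functools import partial

def run_stage(name, n_cpus):
    # Stand-in for the real diffusion-preprocessing call; just reports
    # what it would do instead of launching the pipeline.
    return "%s with %d cpus" % (name, n_cpus)

# Map each stage name to a zero-argument callable, as in the snippet above.
stages = OrderedDict([("DiffusionPreprocessing",
                       partial(run_stage, "DiffusionPreprocessing", n_cpus=4))])
requested = ["DiffusionPreprocessing"]

outputs = []
for stage, stage_func in stages.items():  # iteritems() in Python 2
    if stage in requested:
        outputs.append(stage_func())      # the original wraps this in Process()
print(outputs[0])  # DiffusionPreprocessing with 4 cpus
```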

On Mon, Jul 17, 2017 at 11:15 AM, Glasser, Matthew wrote:
The pipeline is capable of doing the merge for you if you want.  Can you post 
how you called the diffusion pipeline?

Peace,

Matt.

From: Yeun Kim

Re: [HCP-Users] diffusion data merge pipeline

2017-07-18 Thread Yeun Kim
Thanks for the reply.
I would like to reduce the computing time for DiffusionPreprocessing;
feeding in both sets of dMRIs takes about 24.16 hours to run, while
parallel processing takes 10.72 hours.
I've been trying to retrieve the eddy_cuda version, but I can't find it in
the neurodebian packages. Where can I find pre-compiled binaries of
eddy_cuda?
The platform I am using in my Docker is Ubuntu 14.04 LTS.

Thank you again,
Yeun

> On Mon, Jul 17, 2017 at 11:15 AM, Glasser, Matthew wrote:
>
>> The pipeline is capable of doing the merge for you if you want.  Can you
>> post how you called the diffusion pipeline?
>>
>> Peace,
>>
>> Matt.
>>
>> From: Yeun Kim 
>> Date: Monday, July 17, 2017 at 1:12 PM
>> To: Matt Glasser 
>> Cc: "hcp-users@humanconnectome.org" 
>> Subject: Re: [HCP-Users] diffusion data merge pipeline
>>
>> When I run DiffusionPreprocessing, I make the --dwiname=DWIName specific
>> to the diffusion scan (i.e. DWIName= Diffusion_dir-98_run-01) to prevent
>> files from being overwritten.
>> I end up with:
>> ${StudyFolder}/${Subject}/T1w/Diffusion_dir-98_run-01/data.nii.gz
>> ${StudyFolder}/${Subject}/T1w/Diffusion_dir-99_run-01/data.nii.gz
>>
>> I would like to combine the two data.nii.gz files.
>>
>> On Mon, Jul 17, 2017 at 10:58 AM, Glasser, Matthew 
>> wrote:
>>
>>> Look for the ${StudyFolder}/${Subject}/T1w/Diffusion/data.nii.gz file.
>>>
>>> Peace,
>>>
>>> Matt.
>>>
>>> From:  on behalf of Yeun Kim <yeun...@gmail.com>
>>> Date: Monday, July 17, 2017 at 12:56 PM
>>> To: "hcp-users@humanconnectome.org" 
>>> Subject: [HCP-Users] diffusion data merge pipeline
>>>
>>> Hi,
>>>
>>> We have the diffusion scans:
>>> dMRI_dir98_AP, dMRI_dir98_PA
>>> dMRI_dir99_AP, dMRI_dir99_PA
>>>
>>> in which there is a pair of phase encoding directions (AP,PA) and two
>>> sets of different diffusion weighting 

Re: [HCP-Users] mapping HCP data into 7 functional networks (using Thomas Yeo parcellation)

2017-07-18 Thread Glasser, Matthew
Please use our CIFTI tools for now (option 2B): 
https://wiki.humanconnectome.org/display/PublicData/HCP+Users+FAQ

We are planning to modify the CIFTI tools that were originally written for 
FieldTrip to work better with brain imaging data in the future.

Peace,

Matt.

From: David Hartman
Date: Tuesday, July 18, 2017 at 12:05 PM
To: Timothy Coalson
Cc: Matt Glasser, "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] mapping HCP data into 7 functional networks (using 
Thomas Yeo parcellation)

Thank you for your responses. My data is indeed on a 32k (per hemisphere) mesh. 
My question pertains to what counts as the medial wall. My impression (but 
please correct me if I am wrong) is that when using the MATLAB command 
ciftiopen there are 64.9k vertices (32k per hemisphere), and when using the 
MATLAB command ft_read_cifti there are ~59.5k vertices. The results from 
ft_read_cifti produce around 5.5k NaNs, corresponding to the discrepancy 
between ciftiopen and ft_read_cifti. My impression is that these ~5.5k NaNs 
are the medial wall vertices.

On the other hand, when using "RSN-networks.32k_fs_LR.dlabel.nii", there are 8 
distinct numbers. From workbench (-cifti-label-export-table) I notice that one 
of these numbers corresponds to FreeSurfer_Defined_Medial_Wall, while the 
other 7 correspond to 7Networks_X.

The NaNs and the FreeSurfer_Defined_Medial_Wall vertices mostly match up. 
However, there is a gap: there is only a ~5k overlap between the ~5.5k NaNs 
and the vertices labeled FreeSurfer_Defined_Medial_Wall. Any ideas why the two 
sets would differ slightly (a mismatch of ~500 vertices)?
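The overlap/mismatch check described above can be sketched with Python set
operations. The vertex indices here are toy values, not real data:

```python
# Toy vertex-index sets standing in for the real ~5.5k-vertex masks.
nan_vertices  = {10, 11, 12, 13, 14}  # came back NaN from ft_read_cifti
wall_vertices = {11, 12, 13, 14, 15}  # labeled FreeSurfer_Defined_Medial_Wall
overlap  = nan_vertices & wall_vertices   # vertices in both masks
mismatch = nan_vertices ^ wall_vertices   # vertices in exactly one mask
print(len(overlap), len(mismatch))  # 4 2
```

Listing the actual mismatch set (rather than just counting it) and viewing
those vertices in Workbench would show where the two medial-wall definitions
disagree.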

Thank you,
David Hartman


Re: [HCP-Users] mapping HCP data into 7 functional networks (using Thomas Yeo parcellation)

2017-07-18 Thread David Hartman
Thank you for your responses. My data is indeed on a 32k (per hemisphere)
mesh. My question pertains to what counts as the medial wall. My impression
(but please correct me if I am wrong) is that when using the MATLAB
command ciftiopen there are 64.9k vertices (32k per hemisphere), and when
using the MATLAB command ft_read_cifti there are ~59.5k vertices. The
results from ft_read_cifti produce around 5.5k NaNs, corresponding to the
discrepancy between ciftiopen and ft_read_cifti. My impression is that
these ~5.5k NaNs are the medial wall vertices.

On the other hand, when using "RSN-networks.32k_fs_LR.dlabel.nii", there
are 8 distinct numbers. From workbench (-cifti-label-export-table) I notice
that one of these numbers corresponds to FreeSurfer_Defined_Medial_Wall,
while the other 7 correspond to 7Networks_X.

The NaNs and the FreeSurfer_Defined_Medial_Wall vertices mostly match up.
However, there is a gap: there is only a ~5k overlap between the ~5.5k NaNs
and the vertices labeled FreeSurfer_Defined_Medial_Wall. Any ideas why the
two sets would differ slightly (a mismatch of ~500 vertices)?

Thank you,
David Hartman


Re: [HCP-Users] CMRR vs MGH multiband/SMS sequences

2017-07-18 Thread Glasser, Matthew
What version of the software is this and what scanner?  Also, what are the 
sequence parameters?

Peace,

Matt.

From:  on behalf of A R
Date: Tuesday, July 18, 2017 at 6:25 AM
To: "Harms, Michael"
Cc: "HCP-Users@humanconnectome.org"
Subject: Re: [HCP-Users] CMRR vs MGH multiband/SMS sequences

Hi,

It's best seen on tStd and tSNR images. Depending on sequence parameters it 
can be very subtle. I've tried different coils and sites. These are with CMRR: 
just a few (20, I think) volumes demonstrating the artifact.

https://ibb.co/cPAwEa
https://ibb.co/cmu6Ea


On Jul 14, 2017, at 6:08 PM, Harms, Michael wrote:


What banding artifact are you referring to?  Could you post a picture to a 
sharing site?

thx
--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From:  on behalf of A R
Date: Friday, July 14, 2017 at 8:25 AM
To: "Juranek, Jenifer"
Cc: "HCP-Users@humanconnectome.org"
Subject: Re: [HCP-Users] CMRR vs MGH multiband/SMS sequences

In my experience they both suffer from the same banding artifact affecting the 
middle 25% of slices.


On Jul 11, 2017, at 5:34 PM, Juranek, Jenifer wrote:

Just curious if anyone is aware of head-to-head comparisons of CMRR and MGH 
MB/SMS sequences?
Someone recently mentioned to me that “the general consensus is that MGH 
outperforms CMRR”.
Is there a “general consensus” in the research community on this issue? Any 
differences between dMRI and fMRI applications?
I’m interested in using an HCP-style acquisition protocol for a 5-year study 
about to start. From what I can tell, CMRR MB sequences have been selected 
across the board for HCP-style studies currently funded by NIH.
Does anyone have any thoughts they can share?
Many Thanks,
Jenifer
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
Jenifer Juranek, PhD
Associate Professor
Department of Pediatrics
UTHealth
Houston, TX 77030
713.500.8233


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



Re: [HCP-Users] CMRR vs MGH multiband/SMS sequences

2017-07-18 Thread A R
Hi,

It's best seen on tStd and tSNR images. Depending on sequence parameters it
can be very subtle. I've tried different coils and sites. These are with CMRR:
just a few (20, I think) volumes demonstrating the artifact.

https://ibb.co/cPAwEa
https://ibb.co/cmu6Ea




Re: [HCP-Users] Statistical comparison of whole brain (surface "voxels" + subcortical / cerebellar voxels) connectivity between two explicitly defined voxels

2017-07-18 Thread Harms, Michael

Hi,
A couple of extensions to Tim’s recipe.

PALM has a “transposedata” option, so you can always transpose at the PALM 
stage if you prefer to not explicitly create a transposed CIFTI file.

PALM can indeed accept CIFTI files “as is”, *if* you want to do permutation on 
the max statistic across grayordinates.  If you want to do TFCE, you currently 
do indeed need to separate the CIFTI first.  We have an example of that 
particular approach in the Task fMRI practical of the HCP Course.  (Jenn can 
hopefully provide an update on when we anticipate being able to get the 
materials from the latest course online).
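For concreteness, a hedged sketch of a PALM call using the “transposedata” option mentioned above. Filenames are hypothetical, the command is echoed rather than executed, and flags other than -transposedata should be double-checked against `palm -help` for your PALM version:

```shell
# Hypothetical input: per-subject maps merged into one dscalar file.
# -i input, -o output prefix, -n number of permutations are standard PALM flags;
# -transposedata avoids having to create a transposed CIFTI file by hand.
PALM_CMD="palm -i all_subjects.dscalar.nii -transposedata -o palm_out -n 5000"

# Echo instead of execute, since PALM may not be on PATH in this sketch:
echo "$PALM_CMD"
```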

Similar to Tim, I’m not exactly sure what you’re intending to permute.  Are you 
going to compute a dense connectome *for each subject*?  Then you could take 
the difference between the “A” and “B” maps for each subject, and use those as 
input to PALM to test whether that difference is consistently different from 0 
across subjects (using sign-flippings).  As Tim suggests, it would likely be 
much easier to compute a parcellated connectome for each subject instead.
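To make the sign-flipping idea concrete, here is a minimal numpy sketch of a one-sample permutation test on per-subject difference maps, with max-statistic FWE correction. The input array shape and the toy data are assumptions for illustration; PALM itself should be used for real analyses:

```python
import numpy as np

def sign_flip_test(diffs, n_perms=1000, seed=0):
    """One-sample permutation test via sign-flipping, with max-statistic
    FWE correction across grayordinates.

    diffs: hypothetical (n_subjects x n_grayordinates) array of per-subject
    "A minus B" connectivity difference maps.
    """
    rng = np.random.default_rng(seed)
    n_subj = diffs.shape[0]
    observed = diffs.mean(axis=0)
    # Null distribution of the maximum |mean| across grayordinates
    max_null = np.empty(n_perms)
    for i in range(n_perms):
        signs = rng.choice([-1.0, 1.0], size=(n_subj, 1))
        max_null[i] = np.abs((signs * diffs).mean(axis=0)).max()
    # FWE-corrected p-value: how often the permutation max beats each
    # observed value (+1 correction so p is never exactly zero)
    exceed = (max_null[None, :] >= np.abs(observed)[:, None]).sum(axis=1)
    p_fwe = (1 + exceed) / (n_perms + 1)
    return observed, p_fwe

# Toy demo: 20 subjects, 50 grayordinates, a real effect only at index 0
rng = np.random.default_rng(1)
diffs = rng.normal(size=(20, 50))
diffs[:, 0] += 2.0  # strong, consistent A-minus-B difference
obs, p = sign_flip_test(diffs)
```

With these toy data, only the grayordinate carrying the injected effect should survive correction.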

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.
St. Louis, MO 63110
Tel: 314-747-6173
Email: mha...@wustl.edu

From: Timothy Coalson
Date: Monday, July 17, 2017 at 7:52 PM
To: "Regner, Michael"
Cc: "hcp-users@humanconnectome.org"
Subject: Re: [HCP-Users] Statistical comparison of whole brain (surface 
"voxels" + subcortical / cerebellar voxels) connectivity between two explicitly 
defined voxels

On Mon, Jul 17, 2017 at 6:21 PM, Regner, Michael wrote:
Hello Matt and HCP Community,

Thank you for the helpful e-mail and words of encouragement. We are very 
encouraged by the data / results we can view in the Connectome Workbench; 
however, statistical analysis has proved challenging.

Just to reiterate what we are attempting to do:  we are hoping to compare 
whole-brain (all ~91k brainordinates) connectivity between two predefined 
brainordinates to determine the areas of the brain in which there is a 
statistically significant difference in connectivity.  We intend to eventually 
extend this analysis to compare two different ROIs (sets or masks of 
brainordinates); however, for simplicity’s sake (and to prove to our PI that we 
can do this) we would like to perform this comparison first between two voxels 
/ vertices (henceforth brainordinates “A” and “B”).

The general pipeline (using the 33 GB resting state dense connectome CIFTI file 
as our starting point) is as follows step-by-step:

1. Reduce the resting-state dense connectivity CIFTI file (91k x 91k) into
two separate CIFTI files, which are NOT dense.

They would in fact still be dense, just on one dimension of the matrix, rather 
than both.  Read this if you haven't yet (and let me know if it doesn't explain 
it well enough):

http://www.humanconnectome.org/software/workbench-command/-cifti-help


  These CIFTI files (“A” and “B”) would contain the whole-brain connectivity 
data from two a priori specified brainordinates (“A” and “B”).  Each should 
intuitively contain around 91k brainordinates.  We are not 
exactly sure how to best achieve this in wb_command… would “CIFTI-parcellate” 
do the trick?

Honestly, you might as well skip straight to using ROIs, especially if they are 
from an existing parcellation, as it may actually be easier (and more closely 
relates to the commands you would need to use).  When you want to compare 
connectivity of two ROIs, it is highly advisable to compute the ROI's average 
timeseries first, and then do a fresh correlation of that average against the 
dense timeseries - averaging before correlating removes a large amount of 
noise-based variance from the denominator (among other things).
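The point about averaging the ROI timeseries before correlating can be seen with a small synthetic example (all data here are toy assumptions; real inputs would come from a dtseries file):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tp = 500
shared = rng.normal(size=n_tp)  # true shared fluctuation ("signal")

# 10 noisy ROI vertices carrying the same signal, plus one target grayordinate
roi = shared + rng.normal(scale=2.0, size=(10, n_tp))
target = shared + rng.normal(scale=2.0, size=n_tp)

def corr(a, b):
    """Pearson correlation of two 1-D timeseries."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

# Option 1: average the per-vertex correlations
# (each vertex's noise inflates the denominator of its correlation)
mean_of_corrs = np.mean([corr(v, target) for v in roi])

# Option 2: average the timeseries first, then correlate
# (independent vertex noise partly cancels in the average)
corr_of_mean = corr(roi.mean(axis=0), target)

print(f"mean of per-vertex correlations:       {mean_of_corrs:.2f}")
print(f"correlation of ROI-average timeseries: {corr_of_mean:.2f}")
```

Option 2 recovers a substantially higher correlation with the same underlying signal, which is exactly the noise-in-the-denominator effect described above.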

So, if you are in fact using an existing parcellation in cifti dlabel format, 
you would want to do -cifti-parcellate on a dtseries file, and then you can do 
-cifti-cross-correlation with the dtseries file and this new parcellated file 
(ptseries).  If you want to view the per-parcel dense maps in workbench, you 
should have the ptseries file as the first input, and name the output ending in 
".pdconn.nii".  To arrange them similarly to a dscalar file, which is probably 
what PALM expects, you can use -cifti-transpose to turn it into a .dpconn.nii 
file.
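Putting that recipe into concrete wb_command calls (filenames and the parcellation are hypothetical, and the commands are echoed rather than executed so the sketch stands alone; drop the variables and run the commands directly against your own files):

```shell
DTSERIES=sub01.dtseries.nii     # hypothetical dense timeseries
DLABEL=parcellation.dlabel.nii  # hypothetical dlabel parcellation

# 1) Average the dense timeseries within each parcel (COLUMN = along brainordinates)
STEP1="wb_command -cifti-parcellate $DTSERIES $DLABEL COLUMN sub01.ptseries.nii"

# 2) Correlate parcel timeseries against the dense timeseries; ptseries first
#    so the output is per-parcel dense maps, viewable when named .pdconn.nii
STEP2="wb_command -cifti-cross-correlation sub01.ptseries.nii $DTSERIES sub01.pdconn.nii"

# 3) Transpose to the dscalar-like arrangement PALM likely expects
STEP3="wb_command -cifti-transpose sub01.pdconn.nii sub01.dpconn.nii"

for cmd in "$STEP1" "$STEP2" "$STEP3"; do echo "$cmd"; done
```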

If you want to use arbitrary ROI files (which can overlap), then you instead 
need to use -cifti-average-roi-correlation on the dtseries file.  Note that 
this also