Re: [HCP-Users] Volumetric subcortical group-averaged data: what is the exact MNI template you used?

2019-04-06 Thread Xavier Guell Paradis
This means that the template is asymmetric, not symmetric, correct?
Thanks,
Xavier.

On Fri, Apr 5, 2019 at 5:48 PM Glasser, Matthew  wrote:

> FSL’s MNI152.
>
> Matt.
>
> From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
> Date: Friday, April 5, 2019 at 4:46 PM
> To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
> Subject: [HCP-Users] Volumetric subcortical group-averaged data: what is the exact MNI template you used?
>
> Dear HCP experts,
> I am interested in analyzing your group-averaged subcortical volumetric
> data. My understanding is that your volumetric data is registered to MNI
> space. I was wondering if you could let me know what specific MNI template
> you used. I am especially interested in knowing whether it is a symmetric
> or an asymmetric MNI template.
> Thank you,
> Xavier.



[HCP-Users] Volumetric subcortical group-averaged data: what is the exact MNI template you used?

2019-04-05 Thread Xavier Guell Paradis
Dear HCP experts,
I am interested in analyzing your group-averaged subcortical volumetric data. 
My understanding is that your volumetric data is registered to MNI space. I was 
wondering if you could let me know what specific MNI template you used. I am 
especially interested in knowing whether it is a symmetric or an asymmetric MNI 
template.
Thank you,
Xavier.



[HCP-Users] AWS sync error

2018-08-09 Thread Xavier Guell Paradis
Hello HCP,
We are a team of researchers who have built deep learning tools for modality 
conversion, and we want to test our library on the HCP dataset (structural T1 
and T2). We are having trouble copying from the S3 bucket. After creating our 
credentials, we run aws configure and attempt to sync via:

$aws configure
[entering access credentials]
$aws s3 sync s3://hcp-openaccess-temp .
"Invalid Access Key ID: AWS key does not exist in the records."
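
In case it helps with diagnosis, a sketch of the checks we are running (nothing HCP-specific is assumed beyond the bucket name in the failing command):

$ aws configure list
$ aws s3 ls s3://hcp-openaccess-temp/

aws configure list shows which access key the CLI actually sends; if that key is not the key pair issued for HCP Amazon S3 access (which we understand to be separate from the ConnectomeDB login password), an "Invalid Access Key ID" error would be expected.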

We appreciate any recommendations on how to proceed.

Thank you,
Patrick, Xavier, TJ, Anita, Shreyas, Saige.



[HCP-Users] Is it possible to download HCP data using ascp (the command line version of aspera)?

2018-06-14 Thread Xavier Guell Paradis
Dear HCP experts,
Is it possible to download HCP data using ascp (the command line version of 
aspera)?
If so, is there a list of [[user@]host:]PATH for each download package of HCP 
data?
(We would like to download the 100 unrelated resting state compact package to 
an online cluster, and the aspera GUI does not seem to work after installing 
aspera's .sh file in the cluster).
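
(For reference, the generic ascp syntax we have been attempting is sketched below; every value is a placeholder, since the HCP-specific user, host, and package paths are exactly what we are asking about:

$ ascp -QT -l 200m -i /path/to/asperaweb_id_dsa.openssh user@host:/path/to/package ./local_dir/

where -i points to the key file shipped with the Aspera client and -l caps the transfer rate.)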
Thank you very much,
Xavier.



Re: [HCP-Users] Is surface myelin map the same as the volume T1w/T2 map?

2018-03-15 Thread Xavier Guell Paradis
Dear Matt,
Thank you for your reply. Is there a method you would recommend to perform a 
similar correction on volume?
Thank you,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, March 15, 2018 7:29 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Is surface myelin map the same as the volume T1w/T2 
map?

We don’t have volume-based versions of that because the _BC correction is done 
on the surface maps.  What you list below is not correct.

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Thursday, March 15, 2018 at 6:16 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] Is surface myelin map the same as the volume T1w/T2 map?

Thank you for your reply.
I would like to obtain a bias field corrected version of 
S1200_AverageT1wDividedByT2w.nii.
After reading the Minimal Preprocessing Pipelines 2013 paper, I thought I could 
average all the individual estimated residual bias field files, and subtract 
that from S1200_AverageT1wDividedByT2w.nii.

1 - Is this approach correct?
2 - If so, are these the files from each individual I should average: 
/$subject_ID/MNINonLinear/BiasField.nii.gz?
3 - Alternatively, does an S1200 average estimated residual bias field file 
already exist?
4 - If this method is not correct, is there a different method you would 
recommend?

Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, March 05, 2018 7:11 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Is surface myelin map the same as the volume T1w/T2 map?

No, that would be without _BC, and I don't know what the 2mmResample is.

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, March 5, 2018 at 4:57 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Is surface myelin map the same as the volume T1w/T2 map?

Dear HCP experts,
Is S1200.All.MyelinMap_BC_MSMAll.32k_fs_LR.dscalar.nii calculated in the same 
way as the S1200_AverageT1wDividedByT2w_2mmResample.nii, with the only 
difference that one file is surface and the other is volume?

Thank you very much,
Xavier.



Re: [HCP-Users] Is surface myelin map the same as the volume T1w/T2 map?

2018-03-15 Thread Xavier Guell Paradis
Thank you for your reply.
I would like to obtain a bias field corrected version of 
S1200_AverageT1wDividedByT2w.nii.
After reading the Minimal Preprocessing Pipelines 2013 paper, I thought I could 
average all the individual estimated residual bias field files, and subtract 
that from S1200_AverageT1wDividedByT2w.nii.

1 - Is this approach correct?
2 - If so, are these the files from each individual I should average: 
/$subject_ID/MNINonLinear/BiasField.nii.gz?
3 - Alternatively, does an S1200 average estimated residual bias field file 
already exist?
4 - If this method is not correct, is there a different method you would 
recommend?

Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, March 05, 2018 7:11 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Is surface myelin map the same as the volume T1w/T2 
map?

No, that would be without _BC, and I don't know what the 2mmResample is.

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, March 5, 2018 at 4:57 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Is surface myelin map the same as the volume T1w/T2 map?

Dear HCP experts,
Is S1200.All.MyelinMap_BC_MSMAll.32k_fs_LR.dscalar.nii calculated in the same 
way as the S1200_AverageT1wDividedByT2w_2mmResample.nii, with the only 
difference that one file is surface and the other is volume?

Thank you very much,
Xavier.



[HCP-Users] Downloading with Aspera: "Error: No such file or directory (Code: 4)"

2018-03-15 Thread Xavier Guell Paradis
Dear HCP experts,
When trying to download the "structural preprocessed" data from a single 
subject I get the following error in Aspera: "Error: No such file or directory 
(Code: 4)".
I have made sure that the folder I am downloading the data to exists.
How could I solve this?

Thank you,
Xavier.



[HCP-Users] Is surface myelin map the same as the volume T1w/T2 map?

2018-03-05 Thread Xavier Guell Paradis
Dear HCP experts,
Is S1200.All.MyelinMap_BC_MSMAll.32k_fs_LR.dscalar.nii calculated in the same 
way as the S1200_AverageT1wDividedByT2w_2mmResample.nii, with the only 
difference that one file is surface and the other is volume?

Thank you very much,
Xavier.



[HCP-Users] Restrict dscalar to only CORTEX_LEFT+CORTEX_RIGHT

2018-02-08 Thread Xavier Guell Paradis
Dear HCP experts,
I have a dscalar file (myfile.dscalar.nii), and I would like to restrict this 
file so that it only contains data for CORTEX_LEFT and CORTEX_RIGHT.
If I do this right, CORTEX_LEFT and CORTEX_RIGHT together should total 59412 data points.

I have tried this, which has not worked:
-cifti-separate myfile.dscalar.nii COLUMN -metric CORTEX_LEFT myfile_onlycortexleft.func.gii
-cifti-separate myfile.dscalar.nii COLUMN -metric CORTEX_RIGHT myfile_onlycortexright.func.gii
-cifti-create-dense-scalar myfile_leftandrightcortex.dscalar.nii -left-metric myfile_onlycortexleft.func.gii -right-metric myfile_onlycortexright.func.gii

However, when I open the file myfile_leftandrightcortex.dscalar.nii with 
Python, it has 64984 data points instead of 59412 (it should have 59412 if it 
truly only has data points for CORTEX_LEFT and CORTEX_RIGHT).

Is there another way to do this?
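
One route I am unsure about (a sketch; the atlasroi file names are my assumption based on the standard 32k_fs_LR naming, and they must match the mesh of my data) would be to supply the medial-wall ROIs when rebuilding the dense scalar, since without -roi-left/-roi-right all 64984 vertices of the two 32k meshes seem to be included rather than the 59412 within-ROI vertices:

-cifti-create-dense-scalar myfile_leftandrightcortex.dscalar.nii -left-metric myfile_onlycortexleft.func.gii -roi-left L.atlasroi.32k_fs_LR.shape.gii -right-metric myfile_onlycortexright.func.gii -roi-right R.atlasroi.32k_fs_LR.shape.gii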

Thank you very much,
Xavier.



[HCP-Users] Convert dlabel to dscalar

2018-01-29 Thread Xavier Guell Paradis
Dear HCP experts,
I have a dlabel file that labels a particular nucleus of the left thalamus 
(thalamusnucleus.dlabel.nii). I would like to convert this dlabel file to a 
whole-brain dscalar file, so that this nucleus in the left thalamus has a value 
of 1 and the rest of the brain has a value of 0.

I have tried the following 4 things which have not worked:
1) wb_command -cifti-create-dense-from-template template.dscalar.nii 
thalamusnucleus.dscalar.nii -label THALAMUS_LEFT thalamusnucleus.dlabel.nii
"template.dscalar.nii" is a random dscalar file with 1 map that includes data 
for the whole brain.
I get this error:
ERROR: Parse error while reading: error occurred while parsing element, line 
number: 1 column number: 1
File: thalamusnucleus.dlabel.nii
2) The option -volume (instead of -label) gives the error "volume file 
'thalamusnucleus.dlabel.nii' does not match volume space of template cifti file"
3) The option -cifti (instead of -label) gives the error "cifti file 
'thalamusnucleus.dlabel.nii' uses a different volume space than the template 
file".
4) wb_command -cifti-label-to-roi thalamusnucleus.dlabel.nii 
thalamusnucleus.dscalar.nii -key 1
This generates a dscalar file, but when I open this file with nibabel in python 
I only see 111 data points (which is not the whole brain; note that what I 
would like to obtain is a dscalar with "1" values in my left thalamus nucleus 
and "0" values in the rest of the brain).

Is there a better way to do this?
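
For completeness, the two-step route I have been considering (a sketch; it assumes the dlabel and the template share a volume space, which the errors in 2 and 3 suggest may not currently be true for my files) is:

wb_command -cifti-label-to-roi thalamusnucleus.dlabel.nii nucleusroi.dscalar.nii -key 1
wb_command -cifti-create-dense-from-template template.dscalar.nii thalamusnucleus_wholebrain.dscalar.nii -cifti nucleusroi.dscalar.nii

where grayordinates not covered by the ROI should come out as 0 in the whole-brain output.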

Thank you,
Xavier.



[HCP-Users] Create a dtseries file with only cerebral cortical surface data

2018-01-18 Thread Xavier Guell Paradis
Dear HCP experts,
Is it possible to create a dtseries file that contains only cerebral cortical 
surface data? I only need this file in order to use it as a template for a 
python script, so the file could be any existing dtseries file restricted to 
contain only cerebral cortical surface data.

I have been using a random cope1.dtseries.nii file, and tried to eliminate all 
data not corresponding to cerebral cortical surface data. I have been exploring 
several workbench commands, but cannot figure it out.
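
For concreteness, this is the kind of chain I have been attempting (a sketch; the atlasroi file names are my assumption for the standard 32k_fs_LR mesh):

wb_command -cifti-separate cope1.dtseries.nii COLUMN -metric CORTEX_LEFT left.func.gii -metric CORTEX_RIGHT right.func.gii
wb_command -cifti-create-dense-timeseries cortexonly.dtseries.nii -left-metric left.func.gii -roi-left L.atlasroi.32k_fs_LR.shape.gii -right-metric right.func.gii -roi-right R.atlasroi.32k_fs_LR.shape.gii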

Thank you,
Xavier.



Re: [HCP-Users] -cifti-separate converts from dscalar to nifti: is it possible to generate one nifti from each dscalar map?

2017-12-12 Thread Xavier Guell Paradis
It works, thank you!
The steps were -cifti-separate to convert from dscalar (multiple maps) to 
nifti (multiple maps), and -volume-merge to split the nifti file into separate 
single-map files.
Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Tuesday, December 12, 2017 7:37 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] -cifti-separate converts from dscalar to nifti: is it 
possible to generate one nifti from each dscalar map?

wb_command -volume-merge allows you to select specific maps despite its name.  
It contains functionality similar to both fslmerge and fslroi.
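
For example (file names hypothetical; map numbering is 1-based):

wb_command -volume-merge map1.nii.gz -volume output.nii -subvolume 1
wb_command -volume-merge map2.nii.gz -volume output.nii -subvolume 2

Each call writes a single-map nifti by selecting one subvolume of the 4-map input.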

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Tuesday, December 12, 2017 at 6:35 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] -cifti-separate converts from dscalar to nifti: is it possible to generate one nifti from each dscalar map?

Dear HCP experts,
I have a dscalar file with 4 maps and would like to convert each map into one 
separate nifti file. I am not worried about the problem of surface information 
in a nifti file, because I am interested in subcortical data only.
I have been able to transform the dscalar into a nifti file using this:
wb_command -cifti-separate input.dscalar.nii COLUMN -volume-all output.nii
The output.nii is a nifti file which contains 4 maps if I open it with wb_view. 
Is there a way to generate 4 separate nifti files (one for each map) instead of 
one single nifti file with all 4 maps?

Thank you very much,
Xavier.



[HCP-Users] -cifti-separate converts from dscalar to nifti: is it possible to generate one nifti from each dscalar map?

2017-12-12 Thread Xavier Guell Paradis
Dear HCP experts,
I have a dscalar file with 4 maps and would like to convert each map into one 
separate nifti file. I am not worried about the problem of surface information 
in a nifti file, because I am interested in subcortical data only.
I have been able to transform the dscalar into a nifti file using this:
wb_command -cifti-separate input.dscalar.nii COLUMN -volume-all output.nii
The output.nii is a nifti file which contains 4 maps if I open it with wb_view. 
Is there a way to generate 4 separate nifti files (one for each map) instead of 
one single nifti file with all 4 maps?

Thank you very much,
Xavier.



Re: [HCP-Users] Show outline of dscalar map at a given threshold?

2017-11-28 Thread Xavier Guell Paradis
Thank you very much, it works!
Xavier.

From: Timothy Coalson [tsc...@mst.edu]
Sent: Monday, November 27, 2017 7:39 PM
To: Xavier Guell Paradis
Cc: Glasser, Matthew; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Show outline of dscalar map at a given threshold?

First, use -cifti-math to do the thresholding you want (if you have negative p-values for deactivations, you could add something like " + 2 * ((x < 0) && (x > -0.05))"):

$ wb_command -cifti-math '(x > 0) && (x < 0.05)' above_thresh.dscalar.nii -var x <your-pvalue-map.dscalar.nii>

Then, import it as a label file (this basic version will give a random color 
and the name "LABEL_1", read the command help for how to assign colors and 
names):

$ wb_command -cifti-label-import above_thresh.dscalar.nii "" above_thresh.dlabel.nii

Then when displaying this file on top of something (we recommend a beta map), 
you can click the wrench for the dlabel layer, go to the "Labels" tab, and 
change the drawing type from "filled" to "outline color" or "outline label 
color".

Tim


On Mon, Nov 27, 2017 at 6:24 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Thank you very much for your reply.
I have a map of surface cerebral cortical p values in a dscalar file, and would 
like to create a dlabel file corresponding to the values between 0 and 0.05 so 
that I can display this as an outline.
I have not been successful when trying to create this dlabel file from my 
dscalar file using wb_command. Is there a succession of commands that you would 
recommend to achieve this?

Thank you very much for your help,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, November 27, 2017 4:01 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Show outline of dscalar map at a given threshold?

You can create a .dlabel.nii of the above threshold vertices and then display 
this as an outline.  There isn’t a way of doing this directly on a .dscalar.nii.

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, November 27, 2017 at 2:29 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Show outline of dscalar map at a given threshold?

Dear HCP experts,
I was wondering if there is a way to show the outline of a dscalar map at a 
given threshold in workbench view. For example, I may have a dscalar file with z 
values for the cerebral cortical surface, and may want to show only the outline 
of clusters corresponding to z>4 in that dscalar file.
I have been exploring the options in workbench view and the mail archive and 
could not find a way to do this. I thought of playing with the thresholds (e.g. 
low threshold=4, high threshold=4.5, show data inside thresholds), but of 
course this only generates an ugly, uneven outline of the z>4 clusters.

Thank you very much,
Xavier.



Re: [HCP-Users] Show outline of dscalar map at a given threshold?

2017-11-27 Thread Xavier Guell Paradis
Thank you very much for your reply.
I have a map of surface cerebral cortical p values in a dscalar file, and would 
like to create a dlabel file corresponding to the values between 0 and 0.05 so 
that I can display this as an outline.
I have not been successful when trying to create this dlabel file from my 
dscalar file using wb_command. Is there a succession of commands that you would 
recommend to achieve this?

Thank you very much for your help,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, November 27, 2017 4:01 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Show outline of dscalar map at a given threshold?

You can create a .dlabel.nii of the above threshold vertices and then display 
this as an outline.  There isn’t a way of doing this directly on a .dscalar.nii.

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, November 27, 2017 at 2:29 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Show outline of dscalar map at a given threshold?

Dear HCP experts,
I was wondering if there is a way to show the outline of a dscalar map at a 
given threshold in workbench view. For example, I may have a dscalar file with z 
values for the cerebral cortical surface, and may want to show only the outline 
of clusters corresponding to z>4 in that dscalar file.
I have been exploring the options in workbench view and the mail archive and 
could not find a way to do this. I thought of playing with the thresholds (e.g. 
low threshold=4, high threshold=4.5, show data inside thresholds), but of 
course this only generates an ugly, uneven outline of the z>4 clusters.

Thank you very much,
Xavier.



[HCP-Users] Show outline of dscalar map at a given threshold?

2017-11-27 Thread Xavier Guell Paradis
Dear HCP experts,
I was wondering if there is a way to show the outline of a dscalar map at a 
given threshold in workbench view. For example, I may have a dscalar file with z 
values for the cerebral cortical surface, and may want to show only the outline 
of clusters corresponding to z>4 in that dscalar file.
I have been exploring the options in workbench view and the mail archive and 
could not find a way to do this. I thought of playing with the thresholds (e.g. 
low threshold=4, high threshold=4.5, show data inside thresholds), but of 
course this only generates an ugly, uneven outline of the z>4 clusters.

Thank you very much,
Xavier.



Re: [HCP-Users] Subcortical atlas of S900_AverageT1w_restore.nii.gz?

2017-11-22 Thread Xavier Guell Paradis
Dear Michael and Matt,
Thank you very much for your reply.
As for Michael's question - yes, I am looking for a dlabel file that includes 
volumes such as caudate and putamen as separate structures. Perhaps there is a 
workbench command that would allow me to isolate these structures in a random 
MNINonLinear dscalar file?
I have been exploring -cifti-restrict-dense-map and -cifti-create-dense-scalar 
but have not figured it out. I have also tried using python's nibabel and numpy 
tools to isolate the rows corresponding to each structure, but this seems to be 
a much more complicated route.
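
(One piece I did find: individual CIFTI structures can be pulled out of a MNINonLinear dscalar as volume ROIs, e.g.:

wb_command -cifti-separate any91k.dscalar.nii COLUMN -volume CAUDATE_LEFT caudate_left.nii.gz -roi caudate_left_roi.nii.gz

where the input name is hypothetical and the -roi output is a binary mask of that structure's voxels; but this only covers structures CIFTI already distinguishes, one at a time.)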

Thank you very much for your help,
Xavier.


From: Harms, Michael [mha...@wustl.edu]
Sent: Tuesday, November 21, 2017 7:29 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Subcortical atlas of S900_AverageT1w_restore.nii.gz?


Hi,

The subcortical structures are already defined in the standard CIFTI space.  
Are you just looking for a dlabel file that includes those subcortical labels?

Cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave.  Tel: 314-747-6173
St. Louis, MO 63110  Email: mha...@wustl.edu

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Tuesday, November 21, 2017 at 5:23 PM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Subcortical atlas of S900_AverageT1w_restore.nii.gz?

Dear HCP experts,
I was wondering if there is a publicly available, basic subcortical 
parcellation (delineating structures such as caudate and putamen) of the 
standard volume structural file of HCP (such as S900_AverageT1w_restore.nii.gz).
Restated, is there a parcellation that delineates structures such as caudate 
and putamen in S900_AverageT1w_restore.nii.gz?
Thank you very much,
Xavier.



[HCP-Users] Subcortical atlas of S900_AverageT1w_restore.nii.gz?

2017-11-21 Thread Xavier Guell Paradis
Dear HCP experts,
I was wondering if there is a publicly available, basic subcortical 
parcellation (delineating structures such as caudate and putamen) of the 
standard volume structural file of HCP (such as S900_AverageT1w_restore.nii.gz).
Restated, is there a parcellation that delineates structures such as caudate 
and putamen in S900_AverageT1w_restore.nii.gz?
Thank you very much,
Xavier.



Re: [HCP-Users] How to transform all subcortical values to 0 in a dscalar?

2017-10-09 Thread Xavier Guell Paradis
Dear Tim,
Thank you for your reply. I wanted the subcortical data to be 0 so that it 
would not interfere with some math operations I wanted to perform using 
cortical data only, but I agree with your idea that this is not the optimal 
approach (plus the 0's would still affect some of the math operations). In the 
end I imported the dscalar file with python as a matrix, excluded all the rows 
that did not belong to cortical vertices, and then did the math operations.
Thank you very much,
Xavier.

From: Timothy Coalson [tsc...@mst.edu]
Sent: Wednesday, October 04, 2017 4:09 PM
To: Xavier Guell Paradis
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How to transform all subcortical values to 0 in a 
dscalar?

You can use -cifti-replace-structure with the -volume-all option and a volume 
file of zeroes.  If you have a lot of maps (or you want to do something similar 
to a long dtseries), you can make a cifti file with 1's in the surfaces and 0's 
in voxels, and use -cifti-math to multiply them together.  For a small dscalar 
file, I would do it like this:

wb_command -cifti-separate <input.dscalar.nii> COLUMN -volume-all volspacetemp.nii.gz -crop
wb_command -volume-math '0' zerovol.nii.gz -var x volspacetemp.nii.gz
cp <input.dscalar.nii> data_subcort_zeroed.dscalar.nii
wb_command -cifti-replace-structure data_subcort_zeroed.dscalar.nii COLUMN -volume-all zerovol.nii.gz -from-cropped

I am curious, why do you want the subcortical data to be zero?  It won't make 
the file smaller, and processing commands will still think there is data there, 
it won't interact with the surface data regardless of whether it is zero, and 
you can choose not to display the subcortical stuff anyway...

Tim


On Wed, Oct 4, 2017 at 10:07 AM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Dear HCP experts,
I have a dscalar file with one map of cortical and subcortical data. I would 
like to transform all subcortical values to 0, and leave the cortical surface 
values intact. I have explored the -cifti-stats and -volume-stats options as 
well as -cifti-create-dense-scalar but cannot figure out a way to do this.

Thank you very much,
Xavier.



[HCP-Users] How to transform all subcortical values to 0 in a dscalar?

2017-10-04 Thread Xavier Guell Paradis
Dear HCP experts,
I have a dscalar file with one map of cortical and subcortical data. I would 
like to transform all subcortical values to 0, and leave the cortical surface 
values intact. I have explored the -cifti-stats and -volume-stats options as 
well as -cifti-create-dense-scalar but cannot figure out a way to do this.

Thank you very much,
Xavier.



Re: [HCP-Users] Resampling freesurfer-HCP

2017-09-12 Thread Xavier Guell Paradis
I thought it might be useful to add that the initial .w file is thresholded and 
does not contain values for all cerebral cortical regions.
Thank you very much,
Xavier.

From: hcp-users-boun...@humanconnectome.org 
[hcp-users-boun...@humanconnectome.org] on behalf of Xavier Guell Paradis 
[xavie...@mit.edu]
Sent: Tuesday, September 12, 2017 2:05 PM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] Resampling freesurfer-HCP

Dear HCP experts,
I have an overlay freesurfer file (format is .w) which corresponds to a task 
activity surface map (group result). I have one .w file for each cerebral 
hemisphere. I would like to visualize these maps using wb_view, and have tried 
to follow the instructions you published 
(https://wiki.humanconnectome.org/display/PublicData/HCP+Users+FAQ#HCPUsersFAQ-9.HowdoImapdatabetweenFreeSurferandHCP?).
As a first step I would need to convert the .w files into .func.gii files using 
mris_convert. Then I would use wb_command -metric-resample, as indicated in 
your instructions. This does not seem to work with my .w file:

1) mris_convert myfile.w myfile.func.gii
This generates myfile.func.gii, but when I use wb_command -metric-resample I 
get the following error:
ERROR: Parse error while reading: error occurred while parsing element, line 
number: 1 column number: 1

2) As an alternative approach, I opened myfile.w using Tksurfer and saved the 
overlay (myfile.w) with .mgh format (generating a new file: "myfile.mgh"). Then 
I do the following:
mris_convert myfile.mgh myfile.func.gii
This generates the myfile.func.gii file, but when I use wb_command 
-metric-resample with this file I get a different error:
ERROR: All data arrays (columns) in the file must have the same number of rows. 
 The first array (column) contains 163842 rows.  Array 2 contains 327680 rows.

Thank you very much for your help,
Xavier.



[HCP-Users] Resampling freesurfer-HCP

2017-09-12 Thread Xavier Guell Paradis
Dear HCP experts,
I have an overlay freesurfer file (format is .w) which corresponds to a task 
activity surface map (group result). I have one .w file for each cerebral 
hemisphere. I would like to visualize these maps using wb_view, and have tried 
to follow the instructions you published 
(https://wiki.humanconnectome.org/display/PublicData/HCP+Users+FAQ#HCPUsersFAQ-9.HowdoImapdatabetweenFreeSurferandHCP?).
As a first step I would need to convert the .w files into .func.gii files using 
mris_convert. Then I would use wb_command -metric-resample, as indicated in 
your instructions. This does not seem to work with my .w file:

1) mris_convert myfile.w myfile.func.gii
This generates myfile.func.gii, but when I use wb_command -metric-resample I 
get the following error:
ERROR: Parse error while reading: error occurred while parsing element, line 
number: 1 column number: 1

2) As an alternative approach, I opened myfile.w using Tksurfer and saved the 
overlay (myfile.w) with .mgh format (generating a new file: "myfile.mgh"). Then 
I do the following:
mris_convert myfile.mgh myfile.func.gii
This generates the myfile.func.gii file, but when I use wb_command 
-metric-resample with this file I get a different error:
ERROR: All data arrays (columns) in the file must have the same number of rows. 
 The first array (column) contains 163842 rows.  Array 2 contains 327680 rows.
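
One further thing I plan to try (a guess on my part; whether the overlay mode accepts .w files may depend on the FreeSurfer version) is mris_convert's -c overlay option, which takes the source surface explicitly so the vertex count is unambiguous:

mris_convert -c lh.myfile.w lh.white lh.myfile.func.gii

where lh.white is the 163842-vertex surface the overlay was computed on (both names are placeholders).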

Thank you very much for your help,
Xavier.



Re: [HCP-Users] Table indicating location of clusters according to a dlabel file?

2017-09-07 Thread Xavier Guell Paradis
Thank you Matt and Tim for the very useful comments!
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, September 07, 2017 5:25 PM
To: NEUROSCIENCE tim; Xavier Guell Paradis
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Table indicating location of clusters according to a 
dlabel file?

Right.  Basically we are suspicious of defining areas based on statistical 
thresholds, as these are unlikely to reflect biological boundaries in the 
brain, but rather the vagaries of the statistical thresholding approach and the 
noise distribution.

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Timothy Coalson <tsc...@mst.edu>
Date: Thursday, September 7, 2017 at 4:22 PM
To: Xavier Guell Paradis <xavie...@mit.edu>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: Re: [HCP-Users] Table indicating location of clusters according to a dlabel file?

The commands in wb_command are designed for scripting flexibility, they each do 
a small, low-level operation, to be chained together to achieve various tasks.  
However, they mainly output data files, there isn't much for text output 
currently.

You could use -cifti-parcellate to parcellate your cluster maps, and any parcel 
with a nonzero value therefore has some overlap - you can view this file on the 
surface and click on any nonzero patch to check what area it is.  You can also 
dump those values to text with -cifti-convert -to-text.  Running 
-file-information on the parcellated file or the dlabel file will give you the 
order of the parcel names.
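
As a concrete sketch of that chain (file names hypothetical):

wb_command -cifti-parcellate clusters.dscalar.nii parcellation.dlabel.nii COLUMN clusters.pscalar.nii
wb_command -cifti-convert -to-text clusters.pscalar.nii cluster_overlap.txt
wb_command -file-information parcellation.dlabel.nii

Each row of the text file is one parcel's value, in the parcel order that -file-information reports.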

We aren't big fans of thresholding, and we would also consider parcellating the 
timeseries before running the statistics, if your question is "which of this 
parcellation's areas are significantly activated?".

Tim


On Thu, Sep 7, 2017 at 3:59 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Dear HCP experts,
I have a thresholded functional connectivity map (dscalar), and the dlabel 
files from the Glasser 2016 multimodal cortical parcellation. I was wondering 
whether there is a wb_command that would automatically generate a table 
indicating which labels overlap with my functional connectivity map.
I have been exploring the wb_command index as well as the HCP mail archive and 
cannot find anything like this.

Thank you very much,
Xavier.



[HCP-Users] Table indicating location of clusters according to a dlabel file?

2017-09-07 Thread Xavier Guell Paradis
Dear HCP experts,
I have a thresholded functional connectivity map (dscalar), and the dlabel 
files from the Glasser 2016 multimodal cortical parcellation. I was wondering 
whether there is a wb_command that would automatically generate a table 
indicating which labels overlap with my functional connectivity map.
I have been exploring the wb_command index as well as the HCP mail archive and 
cannot find anything like this.

Thank you very much,
Xavier.



[HCP-Users] Calculate resting-state correlation (r or fisher-z) between two ROIs

2017-08-23 Thread Xavier Guell Paradis
Dear HCP experts,
I have three subcortical ROIs (three dlabel files). I would like to obtain a 
correlation value (r, or fisher z) of the resting-state time course between 
each pair of ROIs (e.g. three r values, one for each pair: "ROI_1-ROI_2", 
"ROI_1-ROI_3" and "ROI_2-ROI_3"). I would like to use your group average dconn 
file for this calculation.

I have been reading the instructions of many wb_commands but cannot figure out 
how to do this.
I have already saved three Fisher-z maps which correspond to the resting-state 
correlations from each of the three ROIs (using your group dconn file). Perhaps 
the most simple solution would be to average the values in these maps: e.g. 
take the correlation map from ROI_1 and average the values of all the voxels in 
ROI_2 and in ROI_3 (so that I would obtain two z values, corresponding to the 
"ROI_1-ROI_2" and "ROI_1-ROI_3" pairs). How could I do this?

Thank you very much,
Xavier.



Re: [HCP-Users] How can I convert subcortical nifti files to dlabel files?

2017-08-10 Thread Xavier Guell Paradis
Dear Tim and Matt,
Thank you very much for your help, now it works! I had my clusters registered 
to the S900_Average_T1w_restore.nii file, but they had to be registered to the 
MNI152_T1_2mm.nii.gz file.
Thank you very much,
Xavier.

From: Timothy Coalson [tsc...@mst.edu]
Sent: Wednesday, August 09, 2017 6:55 PM
To: Xavier Guell Paradis
Cc: Glasser, Matthew; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How can I convert subcortical nifti files to dlabel 
files?

If your "mycluster.nii" is in a different volume space than 222 MNI RPI space, 
then it may not let you use it that way.  Compare the nifti header against this 
file in the HCP pipelines, using fslhd or wb_command -nifti-information 
-print-header:

global/templates/MNI152_T1_2mm.nii.gz

If it is different, then you will need to resample your label volume.

Tim


On Wed, Aug 9, 2017 at 4:38 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Dear Tim,
Thank you for your reply.
I have made sure that I have the group .dconn file opened, and I can see that 
it is correctly opened because I can see connectivity when I click on any 
structure.
However, I still have the same problem: I do not see the option of "Show Cifti 
Connectivity" when I right-click the cluster of the dlabel file I have created 
(however, I can see the dlabel file in the "Labels" list of the "Features 
Toolbox").

These are the steps I followed to create the dlabel file; I imagine I did 
something wrong in these steps but I cannot figure out what:
My original nifti file with the right cerebellum cluster is called 
"mycluster.nii".
I created the file "textforvolumelabelimport.txt" with the following text:
CEREBELLUM_RIGHT
1 1 1 1
Then I did the following:
wb_command -volume-label-import mycluster.nii textforvolumelabelimport.txt 
mycluster_label.nii
wb_command -cifti-create-label mycluster_labelStep2.dlabel.nii -volume mycluster_label.nii mycluster_label.nii
The resulting dlabel file is "mycluster_labelStep2.dlabel.nii"

Thank you very much,
Xavier.


From: Timothy Coalson [tsc...@mst.edu]
Sent: Wednesday, August 09, 2017 4:57 PM
To: Xavier Guell Paradis
Cc: Glasser, Matthew; hcp-users@humanconnectome.org

Subject: Re: [HCP-Users] How can I convert subcortical nifti files to dlabel 
files?

You need to have a dtseries (or dconn) file open before you can see any kind of 
connectivity.  If you only have the labels/rois open, how do you expect it to 
figure out connectivity information?

Note that we only have options for averaging things inside a label; the ROI 
file will not be useful in the GUI for this purpose.

Tim


On Wed, Aug 9, 2017 at 3:42 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Dear Matt,
Thank you for your reply. I have realized that a very curious thing happens:
- If I open the dlabel file and right-click the cluster in wb_view, I do not 
see any option
- If I open the dscalar file and right-click the cluster in wb_view, I do not 
see any option
- If I open the dlabel file AND the dscalar file and right-click the cluster in 
wb_view (note that the cluster is now present twice: in the dlabel file and in 
the dscalar file), I see the "Show Data/Time Series Graph" but not the "Show 
Cifti Connectivity" option.

I opened wb_view multiple times to make sure that this is true: I only see the 
"Show Data/Time Series Graph" once I have opened both files; but I still do not 
see the "Show Cifti Connectivity" option.

This is a strange pattern, but perhaps it is a clue to find out what I am doing 
wrong.

An extra piece of information that might be useful: when I open the dlabel 
file, I can see it listed in the "Labels" list of the "Features ToolBox".

Thank you very much,
Xavier.
________
From: Glasser, Matthew [glass...@wustl.edu]
Sent: Wednesday, August 09, 2017 2:13 PM
To: Xavier Guell Paradis; NEUROSCIENCE tim
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How can I convert subcortical nifti files to dlabel 
files?

It should work if you skip the last step and use the dlabel file.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Wednesday, August 9, 2017 at 9:43 AM
To: Timothy Coalson <tsc...@mst.edu>, Matt Glasser <glass...@wustl.edu>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: RE: [HCP-Users] How can I convert subcortical nifti files to dlabel files?

Dear Tim and Matt,
Thank you very much for your reply.

I tried -volume-label-import, followed by -cifti-create-label, followed by -cifti-all-labels-to-rois.

Re: [HCP-Users] How can I convert subcortical nifti files to dlabel files?

2017-08-09 Thread Xavier Guell Paradis
Dear Tim,
Thank you for your reply.
I have made sure that I have the group .dconn file opened, and I can see that 
it is correctly opened because I can see connectivity when I click on any 
structure.
However, I still have the same problem: I do not see the option of "Show Cifti 
Connectivity" when I right-click the cluster of the dlabel file I have created 
(however, I can see the dlabel file in the "Labels" list of the "Features 
Toolbox").

These are the steps I followed to create the dlabel file; I imagine I did 
something wrong in these steps but I cannot figure out what:
My original nifti file with the right cerebellum cluster is called 
"mycluster.nii".
I created the file "textforvolumelabelimport.txt" with the following text:
CEREBELLUM_RIGHT
1 1 1 1
Then I did the following:
wb_command -volume-label-import mycluster.nii textforvolumelabelimport.txt 
mycluster_label.nii
wb_command -cifti-create-label mycluster_labelStep2.dlabel.nii -volume 
mycluster_label.nii mycluster_label.nii
The resulting dlabel file is "mycluster_labelStep2.dlabel.nii"

Thank you very much,
Xavier.


From: Timothy Coalson [tsc...@mst.edu]
Sent: Wednesday, August 09, 2017 4:57 PM
To: Xavier Guell Paradis
Cc: Glasser, Matthew; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How can I convert subcortical nifti files to dlabel 
files?

You need to have a dtseries (or dconn) file open before you can see any kind of 
connectivity.  If you only have the labels/rois open, how do you expect it to 
figure out connectivity information?

Note that we only have options for averaging things inside a label; the ROI 
file will not be useful in the GUI for this purpose.

Tim


On Wed, Aug 9, 2017 at 3:42 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Dear Matt,
Thank you for your reply. I have realized that a very curious thing happens:
- If I open the dlabel file and right-click the cluster in wb_view, I do not 
see any option
- If I open the dscalar file and right-click the cluster in wb_view, I do not 
see any option
- If I open the dlabel file AND the dscalar file and right-click the cluster in 
wb_view (note that the cluster is now present twice: in the dlabel file and in 
the dscalar file), I see the "Show Data/Time Series Graph" but not the "Show 
Cifti Connectivity" option.

I opened wb_view multiple times to make sure that this is true: I only see the 
"Show Data/Time Series Graph" once I have opened both files; but I still do not 
see the "Show Cifti Connectivity" option.

This is a strange pattern, but perhaps it is a clue to find out what I am doing 
wrong.

An extra piece of information that might be useful: when I open the dlabel 
file, I can see it listed in the "Labels" list of the "Features ToolBox".

Thank you very much,
Xavier.
________
From: Glasser, Matthew [glass...@wustl.edu]
Sent: Wednesday, August 09, 2017 2:13 PM
To: Xavier Guell Paradis; NEUROSCIENCE tim
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How can I convert subcortical nifti files to dlabel 
files?

It should work if you skip the last step and use the dlabel file.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Wednesday, August 9, 2017 at 9:43 AM
To: Timothy Coalson <tsc...@mst.edu>, Matt Glasser <glass...@wustl.edu>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: RE: [HCP-Users] How can I convert subcortical nifti files to dlabel files?

Dear Tim and Matt,
Thank you very much for your reply.

I tried -volume-label-import, followed by -cifti-create-label, followed by 
-cifti-all-labels-to-rois.
After this, when I right-click the cluster in wb_view, I see the option "Show 
Data/Time Series Graph For Parcel [my cluster]" but I do not see the option 
"Show Cifti Connectivity For Parcel [my cluster]" (even though I can see this 
option for other parcels, such as the Yeo map).
I have been trying different things but cannot figure it out.

Some extra information in case it is useful:

My clusters are a group average registered to the HCP 
"S900_Average_T1w_restore.nii", so at this point I am not concerned about 
comparison across subjects. I would like to calculate functional connectivity 
from each of my subcortical clusters using your S900 .dconn file.
My original nifti file with the right cerebellum cluster is called 
"mycluster.nii".
I created the file "textforvolumelabelimport.txt" with the following text:
CEREBELLUM_RIGHT
1 1 1 1
Then I did the following:
wb_command -volume-label-import mycluster.nii textforvolumelabelimport.txt mycluster_label.nii
wb_command -cifti-create-label mycluster_labelStep2.dlabel.nii -volume mycluster_label.nii mycluster_label.nii

Re: [HCP-Users] How can I convert subcortical nifti files to dlabel files?

2017-08-09 Thread Xavier Guell Paradis
Dear Matt,
Thank you for your reply. I have realized that a very curious thing happens:
- If I open the dlabel file and right-click the cluster in wb_view, I do not 
see any option
- If I open the dscalar file and right-click the cluster in wb_view, I do not 
see any option
- If I open the dlabel file AND the dscalar file and right-click the cluster in 
wb_view (note that the cluster is now present twice: in the dlabel file and in 
the dscalar file), I see the "Show Data/Time Series Graph" but not the "Show 
Cifti Connectivity" option.

I opened wb_view multiple times to make sure that this is true: I only see the 
"Show Data/Time Series Graph" once I have opened both files; but I still do not 
see the "Show Cifti Connectivity" option.

This is a strange pattern, but perhaps it is a clue to find out what I am doing 
wrong.

An extra piece of information that might be useful: when I open the dlabel 
file, I can see it listed in the "Labels" list of the "Features ToolBox".

Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Wednesday, August 09, 2017 2:13 PM
To: Xavier Guell Paradis; NEUROSCIENCE tim
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How can I convert subcortical nifti files to dlabel 
files?

It should work if you skip the last step and use the dlabel file.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Wednesday, August 9, 2017 at 9:43 AM
To: Timothy Coalson <tsc...@mst.edu>, Matt Glasser <glass...@wustl.edu>
Cc: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: RE: [HCP-Users] How can I convert subcortical nifti files to dlabel files?

Dear Tim and Matt,
Thank you very much for your reply.

I tried -volume-label-import, followed by -cifti-create-label, followed by 
-cifti-all-labels-to-rois.
After this, when I right-click the cluster in wb_view, I see the option "Show 
Data/Time Series Graph For Parcel [my cluster]" but I do not see the option 
"Show Cifti Connectivity For Parcel [my cluster]" (even though I can see this 
option for other parcels, such as the Yeo map).
I have been trying different things but cannot figure it out.

Some extra information in case it is useful:

My clusters are a group average registered to the HCP 
"S900_Average_T1w_restore.nii", so at this point I am not concerned about 
comparison across subjects. I would like to calculate functional connectivity 
from each of my subcortical clusters using your S900 .dconn file.
My original nifti file with the right cerebellum cluster is called 
"mycluster.nii".
I created the file "textforvolumelabelimport.txt" with the following text:
CEREBELLUM_RIGHT
1 1 1 1
Then I did the following:
wb_command -volume-label-import mycluster.nii textforvolumelabelimport.txt 
mycluster_label.nii
wb_command -cifti-create-label mycluster_labelStep2.dlabel.nii -volume 
mycluster_label.nii mycluster_label.nii
wb_command -cifti-all-labels-to-rois mycluster_labelStep2.dlabel.nii 1 
mycluster_labelStep3.dscalar.nii

When I right-click "mycluster_labelStep3.dscalar.nii" in wb_view, I can see the 
option of "Show Data/Time Series Graph" but not the option of "Show Cifti 
Connectivity".

Thank you very much,
Xavier.


From: Timothy Coalson [tsc...@mst.edu]
Sent: Tuesday, August 08, 2017 4:39 PM
To: Xavier Guell Paradis
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How can I convert subcortical nifti files to dlabel 
files?

I'm assuming you want them to match a standard grayordinate space, so that they 
can be compared across subjects.

The simple way that doesn't account for residual subject differences in 
subcortical locations is to first resample the data to the appropriate 
resolution/orientation MNI space (2x2x2 mm for the 91k grayordinates), then use 
-cifti-create-dense-from-template with the -volume-all option.
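
A sketch of that simple route, assuming the ROI volume has already been resampled to the 2mm MNI grid (file names hypothetical):

wb_command -cifti-create-dense-from-template template_91k.dscalar.nii roi_in_grayordinates.dscalar.nii -volume-all roi_2mm.nii.gz

where template_91k.dscalar.nii is any existing 91k-grayordinate cifti file.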

The better but more involved way is to take the subject's subcortical structure 
labels from freesurfer, import them to workbench format with the names that 
-cifti-create-label specifies, use -cifti-create-label to make a 
subject-specific cifti file (you will also need to provide some dummy surface 
data for the next step to work), and then use -cifti-resample to use only the 
same-structure-overlap information, and dilate to fill in any holes if desired.

We use this second method for fMRI data in the pipelines, see here:

https://github.com/Washington-University/Pipelines/blob/master/fMRISurface/scripts/SubcorticalProcessing.sh#L40

Though that script actually only outputs a volume file, and therefore it 
doesn't bother with having surface data in those temporary cifti files.

Re: [HCP-Users] How can I convert subcortical nifti files to dlabel files?

2017-08-09 Thread Xavier Guell Paradis
Dear Tim and Matt,
Thank you very much for your reply.

I tried -volume-label-import, followed by -cifti-create-label, followed by 
-cifti-all-labels-to-rois.
After this, when I right-click the cluster in wb_view, I see the option "Show 
Data/Time Series Graph For Parcel [my cluster]" but I do not see the option 
"Show Cifti Connectivity For Parcel [my cluster]" (even though I can see this 
option for other parcels, such as the Yeo map).
I have been trying different things but cannot figure it out.

Some extra information in case it is useful:

My clusters are a group average registered to the HCP 
"S900_Average_T1w_restore.nii", so at this point I am not concerned about 
comparison across subjects. I would like to calculate functional connectivity 
from each of my subcortical clusters using your S900 .dconn file.
My original nifti file with the right cerebellum cluster is called 
"mycluster.nii".
I created the file "textforvolumelabelimport.txt" with the following text:
CEREBELLUM_RIGHT
1 1 1 1
Then I did the following:
wb_command -volume-label-import mycluster.nii textforvolumelabelimport.txt 
mycluster_label.nii
wb_command -cifti-create-label mycluster_labelStep2.dlabel.nii -volume 
mycluster_label.nii mycluster_label.nii
wb_command -cifti-all-labels-to-rois mycluster_labelStep2.dlabel.nii 1 
mycluster_labelStep3.dscalar.nii

When I right-click "mycluster_labelStep3.dscalar.nii" in wb_view, I can see the 
option of "Show Data/Time Series Graph" but not the option of "Show Cifti 
Connectivity".

Thank you very much,
Xavier.


From: Timothy Coalson [tsc...@mst.edu]
Sent: Tuesday, August 08, 2017 4:39 PM
To: Xavier Guell Paradis
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How can I convert subcortical nifti files to dlabel 
files?

I'm assuming you want them to match a standard grayordinate space, so that they 
can be compared across subjects.

The simple way that doesn't account for residual subject differences in 
subcortical locations is to first resample the data to the appropriate 
resolution/orientation MNI space (2x2x2 mm for the 91k grayordinates), then use 
-cifti-create-dense-from-template with the -volume-all option.

The better but more involved way is to take the subject's subcortical structure 
labels from freesurfer, import them to workbench format with the names that 
-cifti-create-label specifies, use -cifti-create-label to make a 
subject-specific cifti file (you will also need to provide some dummy surface 
data for the next step to work), and then use -cifti-resample to use only the 
same-structure-overlap information, and dilate to fill in any holes if desired.

We use this second method for fMRI data in the pipelines, see here:

https://github.com/Washington-University/Pipelines/blob/master/fMRISurface/scripts/SubcorticalProcessing.sh#L40

Though that script actually only outputs a volume file, and therefore it 
doesn't bother with having surface data in those temporary cifti files.

Tim


On Tue, Aug 8, 2017 at 3:19 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Dear HCP experts,
I have several subcortical nifti files, each containing one cluster. I would 
like to convert them to dlabel files, so that then I can use wb_view to see the 
functional connectivity from each of these clusters (using your group .dconn 
file).

How can I convert subcortical nifti files to dlabel files?
I have been exploring several wb_commands but I cannot figure it out.

Thank you very much,
Xavier.



[HCP-Users] How can I convert subcortical nifti files to dlabel files?

2017-08-08 Thread Xavier Guell Paradis
Dear HCP experts,
I have several subcortical nifti files, each containing one cluster. I would 
like to convert them to dlabel files, so that then I can use wb_view to see the 
functional connectivity from each of these clusters (using your group .dconn 
file).

How can I convert subcortical nifti files to dlabel files?
I have been exploring several wb_commands but I cannot figure it out.

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] saving one of the .dconn maps as a separate cifti?

2017-07-13 Thread Xavier Guell Paradis
This is very useful, thank you very much for the quick and very helpful reply!
Xavier.

From: Elam, Jennifer [e...@wustl.edu]
Sent: Thursday, July 13, 2017 11:21 AM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] saving one of the .dconn maps as a separate cifti?


Hi Xavier,

When you have the .dconn file open in wb_view and have clicked on the desired 
seed location, click on the Connectivity tab in the Overlay Toolbox, then click 
the "Copy" button next to the loaded .dconn file listed.

Then go to File -> Save/Manage Files. Your map of the connectivity should be 
listed there as a .dscalar file with the Save checkbox already checked. Click 
Save Checked Files to save with the default name, or click on the gear button 
in the More column to set a new file name, then click Save Checked Files.


Let me know if you'd like some screenshots-- I don't think they will come 
through on the list.


Best,

Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org




From: hcp-users-boun...@humanconnectome.org 
 on behalf of Xavier Guell Paradis 

Sent: Thursday, July 13, 2017 9:56 AM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] saving one of the .dconn maps as a separate cifti?

Dear HCP experts,
I have the 33GB .dconn file, have clicked in one place in the cerebral cortex 
and now I am seeing a connectivity map in the cerebral cortex and subcortical 
structures. The map is called "Row: 27597, Node Index: 30393, Structure: 
CORTEX_LEFT".
Is it possible to access this map separately as an independent file, or to save 
this map that I am seeing as a separate cifti file?
I have been playing with "Save/Manage files" but cannot figure it out. I 
thought I could try -cifti-separate and look for the file called "Row: 27597, 
Node Index: 30393, Structure: CORTEX_LEFT", but -cifti-separate seems to 
require a lot of processing memory.

Thank you very much,
Xavier.

Xavier Guell Paradis, M.D.
Research Fellow
Massachusetts Institute of Technology
McGovern Institute for Brain Research
Office: 46-4033A
Phone: (617) 324-4355
Email: xavie...@mit.edu

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] saving one of the .dconn maps as a separate cifti?

2017-07-13 Thread Xavier Guell Paradis
Dear HCP experts,
I have the 33GB .dconn file, have clicked in one place in the cerebral cortex 
and now I am seeing a connectivity map in the cerebral cortex and subcortical 
structures. The map is called "Row: 27597, Node Index: 30393, Structure: 
CORTEX_LEFT".
Is it possible to access this map separately as an independent file, or to save 
this map that I am seeing as a separate cifti file?
I have been playing with "Save/Manage files" but cannot figure it out. I 
thought I could try -cifti-separate and look for the file called "Row: 27597, 
Node Index: 30393, Structure: CORTEX_LEFT", but -cifti-separate seems to 
require a lot of processing memory.

Thank you very much,
Xavier.

Xavier Guell Paradis, M.D.
Research Fellow
Massachusetts Institute of Technology
McGovern Institute for Brain Research
Office: 46-4033A
Phone: (617) 324-4355
Email: xavie...@mit.edu

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Convert a .dscalar.nii to multiple nifti files?

2017-07-05 Thread Xavier Guell Paradis
It works, thank you very much!
Xavier.

From: Harms, Michael [mha...@wustl.edu]
Sent: Wednesday, July 05, 2017 12:09 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Convert a .dscalar.nii to multiple nifti files?


In that case, I believe what you need is -cifti-separate
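For instance, a sketch with hypothetical filenames: the -volume-all option 
writes every volume structure into one NIFTI with one frame per map, and 
-volume-merge with -subvolume can then split out the individual maps:

wb_command -cifti-separate mymaps.dscalar.nii COLUMN -volume-all mymaps_vol.nii
wb_command -volume-merge map1.nii -volume mymaps_vol.nii -subvolume 1
wb_command -volume-merge map2.nii -volume mymaps_vol.nii -subvolume 2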

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Wednesday, July 5, 2017 at 11:07 AM
To: Michael Harms <mha...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: RE: [HCP-Users] Convert a .dscalar.nii to multiple nifti files?

Thank you very much for the reply.
The .dscalar.nii file that I am interested in converting to multiple nifti 
files contains only cerebellar data, which is not surface information.
Would there be any way of converting that to multiple nifti files?

Thank you very much,
Xavier.

From: Harms, Michael [mha...@wustl.edu]
Sent: Wednesday, July 05, 2017 12:02 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Convert a .dscalar.nii to multiple nifti files?


Hi,
You can’t represent the surface information in a .dscalar.nii via a nifti file.
The -cifti-convert -to-nifti command exists for converting a CIFTI to a 
“fake”-NIFTI file for use in external tools/analyses that can be conducted on a 
per-grayordinate basis (i.e., without any regard to spatial information).
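A sketch of that round trip, with hypothetical filenames:

wb_command -cifti-convert -to-nifti mydata.dscalar.nii mydata_fake.nii
(run the external per-grayordinate tool on mydata_fake.nii)
wb_command -cifti-convert -from-nifti mydata_fake.nii mydata.dscalar.nii result.dscalar.nii

The original cifti file serves as the template in the last step, so the result 
gets the correct dense mapping back.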

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Wednesday, July 5, 2017 at 10:55 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Convert a .dscalar.nii to multiple nifti files?

Dear HCP experts,
Is it possible to convert a .dscalar.nii which contains 4 different maps to 4 
different nifti files (one for each map)?
I have tried -cifti-convert -to-nifti but the output was a very strange nifti 
file.

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Convert a .dscalar.nii to multiple nifti files?

2017-07-05 Thread Xavier Guell Paradis
Thank you very much for the reply.
The .dscalar.nii file that I am interested in converting to multiple nifti 
files contains only cerebellar data, which is not surface information.
Would there be any way of converting that to multiple nifti files?

Thank you very much,
Xavier.

From: Harms, Michael [mha...@wustl.edu]
Sent: Wednesday, July 05, 2017 12:02 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Convert a .dscalar.nii to multiple nifti files?


Hi,
You can’t represent the surface information in a .dscalar.nii via a nifti file.
The -cifti-convert -to-nifti command exists for converting a CIFTI to a 
“fake”-NIFTI file for use in external tools/analyses that can be conducted on a 
per-grayordinate basis (i.e., without any regard to spatial information).

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Wednesday, July 5, 2017 at 10:55 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Convert a .dscalar.nii to multiple nifti files?

Dear HCP experts,
Is it possible to convert a .dscalar.nii which contains 4 different maps to 4 
different nifti files (one for each map)?
I have tried -cifti-convert -to-nifti but the output was a very strange nifti 
file.

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Convert a .dscalar.nii to multiple nifti files?

2017-07-05 Thread Xavier Guell Paradis
Dear HCP experts,
Is it possible to convert a .dscalar.nii which contains 4 different maps to 4 
different nifti files (one for each map)?
I have tried -cifti-convert -to-nifti but the output was a very strange nifti 
file.

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] modify dconn file so that it only includes cerebellum values?

2017-05-05 Thread Xavier Guell Paradis
Thank you very much!

From: Timothy Coalson [tsc...@mst.edu]
Sent: Friday, May 05, 2017 4:58 PM
To: Xavier Guell Paradis
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] modify dconn file so that it only includes cerebellum 
values?

The command to do this is -cifti-separate with the -volume option and -roi 
suboption, and the CEREBELLUM_LEFT and CEREBELLUM_RIGHT structures (we don't 
use the CEREBELLUM label in the volume, instead we split the hemispheres), 
however, it always outputs the data from the requested structure also, so you 
don't want to run it on a dconn (especially not for volume structures).  
Instead, run it on any dscalar or small dtseries file you have lying around 
that uses the same dense mapping (or use -cifti-reduce to make one from the 
dconn).  Alternatively, if you are using a standard grayordinate space, you can 
extract the cerebellum halves from the volume label file in the Pipelines 
repository that defines the grayordinate voxels, with -volume-label-to-roi, 
like this:

wb_command -volume-label-to-roi 
Pipelines/global/templates/91282_Greyordinates/Atlas_ROIs.2.nii.gz 
cerebellum_left_roi.nii.gz -name CEREBELLUM_LEFT
wb_command -volume-label-to-roi 
Pipelines/global/templates/91282_Greyordinates/Atlas_ROIs.2.nii.gz 
cerebellum_right_roi.nii.gz -name CEREBELLUM_RIGHT

Since either way of getting these ROIs has the halves separate, you then need 
to combine them with -volume-math:

wb_command -volume-math 'x || y' cerebellum_all_roi.nii.gz -var x 
cerebellum_left_roi.nii.gz -var y cerebellum_right_roi.nii.gz

Tim


On Fri, May 5, 2017 at 3:42 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Thank you for your reply.
Running -cifti-restrict-dense-map again with ROW direction has worked; now I see 
cerebellum data only. However, I think it would be "cleaner" to do as you 
suggested and "get the ROI of where cerebellum data exists from the cifti 
file". How could I do this?
Thank you very much,
Xavier.

From: Timothy Coalson [tsc...@mst.edu]
Sent: Friday, May 05, 2017 4:32 PM
To: Xavier Guell Paradis
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] modify dconn file so that it only includes cerebellum 
values?

On Fri, May 5, 2017 at 3:09 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Dear HCP experts,
I am trying to modify the group average dconn file 
(HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii) so that it only 
includes data from the cerebellum (even though, of course, these data will 
correspond to the connectivity of each cerebellum voxel to the rest of the 
brain).
I have tried -cifti-restrict-dense-map inputfile COLUMN outputfile -vol-roi 
cerebellumatlas.nii
("cerebellumatlas.nii" is a cerebellum volume atlas which contains values for 
the cerebellum only)
This has not worked.

That should work, but you need to run it again on the output of that, this time 
with the ROW direction, so that both directions are cerebellum-only.  Please be 
more specific than "has not worked", did you get an error (and if so, copy the 
error message), or did its output not match what you expected?
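Spelled out, the two passes would look something like this (a sketch reusing 
the filenames from your message):

wb_command -cifti-restrict-dense-map input.dconn.nii COLUMN temp.dconn.nii -vol-roi cerebellumatlas.nii
wb_command -cifti-restrict-dense-map temp.dconn.nii ROW cerebellumonly.dconn.nii -vol-roi cerebellumatlas.nii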

I have also tried to use the -cerebellum-roi option of 
-cifti-restrict-dense-map, without writing any metric file after 
-cerebellum-roi. This also doesn't work.

This should have caused an error, as the current grayordinates space doesn't 
use surfaces for cerebellum.  Additionally, if you don't provide a required 
argument to an option, you will get a different kind of error.

Is there any way to tell wb_command that I only want to keep the cerebellar 
data, without having to include any file which indicates where the cerebellum 
is?

No, this isn't a use case we expected, normally we want to match existing cifti 
mappings (for instance, 91k grayordinates), not make new ones.  It is possible 
to get the ROI of where cerebellum data exists from the cifti file, but since 
you say you already have that ROI...on the other hand, if the specific error 
you got was something like "volume space doesn't match", then you actually 
should derive the ROI from the cifti file, rather than using whatever you have.

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] modify dconn file so that it only includes cerebellum values?

2017-05-05 Thread Xavier Guell Paradis
Thank you for your reply.
Running -cifti-restrict-dense-map again with ROW direction has worked; now I see 
cerebellum data only. However, I think it would be "cleaner" to do as you 
suggested and "get the ROI of where cerebellum data exists from the cifti 
file". How could I do this?
Thank you very much,
Xavier.

From: Timothy Coalson [tsc...@mst.edu]
Sent: Friday, May 05, 2017 4:32 PM
To: Xavier Guell Paradis
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] modify dconn file so that it only includes cerebellum 
values?

On Fri, May 5, 2017 at 3:09 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Dear HCP experts,
I am trying to modify the group average dconn file 
(HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii) so that it only 
includes data from the cerebellum (even though, of course, these data will 
correspond to the connectivity of each cerebellum voxel to the rest of the 
brain).
I have tried -cifti-restrict-dense-map inputfile COLUMN outputfile -vol-roi 
cerebellumatlas.nii
("cerebellumatlas.nii" is a cerebellum volume atlas which contains values for 
the cerebellum only)
This has not worked.

That should work, but you need to run it again on the output of that, this time 
with the ROW direction, so that both directions are cerebellum-only.  Please be 
more specific than "has not worked", did you get an error (and if so, copy the 
error message), or did its output not match what you expected?

I have also tried to use the -cerebellum-roi option of 
-cifti-restrict-dense-map, without writing any metric file after 
-cerebellum-roi. This also doesn't work.

This should have caused an error, as the current grayordinates space doesn't 
use surfaces for cerebellum.  Additionally, if you don't provide a required 
argument to an option, you will get a different kind of error.

Is there any way to tell wb_command that I only want to keep the cerebellar 
data, without having to include any file which indicates where the cerebellum 
is?

No, this isn't a use case we expected, normally we want to match existing cifti 
mappings (for instance, 91k grayordinates), not make new ones.  It is possible 
to get the ROI of where cerebellum data exists from the cifti file, but since 
you say you already have that ROI...on the other hand, if the specific error 
you got was something like "volume space doesn't match", then you actually 
should derive the ROI from the cifti file, rather than using whatever you have.

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] modify dconn file so that it only includes cerebellum values?

2017-05-05 Thread Xavier Guell Paradis
Dear HCP experts,
I am trying to modify the group average dconn file 
(HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii) so that it only 
includes data from the cerebellum (even though, of course, these data will 
correspond to the connectivity of each cerebellum voxel to the rest of the 
brain).
I have tried -cifti-restrict-dense-map inputfile COLUMN outputfile -vol-roi 
cerebellumatlas.nii
("cerebellumatlas.nii" is a cerebellum volume atlas which contains values for 
the cerebellum only)
This has not worked.
I have also tried to use the -cerebellum-roi option of 
-cifti-restrict-dense-map, without writing any metric file after 
-cerebellum-roi. This also doesn't work.

Is there any way to tell wb_command that I only want to keep the cerebellar 
data, without having to include any file which indicates where the cerebellum 
is?

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] merge multiple dtseries.nii into one dscalar.nii?

2017-05-05 Thread Xavier Guell Paradis
It worked, thank you very much.
Xavier.

From: Timothy Coalson [tsc...@mst.edu]
Sent: Thursday, May 04, 2017 5:42 PM
To: Xavier Guell Paradis
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] merge multiple dtseries.nii into one dscalar.nii?

Use -cifti-merge to make a concatenated dtseries, then you can use 
-cifti-change-mapping to convert it to dscalar.
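A sketch with hypothetical filenames (the time axis of a dtseries runs along 
ROW, which is the mapping being replaced):

wb_command -cifti-merge merged.dtseries.nii -cifti cope1.dtseries.nii -cifti cope2.dtseries.nii -cifti cope3.dtseries.nii -cifti cope4.dtseries.nii
wb_command -cifti-change-mapping merged.dtseries.nii ROW merged.dscalar.nii -scalar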

Tim


On Thu, May 4, 2017 at 3:22 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Dear HCP experts,
Is there a way I can create a dscalar.nii file which contains four 
cope.dtseries.nii files as four different maps?
I have been exploring different possibilities with wb_command but can't find 
the way to do it.

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] merge multiple dtseries.nii into one dscalar.nii?

2017-05-04 Thread Xavier Guell Paradis
Dear HCP experts,
Is there a way I can create a dscalar.nii file which contains four 
cope.dtseries.nii files as four different maps?
I have been exploring different possibilities with wb_command but can't find 
the way to do it.

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Number of twins in group of subjects: would this be considered "publishing restricted data"?

2017-03-08 Thread Xavier Guell Paradis
Dear HCP experts,
If we reported in a publication the number of twins in a group of subjects, 
would this be considered "publishing restricted data"? (e.g., "800 subjects 
completed all tasks. This group included 100 pairs of twins")

Thank you,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Memory required for -cifti-correlation of 700 subjects?

2017-02-24 Thread Xavier Guell Paradis
Hi Michael,
Thank you for your message. I am interested in calculating resting-state 
functional connectivity in a group including only the subjects who completed 
all tasks (n=787). This is why I would like to generate a new dconn file.
Thank you,
Xavier.

From: Harms, Michael [mha...@wustl.edu]
Sent: Friday, February 24, 2017 9:44 AM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Memory required for -cifti-correlation of 700 subjects?


Hi,
Let’s step back.  Why can’t you use the group dense connectome that we’ve 
already computed and provided?

As noted in our documentation
https://www.humanconnectome.org/documentation/S900/820_Group-average_rfMRI_Connectivity_December2015.pdf
computing the dense connectome optimally is not trivial (and involves quite a 
bit more than a -cifti-correlation operation).

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Friday, February 24, 2017 at 8:30 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] Memory required for -cifti-correlation of 700 subjects?

Dear HCP experts,
After demeaning and merging resting-state files of 700 subjects (resulting in a 
1200GB file), I would like to do -cifti-correlation to get a .dconn file. I am 
using a computational cluster, and even by using a node with 300GB of memory 
the command does not seem to work (I get the message: "Exceeded job memory 
limit, Job step aborted: waiting up to 32 seconds for job step to finish"). I 
have tried to use no -mem-limit as well as a -mem-limit as low as 5, and I 
still get the same message.

Do you know if it is possible to use -cifti-correlation for a huge file (700 
subjects merged); and if so what level of memory would be required to do this?

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Memory required for -cifti-correlation of 700 subjects?

2017-02-24 Thread Xavier Guell Paradis
Dear HCP experts,
After demeaning and merging resting-state files of 700 subjects (resulting in a 
1200GB file), I would like to do -cifti-correlation to get a .dconn file. I am 
using a computational cluster, and even by using a node with 300GB of memory 
the command does not seem to work (I get the message: "Exceeded job memory 
limit, Job step aborted: waiting up to 32 seconds for job step to finish"). I 
have tried to use no -mem-limit as well as a -mem-limit as low as 5, and I 
still get the same message.

Do you know if it is possible to use -cifti-correlation for a huge file (700 
subjects merged); and if so what level of memory would be required to do this?

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] d maps from cope files: first merge and then demean, or first demean and then merge?

2017-02-15 Thread Xavier Guell Paradis
Dear HCP experts,
I have been calculating group Cohen's d maps as follows:
1) -cifti-merge all level2 cope files of a task contrast
2) -cifti-reduce mean, -cifti-reduce stdev and -cifti-math mean/stdev

Is this the correct order, or should I have demeaned each individual subject 
before using -cifti-merge?

I am asking this because I read that "you should never temporally concatenate 
without first demeaning the individual timeseries" here: 
http://www.mail-archive.com/hcp-users@humanconnectome.org/msg00444.html

Thank you,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Edit colors of palette in wb view?

2017-02-13 Thread Xavier Guell Paradis
It worked, thank you very much!
Xavier.

From: hcp-users-boun...@humanconnectome.org 
[hcp-users-boun...@humanconnectome.org] on behalf of Harwell, John 
[jharw...@wustl.edu]
Sent: Monday, February 13, 2017 3:38 PM
To: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Edit colors of palette in wb view?

Your file appears to be a NIFTI volume file.  Use the command “wb_command 
-volume-label-import”.  This command adds labels to the volume file’s header 
and creates a new NIFTI volume that is viewable in wb_view.
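A sketch of what that could look like for this file (the label list file name, 
label names, and colors below are made up; each label in the list file is a 
name line followed by a line of "key red green blue alpha" values):

networknames.txt:
Network_1
1 230 148 34 255
Network_2
2 70 130 180 255
(...continuing through key 7)

wb_command -volume-label-import Buckner2011_7Networks_MNI152_FreeSurferConformed1mm_LooseMask.nii networknames.txt Buckner2011_7Networks_labels.nii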

On Feb 13, 2017, at 1:43 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:

Dear John,
My file is a cerebellum map: 
Buckner2011_7Networks_MNI152_FreeSurferConformed1mm_LooseMask.nii. This file 
contains integers from 1 to 7 for the cerebellar structures.
I have tried -cifti-label-import and -metric-label-import but they do not seem 
to work. Which wb command would you recommend?
Thank you very much,
Xavier.


From: hcp-users-boun...@humanconnectome.org [hcp-users-boun...@humanconnectome.org] on behalf of Harwell, John [jharw...@wustl.edu]
Sent: Monday, February 13, 2017 1:07 PM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] Edit colors of palette in wb view?

Hello,

You will need to convert your data to the CIFTI Label File format using 
“wb_command".  A label table contains a group of labels and each label consists 
of a text name, RGB color components, and an integer value.  When the data is 
displayed, coloring is performed by matching data values to the integer values 
in the label table.

Without knowing the format of your data, I am unable to suggest the best 
“wb_command” for you to use but one of these may be useful:
* wb_command -cifti-create-label
* wb_command -cifti-label-import
* wb_command -metric-label-import

Related questions have been asked previously and these are links to them from 
the HCP user archives:
* http://www.mail-archive.com/hcp-users@humanconnectome.org/msg02286.html
* http://www.mail-archive.com/hcp-users@humanconnectome.org/msg01919.html
* http://www.mail-archive.com/hcp-users@humanconnectome.org/msg02548.html

John Harwell



From: hcp-users-boun...@humanconnectome.org [hcp-users-boun...@humanconnectome.org] on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Sent: Monday, February 13, 2017 11:05 AM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] Edit colors of palette in wb view?

Dear HCP experts,
Is it possible to edit the colors of the palette in wb view? For example, is it 
possible to define color A, color B and color C and tell wb view to show 
values=1 in color A, values=2 in color B, values=3 in color C, etc.?
Thank you very much,
Xavier.
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Edit colors of palette in wb view?

2017-02-13 Thread Xavier Guell Paradis
Dear John,
My file is a cerebellum map: 
Buckner2011_7Networks_MNI152_FreeSurferConformed1mm_LooseMask.nii. This file 
contains integers from 1 to 7 for the cerebellar structures.
I have tried -cifti-label-import and -metric-label-import but they do not seem 
to work. Which wb command would you recommend?
Thank you very much,
Xavier.


From: hcp-users-boun...@humanconnectome.org 
[hcp-users-boun...@humanconnectome.org] on behalf of Harwell, John 
[jharw...@wustl.edu]
Sent: Monday, February 13, 2017 1:07 PM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] Edit colors of palette in wb view?

Hello,

You will need to convert your data to the CIFTI Label File format using 
“wb_command".  A label table contains a group of labels and each label consists 
of a text name, RGB color components, and an integer value.  When the data is 
displayed, coloring is performed by matching data values to the integer values 
in the label table.

Without knowing the format of your data, I am unable to suggest the best 
“wb_command” for you to use but one of these may be useful:
* wb_command -cifti-create-label
* wb_command -cifti-label-import
* wb_command -metric-label-import

Related questions have been asked previously and these are links to them from 
the HCP user archives:
* http://www.mail-archive.com/hcp-users@humanconnectome.org/msg02286.html
* http://www.mail-archive.com/hcp-users@humanconnectome.org/msg01919.html
* http://www.mail-archive.com/hcp-users@humanconnectome.org/msg02548.html

John Harwell



From: hcp-users-boun...@humanconnectome.org [hcp-users-boun...@humanconnectome.org] on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Sent: Monday, February 13, 2017 11:05 AM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] Edit colors of palette in wb view?

Dear HCP experts,
Is it possible to edit the colors of the palette in wb view? For example, is it 
possible to define color A, color B and color C and tell wb view to show 
values=1 in color A, values=2 in color B, values=3 in color C, etc.?
Thank you very much,
Xavier.
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Edit colors of palette in wb view?

2017-02-13 Thread Xavier Guell Paradis
Dear HCP experts,
Is it possible to edit the colors of the palette in wb view? For example, is it 
possible to define color A, color B and color C and tell wb view to show 
values=1 in color A, values=2 in color B, values=3 in color C, etc.?
Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Expected date of 1,200 subjects release?

2017-02-11 Thread Xavier Guell Paradis
Dear Jenn,
Thank you for your reply. Will this release include all level 2 and level 3 
analyses, or only the preprocessed or raw data of the individual subjects?
Thank you,
Xavier.

From: Elam, Jennifer [e...@wustl.edu]
Sent: Saturday, February 11, 2017 1:12 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: Expected date of 1,200 subjects release?

Hi Xavier,
Look for it in the next week or two, barring any last minute issues.

Best,
Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org


From: hcp-users-boun...@humanconnectome.org 
 on behalf of Xavier Guell Paradis 

Sent: Saturday, February 11, 2017 9:07:07 AM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] Expected date of 1,200 subjects release?

Dear HCP team,
Is there an expected date for the release of the 1,200 subjects data?
Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Expected date of 1,200 subjects release?

2017-02-11 Thread Xavier Guell Paradis
Dear HCP team,
Is there an expected date for the release of the 1,200 subjects data?
Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Motor task contrasts: no RH-cue, LH-cue, etc.?

2017-02-07 Thread Xavier Guell Paradis
Hi Greg,
Thank you for your reply. Would you recommend the following commands to analyze 
the data as you suggested?:

1) -cifti-merge all level2 cope files of RH (create 
"mergedcopeMOTORRHONLY.dtseries.nii")

2) -cifti-merge all level2 cope files of CUE (create 
"mergedcopeMOTORCUEONLY.dtseries.nii")

3) calculate RH minus CUE cope file as follows:
-cifti-math '(x) - (y)' /mergedcopeMOTORRHMINUSCUE.dtseries.nii -var x 
/mergedcopeMOTORRHONLY.dtseries.nii -var y /mergedcopeMOTORCUEONLY.dtseries.nii

4) then obtain Cohen's d as follows:
-cifti-reduce /mergedcopeMOTORRHMINUSCUE.dtseries.nii MEAN 
/meancopeMOTORRHMINUSCUE.dscalar.nii;
-cifti-reduce /mergedcopeMOTORRHMINUSCUE.dtseries.nii STDEV 
/stdevcopeMOTORRHMINUSCUE.dscalar.nii;
-cifti-math '(mean) / stdev' /cohendmapMOTORRHMINUSCUE.dscalar.nii -var mean 
/meancopeMOTORRHMINUSCUE.dscalar.nii -var stdev 
/stdevcopeMOTORRHMINUSCUE.dscalar.nii

Thank you very much,
Xavier.

From: Burgess, Gregory [gburg...@wustl.edu]
Sent: Monday, February 06, 2017 5:18 PM
To: Xavier Guell Paradis
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Motor task contrasts: no RH-cue, LH-cue, etc.?

I thought that contrasting each effector against the average of the others 
(e.g., RH-AVG) was a more-effective control to isolate motor-specific regions. 
If you are still interested in contrasting each effector versus the cue 
(controlling for visual activation without controlling for other task-related 
processes), it is possible for you to create it yourself by subtracting the 
cope maps for ‘CUE’ from the cope for each effector.

--Greg


Greg Burgess, Ph.D.
Staff Scientist, Human Connectome Project
Washington University School of Medicine
Department of Psychiatry
Phone: 314-362-7864
Email: gburg...@wustl.edu

> On Feb 3, 2017, at 4:25 PM, Xavier Guell Paradis  wrote:
>
> Dear HCP experts,
> I was wondering if there is any reason why motor contrasts of one motor task 
> minus cue (e.g. RH-Cue) were not calculated.
> Thank you very much,
> Xavier.
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users




___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Motor task contrasts: no RH-cue, LH-cue, etc.?

2017-02-03 Thread Xavier Guell Paradis
Dear HCP experts,
I was wondering if there is any reason why motor contrasts of one motor task 
minus cue (e.g. RH-Cue) were not calculated.
Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Downloading HCP data to a shared computing cluster

2017-02-03 Thread Xavier Guell Paradis
Dear HCP experts,
I was wondering if it is possible to download HCP data directly to a shared 
computing cluster (e.g. the openmind.mit.edu cluster). Since data is downloaded 
via Aspera Connect, there is no link that one can copy and paste to download 
data directly to the cluster.
Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] 900 group average: are individual subjects the "S500 Release Subjects" + the "S900 New Subjects"

2017-02-03 Thread Xavier Guell Paradis
Thank you!
I noticed (and this is also seen in the image Jenn attached) that the total 
number of subjects with 100% tasks and 100% resting state is 788 (not 787, 
which is the number of subjects in the S900 group average). Is there a subject 
that was excluded from the S900 group average for another reason?
Thank you very much,
Xavier.

From: Elam, Jennifer [e...@wustl.edu]
Sent: Friday, February 03, 2017 11:12 AM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] 900 group average: are individual subjects the "S500 
Release Subjects" + the "S900 New Subjects"


Mike just answered while I was writing this, but for everyone's benefit who 
need to do something similar:

To get the list of subjects that were included in the S900 group average task 
analysis, you need to go to the subject dashboard (click on Explore All Family 
Subjects to get there from the Public Connectome Data page) and filter all 
subjects for those that have 3T Resting state Percent complete = 100 AND 3T 
Task fMRI Percent Complete = 100.

Once you have that filter set, you can click "Save Group" to save this group 
and "Download CSV" to download this group and all it's behavioral and 
demographic data in a spreadsheet.


Best,
Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org


____
From: hcp-users-boun...@humanconnectome.org 
 on behalf of Xavier Guell Paradis 

Sent: Friday, February 3, 2017 9:41:31 AM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] 900 group average: are individual subjects the "S500 
Release Subjects" + the "S900 New Subjects"

Dear HCP experts,
In order to get the level2 data from the individual subjects which were 
included in the 900 subjects group average, should we put together the subjects 
of the "S500 Release Subjects" with the "S900 New Subjects"?
Thank you,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] 900 group average: are individual subjects the "S500 Release Subjects" + the "S900 New Subjects"

2017-02-03 Thread Xavier Guell Paradis
Dear HCP experts,
In order to get the level2 data from the individual subjects which were 
included in the 900 subjects group average, should we put together the subjects 
of the "S500 Release Subjects" with the "S900 New Subjects"?
Thank you,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] How to know cluster size (number of voxels) after -cifti-find-clusters

2017-02-02 Thread Xavier Guell Paradis
Dear Tim,
Thank you very much for your message. In the last step (-cifti-weighted-stats), 
I am not sure which file to use for the -cerebellum-area-metric. I would like 
to obtain cerebellar cluster sizes in mm^3. Would it be possible to use a 
cerebellar volumetric atlas such as Cerebellum-MNIfnirt-maxprob-thr25.nii  
(this is an atlas available here 
http://www.diedrichsenlab.org/imaging/propatlas.htm), so that the output of 
-cifti-weighted-stats says in which cerebellar lobule(s) each cluster is found?

Thank you very much,
Xavier.

From: Timothy Coalson [tsc...@mst.edu]
Sent: Monday, January 30, 2017 6:18 PM
To: Xavier Guell Paradis
Cc: hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How to know cluster size (number of voxels) after 
-cifti-find-clusters

The -cifti-find-clusters command assigns a separate integer within each 
cluster.  You can use -cifti-label-import and then -cifti-all-labels-to-rois to 
get each cluster in a separate map.  Then, -cifti-weighted-stats with 
-spatial-weights and -sum will give you mm^2 for surface clusters and mm^3 for 
volume clusters.  Unfortunately, it is not easy to tell from the command line 
whether each cluster is on the surface or in the volume.  However, you could 
make a cifti file using the output of -surface-wedge-volume and a volume file 
containing the voxel volume, and use that in -cifti-weighted-stats with 
-cifti-weights and -sum to get both surface and volume clusters in mm^3 (which 
assumes that surface clusters are the full width of the ribbon).
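Put together, a sketch for a file whose clusters are all in the volume, such 
as cerebellar clusters (filenames hypothetical; the empty label list file 
argument lets -cifti-label-import auto-generate one label per cluster value, 
and a file with surface clusters would additionally need the vertex-area 
suboptions of -spatial-weights):

wb_command -cifti-label-import clusters.dscalar.nii '' clusters.dlabel.nii
wb_command -cifti-all-labels-to-rois clusters.dlabel.nii 1 cluster_rois.dscalar.nii
wb_command -cifti-weighted-stats cluster_rois.dscalar.nii -spatial-weights -sum

Each map of the ROI file then reports the mm^3 of one cluster.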

Tim


On Mon, Jan 30, 2017 at 4:26 PM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
Dear HCP experts,
After using -cifti-find-clusters, is there a way to know the size of the 
clusters that the command has found? We know that the clusters will be larger 
than the specified volume-value-threshold, but is there a way to know the mm^3 
or number of voxels of the clusters identified?

Thank you very much,
Xavier.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] How many voxels in the cerebellum?

2017-02-02 Thread Xavier Guell Paradis
Great, thank you very much!!

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, February 02, 2017 11:34 AM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How many voxels in the cerebellum?

2mm standard CIFTI files (with 91282 grayordinates) have the following:

CortexLeft: 29696 out of 32492 vertices
CortexRight: 29716 out of 32492 vertices
AccumbensLeft: 135 voxels
AccumbensRight: 140 voxels
AmygdalaLeft: 315 voxels
AmygdalaRight: 332 voxels
BrainStem: 3472 voxels
CaudateLeft: 728 voxels
CaudateRight: 755 voxels
CerebellumLeft: 8709 voxels
CerebellumRight: 9144 voxels
DiencephalonVentralLeft: 706 voxels
DiencephalonVentralRight: 712 voxels
HippocampusLeft: 764 voxels
HippocampusRight: 795 voxels
PallidumLeft: 297 voxels
PallidumRight: 260 voxels
PutamenLeft: 1060 voxels
PutamenRight: 1010 voxels
ThalamusLeft: 1288 voxels
ThalamusRight: 1248 voxels

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Thursday, February 2, 2017 at 10:30 AM
To: Matt Glasser <glass...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: RE: [HCP-Users] How many voxels in the cerebellum?

This one: HCP_S900_787_tfMRI_ALLTASKS_level3_zstat1_hp200_s2_MSMAll.dscalar

Thanks,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, February 02, 2017 11:28 AM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How many voxels in the cerebellum?

What file?

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Thursday, February 2, 2017 at 10:26 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] How many voxels in the cerebellum?

Dear HCP experts,
Is there a way to know how many voxels the cerebellum has in a given file?
Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] How many voxels in the cerebellum?

2017-02-02 Thread Xavier Guell Paradis
This one: HCP_S900_787_tfMRI_ALLTASKS_level3_zstat1_hp200_s2_MSMAll.dscalar

Thanks,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, February 02, 2017 11:28 AM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] How many voxels in the cerebellum?

What file?

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Thursday, February 2, 2017 at 10:26 AM
To: "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Subject: [HCP-Users] How many voxels in the cerebellum?

Dear HCP experts,
Is there a way to know how many voxels the cerebellum has in a given file?
Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] How many voxels in the cerebellum?

2017-02-02 Thread Xavier Guell Paradis
Dear HCP experts,
Is there a way to know how many voxels the cerebellum has in a given file?
Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-02-01 Thread Xavier Guell Paradis
Hi Michael,
This has worked wonderfully.
Thank you all for your very helpful messages.
Xavier.

From: Harms, Michael [mha...@wustl.edu]
Sent: Tuesday, January 31, 2017 11:01 AM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


Yes.

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell Paradis <xavie...@mit.edu>
Date: Tuesday, January 31, 2017 at 9:59 AM
To: Michael Harms <mha...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
mailto:hcp-users@humanconnectome.org>>
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Michael,
Thank you so much for all your help. I have seen that for each task, each 
subject has multiple cope1 files (under different cope folders; for example for 
the EMOTION task subject 100206 has the following folders, each with a 
cope1.dtseries.nii file: cope1.feat, cope2.feat, cope3.feat, cope4.feat, 
cope5.feat, cope6.feat). The Contrasts.txt file for EMOTION shows the 
following: FACES, SHAPES, FACES-SHAPES, neg_FACES, neg_SHAPES, SHAPES-FACES.

If I am interested in FACES-SHAPES, should I use the cope1.dtseries.nii file 
that is inside the cope3.feat folder (given that FACES-SHAPES is the third 
contrast listed in the Contrasts.txt file)?

Thank you,
Xavier.

From: Harms, Michael [mha...@wustl.edu]
Sent: Monday, January 30, 2017 9:11 PM
To: Xavier Guell Paradis; Glasser, Matthew; Elam, Jennifer; hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


No, we were proposing this:
M = cifti map of mean(of Level 2 individual subject copes)
S = cifti map of std(of Level 2 individual subject copes)

Cohen’s d = M/S

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, January 30, 2017 at 6:41 PM
To: "Glasser, Matthew" <glass...@wustl.edu>, Michael Harms <mha...@wustl.edu>, "Elam, Jennifer" <e...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Cc: "Burgess, Gregory" <gburg...@wustl.edu>
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

I think this would be done from a level3 file; maybe I'm wrong.
If I understand this correctly, the three steps below would give a Cohen's d 
map. Have I understood it right?:

1st) take this file: 
HCP_S900_787_tfMRI_ALLTASKS_level3_zstat1_hp200_s2_MSMAll.dscalar.nii
2nd) transform this file into a cope1 file (Michael said he may be able to make 
this file available; "I can make the equivalent “cope1” file from the Level 3 
‘flameo’ available via Box")
3rd) in the cope1 file, do (x-mean)/SD for every data point

Thank you very much,
Xavier.
____
From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, January 30, 2017 7:26 PM
To: Xavier Guell Paradis; Harms, Michael; Elam, Jennifer; hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

I assumed you were talking about level3 files.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, January 30, 2017 at 6:25 PM
To: Matt Glasser <glass...@wustl.edu>, "Harms, Michael" <mha...@wustl.edu>, "Elam, Jennifer" <e...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>

Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-01-31 Thread Xavier Guell Paradis
Dear Michael,
Thank you so much for all your help. I have seen that for each task, each 
subject has multiple cope1 files (under different cope folders; for example for 
the EMOTION task subject 100206 has the following folders, each with a 
cope1.dtseries.nii file: cope1.feat, cope2.feat, cope3.feat, cope4.feat, 
cope5.feat, cope6.feat). The Contrasts.txt file for EMOTION shows the 
following: FACES, SHAPES, FACES-SHAPES, neg_FACES, neg_SHAPES, SHAPES-FACES.

If I am interested in FACES-SHAPES, should I use the cope1.dtseries.nii file 
that is inside the cope3.feat folder (given that FACES-SHAPES is the third 
contrast listed in the Contrasts.txt file)?

Thank you,
Xavier.

From: Harms, Michael [mha...@wustl.edu]
Sent: Monday, January 30, 2017 9:11 PM
To: Xavier Guell Paradis; Glasser, Matthew; Elam, Jennifer; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


No, we were proposing this:
M = cifti map of mean(of Level 2 individual subject copes)
S = cifti map of std(of Level 2 individual subject copes)

Cohen’s d = M/S

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, January 30, 2017 at 6:41 PM
To: "Glasser, Matthew" <glass...@wustl.edu>, Michael Harms <mha...@wustl.edu>, "Elam, Jennifer" <e...@wustl.edu>, "hcp-users@humanconnectome.org" <hcp-users@humanconnectome.org>
Cc: "Burgess, Gregory" <gburg...@wustl.edu>
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

I think this would be done from a level3 file; maybe I'm wrong.
If I understand this correctly, the three steps below would give a Cohen's d 
map. Have I understood it right?:

1st) take this file: 
HCP_S900_787_tfMRI_ALLTASKS_level3_zstat1_hp200_s2_MSMAll.dscalar.nii
2nd) transform this file into a cope1 file (Michael said he may be able to make 
this file available; "I can make the equivalent “cope1” file from the Level 3 
‘flameo’ available via Box")
3rd) in the cope1 file, do (x-mean)/SD for every data point

Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, January 30, 2017 7:26 PM
To: Xavier Guell Paradis; Harms, Michael; Elam, Jennifer; 
hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

I assumed you were talking about level3 files.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, January 30, 2017 at 6:25 PM
To: Matt Glasser <glass...@wustl.edu>, "Harms, Michael" <mha...@wustl.edu>, 
"Elam, Jennifer" <e...@wustl.edu>, hcp-users@humanconnectome.org
Cc: "Burgess, Gregory" <gburg...@wustl.edu>
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Hi Matthew,
I am sorry, I didn't fully understand your previous message. Do you mean that 
the two steps that I mentioned in my last message are correct?
Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, January 30, 2017 7:22 PM
To: Xavier Guell Paradis; Harms, Michael; Elam, Jennifer; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Those are the level 2 copes.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, January 30, 2017 at 6:20 PM
To: Matt Glasser <glass...@wustl.edu>, "Harms, Michael" <mha...@wustl.edu>, 
"Elam, Jennifer" <e...@wustl.edu>, hcp-users@humanconnectome.org

Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-01-30 Thread Xavier Guell Paradis
I think this would be done from a level3 file; maybe I'm wrong.
If I understand this correctly, the three steps below would give a Cohen's d 
map. Have I understood it right?:

1st) take this file: 
HCP_S900_787_tfMRI_ALLTASKS_level3_zstat1_hp200_s2_MSMAll.dscalar.nii
2nd) transform this file into a cope1 file (Michael said he may be able to make 
this file available; "I can make the equivalent “cope1” file from the Level 3 
‘flameo’ available via Box")
3rd) in the cope1 file, do (x-mean)/SD for every data point

Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, January 30, 2017 7:26 PM
To: Xavier Guell Paradis; Harms, Michael; Elam, Jennifer; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

I assumed you were talking about level3 files.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, January 30, 2017 at 6:25 PM
To: Matt Glasser <glass...@wustl.edu>, "Harms, Michael" <mha...@wustl.edu>, 
"Elam, Jennifer" <e...@wustl.edu>, hcp-users@humanconnectome.org
Cc: "Burgess, Gregory" <gburg...@wustl.edu>
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Hi Matthew,
I am sorry, I didn't fully understand your previous message. Do you mean that 
the two steps that I mentioned in my last message are correct?
Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, January 30, 2017 7:22 PM
To: Xavier Guell Paradis; Harms, Michael; Elam, Jennifer; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Those are the level 2 copes.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, January 30, 2017 at 6:20 PM
To: Matt Glasser <glass...@wustl.edu>, "Harms, Michael" <mha...@wustl.edu>, 
"Elam, Jennifer" <e...@wustl.edu>, hcp-users@humanconnectome.org
Cc: "Burgess, Gregory" <gburg...@wustl.edu>
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Matthew,
Thank you very much for the suggestion. To make sure that I understand this 
correctly; would this be the correct analysis?:

1st) Obtain a group cope1 file of the S900 group task contrasts (in a previous 
message, Michael said he could make this available from the S900 group task 
contrasts zstat maps: "I can make the equivalent “cope1” file from the Level 3 
‘flameo’ available via Box")
2nd) For each data point of the group cope1 file, calculate (x-mean)/SD. This 
gives a Cohen's d map.

Is this correct?
Thank you,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, January 30, 2017 6:51 PM
To: Xavier Guell Paradis; Harms, Michael; Elam, Jennifer; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

No because the non-optimal scaling will divide out in the mean(cope)/std(cope) 
ratio.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, January 30, 2017 at 4:35 PM
To: "Harms, Michael" <mha...@wustl.edu>, "Elam, Jennifer" <e...@wustl.edu>, 
Matt Glasser <glass...@wustl.edu>, hcp-users@humanconnectome.org
Cc: "Burgess, Gregory" <gburg...@wustl.edu>
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Michael,
Wouldn't using the cope files have the problem that "the released versions are 
not optimally scaled (because of a non-optimal intensity bias field 
correction)" (as written by Matthew before in this conversation)? Or would 

Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-01-30 Thread Xavier Guell Paradis
Hi Matthew,
I am sorry, I didn't fully understand your previous message. Do you mean that 
the two steps that I mentioned in my last message are correct?
Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, January 30, 2017 7:22 PM
To: Xavier Guell Paradis; Harms, Michael; Elam, Jennifer; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Those are the level 2 copes.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, January 30, 2017 at 6:20 PM
To: Matt Glasser <glass...@wustl.edu>, "Harms, Michael" <mha...@wustl.edu>, 
"Elam, Jennifer" <e...@wustl.edu>, hcp-users@humanconnectome.org
Cc: "Burgess, Gregory" <gburg...@wustl.edu>
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Matthew,
Thank you very much for the suggestion. To make sure that I understand this 
correctly; would this be the correct analysis?:

1st) Obtain a group cope1 file of the S900 group task contrasts (in a previous 
message, Michael said he could make this available from the S900 group task 
contrasts zstat maps: "I can make the equivalent “cope1” file from the Level 3 
‘flameo’ available via Box")
2nd) For each data point of the group cope1 file, calculate (x-mean)/SD. This 
gives a Cohen's d map.

Is this correct?
Thank you,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, January 30, 2017 6:51 PM
To: Xavier Guell Paradis; Harms, Michael; Elam, Jennifer; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

No because the non-optimal scaling will divide out in the mean(cope)/std(cope) 
ratio.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, January 30, 2017 at 4:35 PM
To: "Harms, Michael" <mha...@wustl.edu>, "Elam, Jennifer" <e...@wustl.edu>, 
Matt Glasser <glass...@wustl.edu>, hcp-users@humanconnectome.org
Cc: "Burgess, Gregory" <gburg...@wustl.edu>
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Michael,
Wouldn't using the cope files have the problem that "the released versions are 
not optimally scaled (because of a non-optimal intensity bias field 
correction)" (as written by Matthew before in this conversation)? Or would this 
not matter if Cohen's d were calculated from cope1 files?
Thanks,
Xavier.

From: Harms, Michael [mha...@wustl.edu]
Sent: Monday, January 30, 2017 12:30 PM
To: Elam, Jennifer; Glasser, Matthew; Xavier Guell Paradis; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


Hi,
Just wanted to mention that I’m not promoting the computation of Cohen’s d 
effect size maps by dividing the z-stat maps by sqrt(N) as a “formal” solution. 
Since the z-stats are computed using ‘flameo’ and multi-level variance 
modeling, I think the “proper” way to compute Cohen’s d effect size maps would 
be from first principles — i.e., the mean (across subjects) divided by the std 
(across subjects) of the Level 2 copes.  And even that might have some issues, 
due to the family structure (resulting in an underestimate of the std across 
subjects).

We’ll give this some thought, and aim to include Cohen’s d effect size maps of 
the task contrasts as part of the group average maps for the S1200 release.

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: "Elam, Jennifer" mailto:e...@wustl.edu>>
Date: Monday, January 30, 2017 at 11:21 AM
To: Mi

Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-01-30 Thread Xavier Guell Paradis
Dear Matthew,
Thank you very much for the suggestion. To make sure that I understand this 
correctly; would this be the correct analysis?:

1st) Obtain a group cope1 file of the S900 group task contrasts (in a previous 
message, Michael said he could make this available from the S900 group task 
contrasts zstat maps: "I can make the equivalent “cope1” file from the Level 3 
‘flameo’ available via Box")
2nd) For each data point of the group cope1 file, calculate (x-mean)/SD. This 
gives a Cohen's d map.

Is this correct?
Thank you,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Monday, January 30, 2017 6:51 PM
To: Xavier Guell Paradis; Harms, Michael; Elam, Jennifer; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

No because the non-optimal scaling will divide out in the mean(cope)/std(cope) 
ratio.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Monday, January 30, 2017 at 4:35 PM
To: "Harms, Michael" <mha...@wustl.edu>, "Elam, Jennifer" <e...@wustl.edu>, 
Matt Glasser <glass...@wustl.edu>, hcp-users@humanconnectome.org
Cc: "Burgess, Gregory" <gburg...@wustl.edu>
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Michael,
Wouldn't using the cope files have the problem that "the released versions are 
not optimally scaled (because of a non-optimal intensity bias field 
correction)" (as written by Matthew before in this conversation)? Or would this 
not matter if Cohen's d were calculated from cope1 files?
Thanks,
Xavier.

From: Harms, Michael [mha...@wustl.edu]
Sent: Monday, January 30, 2017 12:30 PM
To: Elam, Jennifer; Glasser, Matthew; Xavier Guell Paradis; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


Hi,
Just wanted to mention that I’m not promoting the computation of Cohen’s d 
effect size maps by dividing the z-stat maps by sqrt(N) as a “formal” solution. 
Since the z-stats are computed using ‘flameo’ and multi-level variance 
modeling, I think the “proper” way to compute Cohen’s d effect size maps would 
be from first principles — i.e., the mean (across subjects) divided by the std 
(across subjects) of the Level 2 copes.  And even that might have some issues, 
due to the family structure (resulting in an underestimate of the std across 
subjects).

We’ll give this some thought, and aim to include Cohen’s d effect size maps of 
the task contrasts as part of the group average maps for the S1200 release.

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: "Elam, Jennifer" mailto:e...@wustl.edu>>
Date: Monday, January 30, 2017 at 11:21 AM
To: Michael Harms mailto:mha...@wustl.edu>>, "Glasser, 
Matthew" mailto:glass...@wustl.edu>>, Xavier Guell Paradis 
mailto:xavie...@mit.edu>>, 
"hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Cc: "Burgess, Gregory" mailto:gburg...@wustl.edu>>
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


Posting this off-list thread about computing effect size task maps for the S900 
from the group average zstat maps back to the list in case looking through it 
is of interest to other users. If the discussion continues, please include the 
list address in the responses so others can follow.


Thanks,

Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org




From: Harms, Michael
Sent: Sunday, January 29, 2017 10:39 PM
To: Glasser, Matthew; Xavier Guell Paradis
Cc: Burgess, Gregory; Elam, Jennifer
Subject: 

Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-01-30 Thread Xavier Guell Paradis
Dear Michael,
Wouldn't using the cope files have the problem that "the released versions are 
not optimally scaled (because of a non-optimal intensity bias field 
correction)" (as written by Matthew before in this conversation)? Or would this 
not matter if Cohen's d were calculated from cope1 files?
Thanks,
Xavier.

From: Harms, Michael [mha...@wustl.edu]
Sent: Monday, January 30, 2017 12:30 PM
To: Elam, Jennifer; Glasser, Matthew; Xavier Guell Paradis; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


Hi,
Just wanted to mention that I’m not promoting the computation of Cohen’s d 
effect size maps by dividing the z-stat maps by sqrt(N) as a “formal” solution. 
Since the z-stats are computed using ‘flameo’ and multi-level variance 
modeling, I think the “proper” way to compute Cohen’s d effect size maps would 
be from first principles — i.e., the mean (across subjects) divided by the std 
(across subjects) of the Level 2 copes.  And even that might have some issues, 
due to the family structure (resulting in an underestimate of the std across 
subjects).

We’ll give this some thought, and aim to include Cohen’s d effect size maps of 
the task contrasts as part of the group average maps for the S1200 release.

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: "Elam, Jennifer" mailto:e...@wustl.edu>>
Date: Monday, January 30, 2017 at 11:21 AM
To: Michael Harms mailto:mha...@wustl.edu>>, "Glasser, 
Matthew" mailto:glass...@wustl.edu>>, Xavier Guell Paradis 
mailto:xavie...@mit.edu>>, 
"hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Cc: "Burgess, Gregory" mailto:gburg...@wustl.edu>>
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


Posting this off-list thread about computing effect size task maps for the S900 
from the group average zstat maps back to the list in case looking through it 
is of interest to other users. If the discussion continues, please include the 
list address in the responses so others can follow.


Thanks,

Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org



____________
From: Harms, Michael
Sent: Sunday, January 29, 2017 10:39 PM
To: Glasser, Matthew; Xavier Guell Paradis
Cc: Burgess, Gregory; Elam, Jennifer
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


Or perhaps, rather than showing p=0.05 as a contour, show the Cohen’s d = XX 
contour, to get away from the problem where, with 787 subjects, huge chunks of 
cortex will have p<0.05, even if the Cohen’s d effect size is tiny in many 
places.
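
One way to produce such a contour (a sketch, assuming a Cohen’s d map named 
cohens_d.dscalar.nii computed as discussed above; the d = 0.5 cutoff is only 
an example):

# binary map of locations where the effect size exceeds d = 0.5
wb_command -cifti-math 'd > 0.5' d_contour.dscalar.nii -var d cohens_d.dscalar.nii

The resulting binary map could then be drawn as an outline over the 
unthresholded effect size map in wb_view.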

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: "Glasser, Matthew" mailto:glass...@wustl.edu>>
Date: Saturday, January 28, 2017 at 1:50 PM
To: Michael Harms mailto:mha...@wustl.edu>>, Xavier Guell 
Paradis mailto:xavie...@mit.edu>>
Cc: "Burgess, Gregory" mailto:gburg...@wustl.edu>>, "Elam, 
Jennifer" mailto:e...@wustl.edu>>
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Significance is a quantification of “likelihood of truth” whereas effect size 
is a quantification of “importance.”   With large numbers of subjects, the 
uncertainty of the measure diminishes and thus it can be considered “true” in 
the absence of confounds, but that does not say whether we should care about it 
or not.  As Mike says, it is unclear that there is a threshold of effect size 
that is anything other than arbitrary (just as with significance we by 
convention set an arbitrary threshold of p=0.05) and application dependent.  
With

[HCP-Users] How to know cluster size (number of voxels) after -cifti-find-clusters

2017-01-30 Thread Xavier Guell Paradis
Dear HCP experts,
After using -cifti-find-clusters, is there a way to know the size of the 
clusters that the command has found? We know that the clusters will be larger 
than the specified volume-value-threshold, but is there a way to know the mm^3 
or number of voxels of the clusters identified?
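
One way to get the sizes (a sketch; it assumes clusters.dscalar.nii is the 
output of -cifti-find-clusters, which labels each surviving cluster with a 
distinct nonzero integer, and the cluster index 3 and all file names are 
placeholders):

# binary ROI for cluster 3, using only >, < and * so the expression
# stays within operators that -cifti-math certainly supports
wb_command -cifti-math '(x > 2.5) * (x < 3.5)' cluster3_roi.dscalar.nii \
    -var x clusters.dscalar.nii
# summing the binary ROI gives the cluster size in grayordinates
# (for mm^3, multiply the volume voxel count by the voxel volume)
wb_command -cifti-stats cluster3_roi.dscalar.nii -reduce SUM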

Thank you very much,
Xavier.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-01-30 Thread Xavier Guell Paradis
Dear Michael and Matthew,
Thank you very much for all your replies, your comments were extremely helpful.

Xavier.

From: Harms, Michael [mha...@wustl.edu]
Sent: Monday, January 30, 2017 12:30 PM
To: Elam, Jennifer; Glasser, Matthew; Xavier Guell Paradis; 
hcp-users@humanconnectome.org
Cc: Burgess, Gregory
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


Hi,
Just wanted to mention that I’m not promoting the computation of Cohen’s d 
effect size maps by dividing the z-stat maps by sqrt(N) as a “formal” solution. 
Since the z-stats are computed using ‘flameo’ and multi-level variance 
modeling, I think the “proper” way to compute Cohen’s d effect size maps would 
be from first principles — i.e., the mean (across subjects) divided by the std 
(across subjects) of the Level 2 copes.  And even that might have some issues, 
due to the family structure (resulting in an underestimate of the std across 
subjects).

We’ll give this some thought, and aim to include Cohen’s d effect size maps of 
the task contrasts as part of the group average maps for the S1200 release.

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: "Elam, Jennifer" mailto:e...@wustl.edu>>
Date: Monday, January 30, 2017 at 11:21 AM
To: Michael Harms mailto:mha...@wustl.edu>>, "Glasser, 
Matthew" mailto:glass...@wustl.edu>>, Xavier Guell Paradis 
mailto:xavie...@mit.edu>>, 
"hcp-users@humanconnectome.org<mailto:hcp-users@humanconnectome.org>" 
mailto:hcp-users@humanconnectome.org>>
Cc: "Burgess, Gregory" mailto:gburg...@wustl.edu>>
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


Posting this off-list thread about computing effect size task maps for the S900 
from the group average zstat maps back to the list in case looking through it 
is of interest to other users. If the discussion continues, please include the 
list address in the responses so others can follow.


Thanks,

Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org



________
From: Harms, Michael
Sent: Sunday, January 29, 2017 10:39 PM
To: Glasser, Matthew; Xavier Guell Paradis
Cc: Burgess, Gregory; Elam, Jennifer
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?


Or perhaps, rather than showing p=0.05 as a contour, show the Cohen’s d = XX 
contour, to get away from the problem where, with 787 subjects, huge chunks of 
cortex will have p<0.05, even if the Cohen’s d effect size is tiny in many 
places.

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: "Glasser, Matthew" mailto:glass...@wustl.edu>>
Date: Saturday, January 28, 2017 at 1:50 PM
To: Michael Harms mailto:mha...@wustl.edu>>, Xavier Guell 
Paradis mailto:xavie...@mit.edu>>
Cc: "Burgess, Gregory" mailto:gburg...@wustl.edu>>, "Elam, 
Jennifer" mailto:e...@wustl.edu>>
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Significance is a quantification of “likelihood of truth” whereas effect size 
is a quantification of “importance.”   With large numbers of subjects, the 
uncertainty of the measure diminishes and thus it can be considered “true” in 
the absence of confounds, but that does not say whether we should care about it 
or not.  As Mike says, it is unclear that there is a threshold of effect size 
that is anything other than arbitrary (just as with significance we by 
convention set an arbitrary threshold of p=0.05) and application dependent.  
With a properly normalized beta map, you can see which areas are strongly 
different from each other (and therefore have strong gradients between them), 
however even gradient strength is a continuous number.

This is one 

Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-01-27 Thread Xavier Guell Paradis
Thank you again for the reply.
Is there a way to access data that was produced but not packaged up?

Thank you for your help,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Friday, January 27, 2017 3:21 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

It was produced, whether or not it was packaged up is a separate matter.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Friday, January 27, 2017 at 2:18 PM
To: Matt Glasser <glass...@wustl.edu>, hcp-users@humanconnectome.org
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Matt,
Thanks for the reply. Did you mean that the pipelines produce group average 
effect size map and that these must be somewhere as part of the HCP public 
data; or were you referring to the individual subject effect size maps?
Thanks,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Friday, January 27, 2017 3:01 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

I assume these are available somewhere because the pipelines produce them, but 
I didn’t make these group average results.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Friday, January 27, 2017 at 10:04 AM
To: Matt Glasser <glass...@wustl.edu>, hcp-users@humanconnectome.org
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Matt,
Thank you again for your reply.
I have been able to find cope1 files for single subject task contrasts (e.g. 
cope1 file for working memory contrasts of subject 996782), but not for the 
S900 group (e.g. I have not been able to find a cope1 file for the S900 group 
for working memory contrasts).

I was wondering:
a) Is there any task contrast effect size map available for the S900 group? 
(even if they are not optimally scaled)
b) If not, would it be possible to generate a task contrast effect size map by 
using available S900 group data (e.g. the task contrasts zstat maps of the S900 
group), or would it be necessary to go back to the data of each individual 
subject?
c) If it is necessary to go back to the data of each individual subject, which 
approach would you suggest to combine all cope1 files of each subject of the 
S900 group into one effect size map of all subjects? Would something like 
normalizing the cope1 file of each subject (using wb_command as written below) 
and then averaging all normalized cope1 files work? Or would something as 
simple as averaging all cope1 files work?

wb_command -cifti-reduce <input file> MEAN mean.dtseries.nii
wb_command -cifti-reduce <input file> STDEV stdev.dtseries.nii
wb_command -cifti-math '(x - mean) / stdev' <output file> -fixnan 0 \
    -var x <input file> \
    -var mean mean.dtseries.nii -select 1 1 -repeat \
    -var stdev stdev.dtseries.nii -select 1 1 -repeat


Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, January 26, 2017 6:53 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

The files called cope1 or beta are an effect size measure, however the released 
versions are not optimally scaled (because of a non-optimal intensity bias 
field correction).  We plan to correct this in the future.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Thursday, January 26, 2017 at 5:41 PM
To: Matt Glasser <glass...@wustl.edu>, hcp-users@humanconnectome.org
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Matt,
Thank you very much for your very helpful reply.
I will have to investigate this topic more, but is there any approach you would 
suggest to obtain effect size maps from the S900 group HCP data? I was 
wondering 

Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-01-27 Thread Xavier Guell Paradis
Dear Matt,
Thanks for the reply. Did you mean that the pipelines produce group average 
effect size map and that these must be somewhere as part of the HCP public 
data; or were you referring to the individual subject effect size maps?
Thanks,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Friday, January 27, 2017 3:01 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

I assume these are available somewhere because the pipelines produce them, but 
I didn’t make these group average results.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Friday, January 27, 2017 at 10:04 AM
To: Matt Glasser <glass...@wustl.edu>, hcp-users@humanconnectome.org
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Matt,
Thank you again for your reply.
I have been able to find cope1 files for single subject task contrasts (e.g. 
cope1 file for working memory contrasts of subject 996782), but not for the 
S900 group (e.g. I have not been able to find a cope1 file for the S900 group 
for working memory contrasts).

I was wondering:
a) Is there any task contrast effect size map available for the S900 group? 
(even if they are not optimally scaled)
b) If not, would it be possible to generate a task contrast effect size map by 
using available S900 group data (e.g. the task contrasts zstat maps of the S900 
group), or would it be necessary to go back to the data of each individual 
subject?
c) If it is necessary to go back to the data of each individual subject, which 
approach would you suggest to combine all cope1 files of each subject of the 
S900 group into one effect size map of all subjects? Would something like 
normalizing the cope1 file of each subject (using wb_command as written below) 
and then averaging all normalized cope1 files work? Or would something as 
simple as averaging all cope1 files work?

wb_command -cifti-reduce <input file> MEAN mean.dtseries.nii
wb_command -cifti-reduce <input file> STDEV stdev.dtseries.nii
wb_command -cifti-math '(x - mean) / stdev' <output file> -fixnan 0 \
    -var x <input file> \
    -var mean mean.dtseries.nii -select 1 1 -repeat \
    -var stdev stdev.dtseries.nii -select 1 1 -repeat


Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, January 26, 2017 6:53 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

The files called cope1 or beta are an effect size measure, however the released 
versions are not optimally scaled (because of a non-optimal intensity bias 
field correction).  We plan to correct this in the future.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Thursday, January 26, 2017 at 5:41 PM
To: Matt Glasser <glass...@wustl.edu>, hcp-users@humanconnectome.org
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Matt,
Thank you very much for your very helpful reply.
I will have to investigate this topic more, but is there any approach you would 
suggest to obtain effect size maps from the S900 group HCP data? I was 
wondering if the zstat data of the S900 group task contrasts could be converted 
to effect size values without having to go back to the individual subjects.

Thank you very much,
Xavier.


From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, January 26, 2017 5:33 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Standard error scales with sample size, standard deviation does not.  Things 
like Z, t, and p all also scale with sample size and are measures of 
statistical significance via various transformations.  Thus for a large group 
of subjects, Z and t will be very high and p will be very low.  Z, t and p are 
thus all not biologically interpretable, as their values also depend on the 
amount and quality of the data.  In the limit with infinite amounts of data, 
the entire brain will be

Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-01-27 Thread Xavier Guell Paradis
Dear Matt,
Thank you again for your reply.
I have been able to find cope1 files for single subject task contrasts (e.g. 
cope1 file for working memory contrasts of subject 996782), but not for the 
S900 group (e.g. I have not been able to find a cope1 file for the S900 group 
for working memory contrasts).

I was wondering:
a) Is there any task contrast effect size map available for the S900 group? 
(even if they are not optimally scaled)
b) If not, would it be possible to generate a task contrast effect size map by 
using available S900 group data (e.g. the task contrasts zstat maps of the S900 
group), or would it be necessary to go back to the data of each individual 
subject?
c) If it is necessary to go back to the data of each individual subject, which 
approach would you suggest to combine all cope1 files of each subject of the 
S900 group into one effect size map of all subjects? Would something like 
normalizing the cope1 file of each subject (using wb_command as written below) 
and then averaging all normalized cope1 files work? Or would something as 
simple as averaging all cope1 files work?

wb_command -cifti-reduce <input file> MEAN mean.dtseries.nii
wb_command -cifti-reduce <input file> STDEV stdev.dtseries.nii
wb_command -cifti-math '(x - mean) / stdev' <output file> -fixnan 0 \
    -var x <input file> \
    -var mean mean.dtseries.nii -select 1 1 -repeat \
    -var stdev stdev.dtseries.nii -select 1 1 -repeat


Thank you very much,
Xavier.

From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, January 26, 2017 6:53 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

The files called cope1 or beta are an effect size measure, however the released 
versions are not optimally scaled (because of a non-optimal intensity bias 
field correction).  We plan to correct this in the future.

Peace,

Matt.

From: Xavier Guell Paradis <xavie...@mit.edu>
Date: Thursday, January 26, 2017 at 5:41 PM
To: Matt Glasser <glass...@wustl.edu>, hcp-users@humanconnectome.org
Subject: RE: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear Matt,
Thank you very much for your very helpful reply.
I will have to investigate this topic more, but is there any approach you would 
suggest to obtain effect size maps from the S900 group HCP data? I was 
wondering if the zstat data of the S900 group task contrasts could be converted 
to effect size values without having to go back to the individual subjects.

Thank you very much,
Xavier.


From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, January 26, 2017 5:33 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Standard error scales with sample size, standard deviation does not.  Things 
like Z, t, and p all also scale with sample size and are measures of 
statistical significance via various transformations.  Thus for a large group 
of subjects, Z and t will be very high and p will be very low.  Z, t and p are 
thus all not biologically interpretable, as their values also depend on the 
amount and quality of the data.  In the limit with infinite amounts of data, 
the entire brain will be significant for any task, but whether a region is 
statistically significant tells us little about its importance functionally.  
Measures like appropriately scaled GLM regression betas, %BOLD change, or 
Cohen’s d are biologically interpretable measures of effect size because their 
values should not change as sample size and data amount go up (rather the 
uncertainty on their estimates goes down).  Regions with a large effect size in 
a task are likely important to that task (and will probably also meet criteria 
for statistical significance given a reasonable amount of data).
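
To make the scaling concrete (a back-of-the-envelope sketch for a simple 
one-sample group test, not the exact multi-level ‘flameo’ model): the standard 
error is SE = SD / sqrt(N), so the group statistic behaves like 
z ≈ mean / SE = (mean / SD) * sqrt(N) = d * sqrt(N), where d is Cohen’s d. 
With N = 787 subjects, even a modest effect of d = 0.5 gives 
z ≈ 0.5 * sqrt(787) ≈ 14, while the effect size itself does not change as 
subjects are added.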

A common problem in neuroimaging studies is showing thresholded statistical 
significance maps rather than effect size maps (ideally unthresholded with an 
indication of which portions meet tests of statistical significance), and in 
general focusing on statistically significant blobs rather than the effect size 
in identifiable brain areas (which should often show stepwise changes in 
activity at their borders).

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell 
Paradis <xavie...@mit.edu>
Date: Thursday, January 26, 2017 at 3:46 PM
To: hcp-users@humanconnectome.org

Re: [HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-01-26 Thread Xavier Guell Paradis
Dear Matt,
Thank you very much for your very helpful reply.
I will have to investigate this topic more, but is there any approach you would 
suggest to obtain effect size maps from the S900 group HCP data? I was 
wondering if the zstat data of the S900 group task contrasts could be converted 
to effect size values without having to go back to the individual subjects.

Thank you very much,
Xavier.


From: Glasser, Matthew [glass...@wustl.edu]
Sent: Thursday, January 26, 2017 5:33 PM
To: Xavier Guell Paradis; hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Standard error scales with sample size, standard deviation does not.  Things 
like Z, t, and p all also scale with sample size and are measures of 
statistical significance via various transformations.  Thus for a large group 
of subjects, Z and t will be very high and p will be very low.  Z, t and p are 
thus all not biologically interpretable, as their values also depend on the 
amount and quality of the data.  In the limit with infinite amounts of data, 
the entire brain will be significant for any task, but whether a region is 
statistically significant tells us little about its importance functionally.  
Measures like appropriately scaled GLM regression betas, %BOLD change, or 
Cohen’s d are biologically interpretable measures of effect size because their 
values should not change as sample size and data amount go up (rather the 
uncertainty on their estimates goes down).  Regions with a large effect size in 
a task are likely important to that task (and will probably also meet criteria 
for statistical significance given a reasonable amount of data).

A common problem in neuroimaging studies is showing thresholded statistical 
significance maps rather than effect size maps (ideally unthresholded with an 
indication of which portions meet tests of statistical significance), and in 
general focusing on statistically significant blobs rather than the effect size 
in identifiable brain areas (which should often show stepwise changes in 
activity at their borders).

Peace,

Matt.

From: <hcp-users-boun...@humanconnectome.org> on behalf of Xavier Guell 
Paradis <xavie...@mit.edu>
Date: Thursday, January 26, 2017 at 3:46 PM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] Very large z values for task contrasts in 
S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical 
significance?

Dear HCP team,
I have seen that the zstat values for tasks contrasts are very large in the 
HCP_S900_787_tfMRI_ALLTASKS_level3_zstat1_hp200_s2_MSMAll.dscalar.nii file, to 
the point that one can observe areas of activation in task contrasts by setting 
very high z value thresholds (e.g., a z threshold of +14).
I think (please correct me if I'm wrong) that the z values of the S900 file are 
very large because the group is very large, therefore the standard deviation is 
very small (given that there will be less variability in a group if one takes a 
very large group of people rather than a small group of people), and if the 
standard deviation is very small then even small differences from the mean will 
lead to very large z values.

I was wondering what implication does this have in terms of statistical 
significance. A z value of 14 or larger would correspond to an extremely small 
p value, i.e. it would be extremely unlikely to observe by chance a measure 
which is 14 times the standard deviation away from the mean. Would it therefore 
be correct to assume that the areas that we can observe in the S900 
tfMRI_ALLTASKS task contrasts with a very high zstat threshold (e.g., 14) are 
statistically significant, without having to worry about multiple comparisons 
or family structure?
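
(For calibration: using the standard Gaussian tail approximation 
p ≈ exp(-z^2/2) / (z * sqrt(2*pi)), z = 14 gives p on the order of 10^-44. 
Even a Bonferroni correction over all ~91k grayordinates only multiplies that 
by about 10^5, so the open questions here concern the variance model and the 
family structure rather than the size of the tail probability itself.)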

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Very large z values for task contrasts in S900_ALLTASKS_level3_zstat file: what does this mean in terms of statistical significance?

2017-01-26 Thread Xavier Guell Paradis
Dear HCP team,
I have seen that the zstat values for tasks contrasts are very large in the 
HCP_S900_787_tfMRI_ALLTASKS_level3_zstat1_hp200_s2_MSMAll.dscalar.nii file, to 
the point that one can observe areas of activation in task contrasts by setting 
very high z value thresholds (e.g., a z threshold of +14).
I think (please correct me if I'm wrong) that the z values of the S900 file are 
very large because the group is very large, therefore the standard deviation is 
very small (given that there will be less variability in a group if one takes a 
very large group of people rather than a small group of people), and if the 
standard deviation is very small then even small differences from the mean will 
lead to very large z values.

I was wondering what implication does this have in terms of statistical 
significance. A z value of 14 or larger would correspond to an extremely small 
p value, i.e. it would be extremely unlikely to observe by chance a measure 
which is 14 times the standard deviation away from the mean. Would it therefore 
be correct to assume that the areas that we can observe in the S900 
tfMRI_ALLTASKS task contrasts with a very high zstat threshold (e.g., 14) are 
statistically significant, without having to worry about multiple comparisons 
or family structure?

Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Seed-based RS functional connectivity for individual subject

2017-01-03 Thread Xavier Guell Paradis
Dear HCP team,
I am trying to calculate resting-state functional connectivity for subject 
100307 using a seed in the right cerebral cortex that I have defined using a 
ROI metric file. I have tried the following -cifti-correlation code:

/Applications/workbench/bin_macosx64/wb_command -cifti-correlation \
  /Users/xavierguell/Desktop/MRIFILES/HCPDATA/100307REST/MNINonLinear/Results/rfMRI_REST1_LR/rfMRI_REST1_LR_Atlas_MSMAll_hp2000_clean.dtseries.nii \
  /Users/xavierguell/Desktop/MRIFILES/HCPDATA/100307REST/UCTconn1003072BKseed.dconn.nii \
  -roi-override -right-roi \
  /Users/xavierguell/Desktop/MRIFILES/HCPDATA/HCP_S900_GroupAvg_v1/UCTconn1003072BKROI.func.gii

The output file (UCTconn1003072BKseed.dconn.nii) is generated and I can open 
the file in workbench view. However, the file in workbench view does not show 
color in any brain area. Also, workbench view crashes very frequently after 
loading the UCTconn1003072BKseed.dconn.nii file.
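
A couple of sanity checks may help narrow this down (a sketch; the file name 
is taken from the command above, and the reduction to a single map is only a 
rough check):

# collapse the dense connectome to one value per brainordinate, then
# report the largest value; an all-zero result would point to an empty
# or mismatched ROI rather than a display problem
wb_command -cifti-reduce UCTconn1003072BKseed.dconn.nii MAX max_corr.dscalar.nii
wb_command -cifti-stats max_corr.dscalar.nii -reduce MAX

Note also that wb_view generally displays a .dconn.nii only after clicking a 
brainordinate to select a row, which can look like "no color" when the file 
first loads.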

I would be grateful if you could tell me whether there is something wrong in 
the code I wrote.

Thank you very much,
Xavier.

Xavier Guell Paradis, M.D.
Visiting Scientist
Massachusetts Institute of Technology
McGovern Institute for Brain Research
Office: 46-4033A
Phone: (617) 324-4355
Email: xavie...@mit.edu


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Problem downloading data: "Destination path directory does not exist (Code: 6)"

2017-01-03 Thread Xavier Guell Paradis
It works now, thank you! I had changed the name of the HCP data folder in my 
desktop. I re-browsed the folder and now it works fine.
Thank you very much,
Xavier.

From: Burgess, Gregory [gburg...@wustl.edu]
Sent: Tuesday, January 03, 2017 11:39 AM
To: Xavier Guell Paradis
Cc: HCP-Users@humanconnectome.org
Subject: Re: [HCP-Users] Problem downloading data: "Destination path directory 
does not exist (Code: 6)"

The error may be referring to the location specified in Aspera Connect > 
Preferences > Transfers > Save downloaded files to:

[screenshot of the Aspera Connect > Preferences > Transfers pane]
Does that location in your preferences exist?

--Greg


Greg Burgess, Ph.D.
Staff Scientist, Human Connectome Project
Washington University School of Medicine
Department of Psychiatry
Phone: 314-362-7864
Email: gburg...@wustl.edu

On Jan 3, 2017, at 9:31 AM, Xavier Guell Paradis <xavie...@mit.edu> wrote:

Dear HCP team,
I am trying to download individual subject data (100307_3T_tfMRI_MOTOR_analysis 
and 100307_3T_tfMRI_WM_analysis). The following message appears in the Aspera 
Connect window: "Error: Destination path directory does not exist (Code: 6)".

Is there any solution?

Thank you very much,
Xavier.

Xavier Guell Paradis, M.D.
Visiting Scientist
Massachusetts Institute of Technology
McGovern Institute for Brain Research
Office: 46-4033A
Phone: (617) 324-4355
Email: xavie...@mit.edu
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Problem downloading data: "Destination path directory does not exist (Code: 6)"

2017-01-03 Thread Xavier Guell Paradis
Dear HCP team,
I am trying to download individual subject data (100307_3T_tfMRI_MOTOR_analysis 
and 100307_3T_tfMRI_WM_analysis). The following message appears in the Aspera 
Connect window: "Error: Destination path directory does not exist (Code: 6)".

Is there any solution?

Thank you very much,
Xavier.

Xavier Guell Paradis, M.D.
Visiting Scientist
Massachusetts Institute of Technology
McGovern Institute for Brain Research
Office: 46-4033A
Phone: (617) 324-4355
Email: xavie...@mit.edu

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Continuous border drawing holding "alt/option" key does not work

2016-12-29 Thread Xavier Guell Paradis
Dear HCP team,
I am using wb_view version 1.2.3. I have noticed that it is not possible to 
draw a continuous line by holding down the alt/option key and the left mouse 
button while dragging the mouse. Drawing a border is possible only by drawing 
multiple points.
Is this a known issue, or am I doing something wrong in my computer?
Thank you very much,
Xavier.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Threshold p value in workbench view

2016-12-28 Thread Xavier Guell Paradis
I think I cannot access this link 
https://wustl.box.com/s/uuxn06pjlnn6ts7nhak0zww53l33fxzf ; the following 
message appears: "This shared file or folder link has been removed or is 
unavailable to you. ". Is there an alternative way to access the information in 
this link?

As for this image 
(https://drive.google.com/file/d/0B76s53K6sr4Qa0Z6MHB3OXRxbW8/view?usp=sharing),
 wouldn't this show z values from 0 to 0.05 (instead of p values from 0 to 
0.05) if the file I am using is the "tfMRI_ALLTASKS_level3_zstat1" file? I 
guess it is first necessary to convert z values to p values using the 
wb_command. Is this explained in the wustl.box link that you sent?
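
(In case it helps while the link is unavailable: since the -cifti-math 
expression language has no normal CDF, one practical route is to convert the 
target p value into its z equivalent and threshold the zstat map there; a 
one-sided p < 0.001 corresponds to z > 3.09, and a two-sided p < 0.001 to 
|z| > 3.29. For example, as a sketch with an arbitrary output name:

wb_command -cifti-math 'z * (abs(z) > 3.29)' zstat_p001.dscalar.nii \
    -var z HCP_S900_787_tfMRI_ALLTASKS_level3_zstat1_hp200_s2_MSMSulc.dscalar.nii

Equivalently, the same z cutoff can simply be entered as the palette threshold 
in wb_view.)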

Thank you,
Xavier.

From: Dierker, Donna [do...@wustl.edu]
Sent: Wednesday, December 28, 2016 10:11 AM
To: Xavier Guell Paradis
Subject: Re: [HCP-Users] Threshold p value in workbench view

Does this link work?

https://drive.google.com/file/d/0B76s53K6sr4Qa0Z6MHB3OXRxbW8/view?usp=sharing


> On Dec 27, 2016, at 5:04 PM, Dierker, Donna <do...@wustl.edu> wrote:
>
> See if you can read this link, Xavier:
>
> https://wustl.box.com/s/uuxn06pjlnn6ts7nhak0zww53l33fxzf
>
> People also use metric- or cifti-math to convert to log p or 1-p.  Whatever 
> works for you.
>
>
>> On Dec 27, 2016, at 10:39 AM, Xavier Guell Paradis <xavie...@mit.edu> wrote:
>>
>> Dear Sir or Madam,
>> When viewing task contrasts in workbench (the file is 
>> HCP_S900_787_tfMRI_ALLTASKS_level3_zstat1_hp200_s2_MSMSulc.dscalar.nii), is 
>> there a way to threshold areas of activation for a given p value (e.g. areas 
>> that, in the task contrast, have achieved a p<0.001)?
>> A previous post in this mailing list asked the same question (the post was 
>> called "thresholding the data to p-value"), but the person that replied to 
>> the email included a screen capture which is not available in the mail 
>> archive. Without that screen capture, I cannot understand how to threshold 
>> data using p values.
>>
>> Thank you very much,
>> Xavier.
>>
>>
>> Xavier Guell Paradis, M.D.
>> Visiting Scientist
>> Massachusetts Institute of Technology
>> McGovern Institute for Brain Research
>> Office: 46-4033A
>> Phone: (617) 324-4355
>> Email: xavie...@mit.edu
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>
>
> 
> The materials in this message are private and may contain Protected 
> Healthcare Information or other information of a sensitive nature. If you are 
> not the intended recipient, be advised that any unauthorized use, disclosure, 
> copying or the taking of any action in reliance on the contents of this 
> information is strictly prohibited. If you have received this email in error, 
> please immediately notify the sender via telephone or return mail.
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users



The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Threshold p value in workbench view

2016-12-27 Thread Xavier Guell Paradis
Dear Sir or Madam,
When viewing task contrasts in workbench (the file is 
HCP_S900_787_tfMRI_ALLTASKS_level3_zstat1_hp200_s2_MSMSulc.dscalar.nii), is 
there a way to threshold areas of activation for a given p value (e.g. areas 
that, in the task contrast, have achieved a p<0.001)?
A previous post in this mailing list asked the same question (the post was 
called "thresholding the data to p-value"), but the person that replied to the 
email included a screen capture which is not available in the mail archive. 
Without that screen capture, I cannot understand how to threshold data using p 
values.

Thank you very much,
Xavier.


Xavier Guell Paradis, M.D.
Visiting Scientist
Massachusetts Institute of Technology
McGovern Institute for Brain Research
Office: 46-4033A
Phone: (617) 324-4355
Email: xavie...@mit.edu

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Workbench tutorial data download: "No such file or directory"

2016-12-23 Thread Xavier Guell Paradis
Dear HCP team,
When trying to download the "Connectome Workbench Tutorial Data" file (from 
this page https://db.humanconnectome.org/data/projects/HCP_900) I get the 
following error message in the Aspera Connect window:
"Error: Server aborted session: No such file or directory (Code: 43)".

The same thing happens when I try to download the "900 Subjects Group Average 
Data" or the "500 Subjects Group Average Data" or the "820 Subjects, MSM-All 
Registered, Dense Connectome".
How could I solve this?
Thank you very much,
Xavier.


Xavier Guell Paradis, M.D.
Postdoctoral Fellow
Massachusetts Institute of Technology
McGovern Institute for Brain Research
Office: 46-4033A
Phone: (617) 324-4355
Email: xavie...@mit.edu

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users