[HCP-Users] Infinite values in Group average data

2017-05-15 Thread Sang-Yun Oh
I downloaded group average functional correlation
file: HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii

Some diagonal elements of the square matrix (91282x91282) are infinite
(Please see below).

I want to use this matrix in an analysis; however, I am not sure how to
understand or deal with the infinite diagonal values.

I appreciate any insight

Thanks,
Sang

==

In [1]: import nibabel

In [2]: asdf =
nibabel.load('HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii')

In [3]: img = asdf.get_data()

In [4]: img.shape
Out[4]: (1, 1, 1, 1, 91282, 91282)

In [5]: S = img[0,0,0,0,:,:]

In [6]: S
Out[6]:
memmap([[  8.66434002e+00,   1.96847185e-01,   1.66294336e-01, ...,
  1.01449557e-01,   7.45474100e-02,   1.15624115e-01],
   [  1.96847185e-01,  inf,   3.36383432e-01, ...,
 -5.70017472e-03,  -5.49946353e-02,   3.72834280e-02],
   [  1.66294336e-01,   3.36383432e-01,  inf, ...,
 -4.45242636e-02,  -6.07097335e-02,  -1.51601573e-02],
   ...,
   [  1.01449557e-01,  -5.70017472e-03,  -4.45242636e-02, ...,
 inf,   1.91883039e+00,   9.20160294e-01],
   [  7.45474100e-02,  -5.49946353e-02,  -6.07097335e-02, ...,
  1.91883111e+00,   8.31776619e+00,   8.82132888e-01],
   [  1.15624115e-01,   3.72833721e-02,  -1.51601573e-02, ...,
  9.20160294e-01,   8.82132888e-01,   8.66434002e+00]],
dtype=float32)

In [7]: S.diagonal()
Out[7]:
memmap([ 8.66434002, inf, inf, ..., inf,
8.31776619,  8.66434002], dtype=float32)
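
As a quick check (a minimal sketch reusing the S memmap from the session above;
only numpy is assumed), the infinite diagonal entries can be counted directly:

import numpy as np

diag = np.asarray(S.diagonal())        # pull the 91282 diagonal values into memory
n_inf = int(np.isinf(diag).sum())      # count how many of them are infinite
print(n_inf, "of", diag.size, "diagonal entries are infinite")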

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Subjects who have 7T acquired

2017-05-15 Thread Shruti Narasimham
Hello,

I read that out of the 1200 subjects, 184 have 7T data acquired in
addition to 3T. I tried searching for more information on these 184
but couldn't find any. Is there any information about their subject
IDs or is manually sifting through each one of them the only way to
know which subjects have 7T data?


Thank you.

Regards,
Shruti

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Clarification request for data codes

2017-05-15 Thread Elam, Jennifer
Hi Brittany,

Sorry for the confusion. The codes for "Menstrual_CycleLength" (average length of
participant's menstrual cycles in days) are 1 = Less than 25 days,
2 = Between 25-35 days, and 3 = More than 35 days.


Best,

Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org



From: hcp-users-boun...@humanconnectome.org 
 on behalf of Brittany Hawkshead 

Sent: Friday, May 12, 2017 12:51:47 AM
To: hcp-users@humanconnectome.org
Subject: [HCP-Users] Clarification request for data codes

Hi HCP Users!

I had a quick follow-up question regarding the codes for the 
"Menstrual_CycleLength" data - I noticed that the Data Dictionary states that 
the codes indicate the "Average length of participant's menstrual cycles in 
days. (Asked of female participants only)."

However, it looks like the data may be organized categorically (as participants 
seem to only have a 1, 2, or 3 in that column), rather than a specific number 
of days (e.g., per an expected 24-35 day cycle).

Any additional clarification would be greatly appreciated! Thank you in advance!

Best,
Brittany





--
Brittany Hawkshead, B.A.
Clinical Program
Clinical Neuroscience Lab (509)
Department of Psychology
University of Georgia
bhawksh...@gmail.com
bha...@uga.edu

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Problem mounting 1200-release data to NITRC-CE

2017-05-15 Thread Irisqql0922
Dear hcp teams,


I'm sorry to bother you again with the same problem.


I used the default options and mounted the data successfully. But when I checked 
/s3/hcp, I found that it contains only 900 subjects, so it's clearly not the 
latest 1200-release data. 




Since I want to analyse the latest version of the data, I tried using s3fs to 
achieve this. 
I used these commands:
: > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
s3fs hcp-openaccess /s3mnt -o passwd_file=~/.passwd-s3fs


It failed every time. In the syslog file, I found the error below:




I got my credential keys from ConnectomeDB, and I am quite sure that I put them 
into passwd-s3fs correctly.


So I wonder: do my credential keys have access to hcp-openaccess when using 
s3fs to mount the data? If the answer is yes, do you have any suggestions for me? 


(Note: at first I thought the problem might be due to the version of s3fs, so I 
created a new instance based on the Amazon Linux AMI and downloaded the latest 
version of s3fs. It still failed with 'invalid credentials'.)


thank you very much!


Best,


Qinqin Li


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Parcellation of subcortical areas

2017-05-15 Thread Guy Hwang
Hello,


I have been using wb_command -cifti-parcellate to extract time series using 
Q1-Q6_RelatedParcellation210.L.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii 
(and R)


For subcortical areas, I see the border files, but not the dlabel.nii files on 
https://balsa.wustl.edu/study/show/RVVG. Can you help me find these? Or is 
there a way to extract timeseries from subcortical areas using other commands?


Thank you,

Guy






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Problem mounting 1200-release data to NITRC-CE

2017-05-15 Thread Timothy Coalson
On Mon, May 15, 2017 at 8:48 AM, Irisqql0922  wrote:

> ...
> I use command:
> : > ~/.passwd-s3fs
>

If this is really the command you used, then it wouldn't work: you need
"echo" at the start of it, like this:

echo : > ~/.passwd-s3fs

Please look at the file's contents with something like "less
~/.passwd-s3fs" to double check that it contains what you expect (but don't
post it to the list if it contains your secret key, obviously).

Tim

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Parcellation of subcortical areas

2017-05-15 Thread Timothy Coalson
The HCP-MMP1.0 does not contain subcortical parcellations.  This would be
future work.  Others may be able to point you to some
cerebellum/subcortical parcellations from other works.

As for "borders": subcortical areas, which are represented as volume-based
structures, cannot have their boundaries represented as border files,
because border files are only for surface-based structures (currently, just
cerebral cortex).  The most similar concept to borders for volume
structures (as opposed to the sheetlike cortex) would be surfaces (the
boundary of a 3D object is a 2D surface).  This makes sense, as the
cortical surfaces we use are derived from segmenting the cortex out of the
volume.
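
If the immediate goal is just to pull out subcortical timeseries without a
dlabel, one rough workaround is to slice the grayordinate axis of a dtseries
directly.  This is only a sketch, not an HCP-recommended method: it assumes the
standard 91282-grayordinate layout (the first 59412 entries are the left and
right cortical surface vertices, the remaining 31870 are subcortical voxels),
and the file name below is just a placeholder.  Separating the result by
individual subcortical structure would still require the CIFTI brain-model
information.

import nibabel
import numpy as np

# Placeholder file name; any standard 91282-grayordinate dtseries is laid out the same way.
img = nibabel.load('rfMRI_REST1_LR_Atlas_MSMAll.dtseries.nii')

# np.squeeze handles both the 2-D and the 6-D array shapes returned by
# different nibabel versions (newer nibabel uses get_fdata() instead).
ts = np.squeeze(np.asarray(img.get_data()))   # -> (timepoints, 91282)

# The first 59412 columns are cortical surface vertices; the rest are
# subcortical voxels, in the standard grayordinate ordering.
subcortical_ts = ts[:, 59412:]
print(subcortical_ts.shape)                   # (timepoints, 31870)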

Tim


On Mon, May 15, 2017 at 11:48 AM, Guy Hwang  wrote:

> Hello,
>
>
> I have been using wb_command -cifti-parcellate to extract time
> series using 
> Q1-Q6_RelatedParcellation210.L.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii
> (and R)
>
>
> For subcortical areas, I see the border files, but not the dlabel.nii
> files on https://balsa.wustl.edu/study/show/RVVG. Can you help me find
> these? Or is there a way to extract timeseries from subcortical areas using
> other commands?
>
>
> Thank you,
>
>
> Guy
> 
>
>
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Infinite values in Group average data

2017-05-15 Thread Timothy Coalson
Per the name "zcorr", the correlation values have been z-transformed
(fisher's small z transform).  I am somewhat confused as to why some
elements in the diagonal are not infinite.  The "true" value of applying
this transform would be infinite on the entire diagonal, as arctanh(1) is
infinite.  I am guessing this result was generated in matlab, as wb_command
actually prevents infinities when using the z transform, putting a cap on
the correlation (when not using z-transform, it shows correlations of 1 as
expected).

Whatever analysis you do with correlation matrices like this should ignore
the diagonal anyway, since it is correlation to itself.
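
For concreteness, a minimal numpy sketch of both points, reusing the S memmap
from the session quoted below (the small slice is only to avoid materialising
the full ~33 GB matrix):

import numpy as np

# arctanh(1) is infinite, so an exact Fisher z-transform of the self-correlation
# r = 1 on the diagonal gives inf (numpy warns about the divide by zero):
print(np.arctanh(1.0))                     # inf

# Excluding the diagonal from an analysis can be done with an off-diagonal mask,
# sketched here on a small corner of S so the full matrix is never copied:
sub = np.asarray(S[:100, :100])
off_diag = ~np.eye(sub.shape[0], dtype=bool)
print(np.isfinite(sub[off_diag]).all())    # expect True: infinities only on the diagonal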

Tim


On Mon, May 15, 2017 at 3:57 AM, Sang-Yun Oh  wrote:

> I downloaded group average functional correlation file: HCP_S900_820_rfMRI_
> MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii
>
> Some diagonal elements of the square matrix (91282x91282) are infinite
> (Please see below).
>
> I want to use this matrix in ananalysis; however, I am not sure how to
> understand or deal with infinite diagonal values.
>
> I appreciate any insight
>
> Thanks,
> Sang
>
> ==
>
> In [1]: import nibabel
>
> In [2]: asdf = nibabel.load('HCP_S900_820_rfMRI_MSMAll_groupPCA_
> d4500ROW_zcorr.dconn.nii')
>
> In [3]: img = asdf.get_data()
>
> In [4]: img.shape
> Out[4]: (1, 1, 1, 1, 91282, 91282)
>
> In [5]: S = img[0,0,0,0,:,:]
>
> In [6]: S
> Out[6]:
> memmap([[  8.66434002e+00,   1.96847185e-01,   1.66294336e-01, ...,
>   1.01449557e-01,   7.45474100e-02,   1.15624115e-01],
>[  1.96847185e-01,  inf,   3.36383432e-01, ...,
>  -5.70017472e-03,  -5.49946353e-02,   3.72834280e-02],
>[  1.66294336e-01,   3.36383432e-01,  inf, ...,
>  -4.45242636e-02,  -6.07097335e-02,  -1.51601573e-02],
>...,
>[  1.01449557e-01,  -5.70017472e-03,  -4.45242636e-02, ...,
>  inf,   1.91883039e+00,   9.20160294e-01],
>[  7.45474100e-02,  -5.49946353e-02,  -6.07097335e-02, ...,
>   1.91883111e+00,   8.31776619e+00,   8.82132888e-01],
>[  1.15624115e-01,   3.72833721e-02,  -1.51601573e-02, ...,
>   9.20160294e-01,   8.82132888e-01,   8.66434002e+00]],
> dtype=float32)
>
> In [7]: S.diagonal()
> Out[7]:
> memmap([ 8.66434002, inf, inf, ..., inf,
> 8.31776619,  8.66434002], dtype=float32)
>
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Reminder to Register for HCP Course 2017

2017-05-15 Thread Elam, Jennifer
Only a few spaces remain for the 2017 HCP Course: "Exploring the Human 
Connectome". For more info and to register: 
https://store.humanconnectome.org/courses/2017/exploring-the-human-connectome.php

Don't forget to also reserve your accommodation for the course by May 17, 2017 
to be sure of securing a room on the UBC campus in the HCP Course room block.

Before or after May 17, 2017, please use this link to make a reservation: 
https://reserve.ubcconferences.com/vancouver/availability.asp?hotelCode=%2A=06%2F18%2F2017=06%2F23%2F2017=1==1=invBlockCode=G170618A

HCP Course 2017 will be held June 19-23 (week before OHBM) at the Djavad 
Mowafagian Centre for Brain Health at University of British Columbia (UBC) in 
Vancouver, BC, Canada. The 5-day intensive course of lectures and hands-on 
practicals is a great opportunity to learn directly from HCP investigators 
about using HCP data, acquisition, processing methods, and software tools in 
your own studies.

If you have any questions, please contact us at: 
hcpcou...@humanconnectome.org

We look forward to seeing you in Vancouver!

Best,
2017 HCP Course Staff


Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Problem mounting 1200-release data to NITRC-CE

2017-05-15 Thread Timothy B. Brown

Dear Qinqin Li,

Based on my checking so far, AWS credentials that give you access to the 
HCP_900 section of the S3 bucket should also give you access to the 
HCP_1200 section of the bucket.


One thing I would suggest is to go back to using the mount point 
provided by the NITRC-CE-HCP environment, but edit the system file that 
tells the system what to mount at /s3/hcp.


You will need to edit the file /etc/fstab, and you will need to run your 
editor via sudo to be able to modify this file.


You should find a line in the /etc/fstab file that starts with:

   s3fs#hcp-openaccess:/HCP_900

Change the start of that line to:

   s3fs#hcp-openaccess:/HCP_1200

Once you make this change and stop and restart your instance, then 
what is mounted at /s3/hcp should be the 1200 subjects release data.


  Tim

On 05/15/2017 10:07 AM, Timothy B. Brown wrote:


Dear Qinqin Li,

First of all, you are correct that in using the latest version of the 
NITRC-CE for HCP, the 900 subjects release is mounted at /s3/hcp. We 
just recently got the data from the 1200 subjects release fully 
uploaded to the S3 bucket. I am working with the NITRC folks to get 
the AMI modified to mount the 1200 subjects release data.


As for using s3fs yourself to mount the HCP_1200 data, it seems to me 
that you are doing the right thing by putting your access key and 
secret access key in the ~/.passwd-s3fs file. I think that the 
credentials you have that gave you access to the HCP_900 data /should/ 
also give you access to the HCP_1200 data. I will be running a test 
shortly to verify that that is working as I expect. In the meantime, 
you can also do some helpful testing from your end.


Please try installing the AWS command line interface tool (see 
https://aws.amazon.com/cli ). Be sure to 
follow the configuration instructions at 
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html 
to run the aws configure command. This will get your AWS access key id 
and AWS secret access key into a configuration file for the AWS 
command line tool similar to way you've placed that information into a 
file for s3fs.


Then try issuing commands like the following:

$ aws s3 ls s3://hcp-openaccess/HCP_900/

$ aws s3 ls s3://hcp-openaccess/HCP_1200/

If both of these work and give you a long list of subject ID entries 
that look something like:


PRE 100206/
PRE 100307/
PRE 100408/
...

then your credentials are working for both the 900 subjects release 
and the 1200 subjects release.


If the HCP_900 listing works, but the HCP_1200 listing does not, then 
we will need to arrange for you to get different credentials.


  Tim

On 05/15/2017 08:48 AM, Irisqql0922 wrote:

Dear hcp teams,

I sorry to bother you again with same problem.

I used default options and mounted data successfully. But when I 
checked /s3/hcp, I found that data in it has only 900 subjects. 
Obviously, it's not the latest 1200-release data.



Since I want to analyse the latest version of data, I use s3fs to 
achieve my goal.

I use command:
: > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
s3fs hcp-openaccess /s3mnt -o passwd_file=~/.passwd-s3fs

It failed everytime. In the syslog file, I found error below:


I got my credential keys from connectome DB, and I quiet sure that I 
put it right in passwd-s3fs.


So I wonder, does my credential keys have access to hcp-openaccess 
when using s3fs to mount data? If the answer is yes, do you have any 
suggestion for me?


(note:  At first, I thought the problem may due to the  version of 
s3fs. So I created a new instance based on Amazon Linux AMI, and then 
download the lastest version of s3fs. But still, I failed because 
/'invalid credentials/')


thank you very much!

Best,

Qinqin Li

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu

The material in this message is private and may contain Protected 
Healthcare Information (PHI). If you are not the intended recipient, 
be advised that any unauthorized use, disclosure, copying or the 
taking of any action in reliance on the contents of this information 
is strictly prohibited. If you have received this email in error, 
please immediately notify the sender via telephone or return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users




Re: [HCP-Users] task fMRI analysis

2017-05-15 Thread Harms, Michael

Hi,
I’m not sure what to make of those errors, but if the design.fsf doesn’t work 
with FEAT, that suggests that something is wrong with it.

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: Nair Veena >
Date: Monday, May 15, 2017 at 2:13 PM
To: Michael Harms >
Subject: Re: [HCP-Users] task fMRI analysis


Hello again,

I tried that and got some errors (note: xxx below represents the file path).




bad operation list "": must be one or more of array, read, unset, or write
bad operation list "": must be one or more of array, read, unset, or write
while executing
"trace remove variable fmri(analysis) [ lindex $varcom 0 0] [ lindex $varcom 0 
1]"
(procedure "feat5:load" line 24)
invoked from within
"feat5:load .r 1 
/xxx/MNINonLinear/Results/tfMRI_MOTOR_AP/tfMRI_MOTOR_AP_hp200_s2_level1"
("eval" body line 1)
invoked from within
"eval "$command $outputfile1" "
(procedure "feat_file:invoke" line 35)
invoked from within
"feat_file:invoke .wdialog1 a a a :: {feat5:load .r 1}"
invoked from within
".wdialog1.f4.but_ok invoke"
("uplevel" body line 1)
invoked from within
"uplevel #0 [list $w invoke]"
(procedure "tk::ButtonUp" line 22)
invoked from within
"tk::ButtonUp .wdialog1.f4.but_ok"
(command bound to event)
--


Thank you.

From: Harms, Michael >
Sent: Thursday, May 11, 2017 5:27 PM
To: Nair Veena; 
hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] task fMRI analysis


Can you try running through the FEAT GUI using that design.fsf for this subject?

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: Nair Veena >
Date: Thursday, May 11, 2017 at 4:24 PM
To: Michael Harms >, 
"hcp-users@humanconnectome.org" 
>
Subject: Re: [HCP-Users] task fMRI analysis


yes, below is a snapshot of the contents of the .feat folder

[cid:9064fb51-bc20-4800-bbed-7a925dcd3cd7]

thanks.


From: Harms, Michael >
Sent: Thursday, May 11, 2017 3:52 PM
To: Nair Veena; 
hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] task fMRI analysis


Is the ‘feat_model’ command getting run successfully?
There should be a .feat directory that gets created, and inside it there should 
be a number of design.* files.

cheers,
-MH

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu

From: Nair Veena >
Date: Thursday, May 11, 2017 at 1:43 PM
To: Michael Harms >, 
"hcp-users@humanconnectome.org" 
>
Subject: Re: [HCP-Users] task fMRI analysis


Thanks Michael.


It shows


/NumWaves   1
/NumContrasts 1

/Matrix
1



From: Harms, Michael >
Sent: Thursday, May 11, 2017 1:28:42 PM
To: Nair Veena; 
hcp-users@humanconnectome.org
Subject: Re: [HCP-Users] task fMRI analysis


What are the contents of the design.fts file that gets generated by the 
“feat_model” command?

--
Michael Harms, Ph.D.
---
Conte Center for the Neuroscience of Mental Disorders
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave. Tel: 314-747-6173
St. Louis, MO  63110 Email: mha...@wustl.edu


Re: [HCP-Users] Infinite values in Group average data

2017-05-15 Thread Sang-Yun Oh
Thank you for the response.

I, too, am confused by some of them being finite non-zero values and others
being infinities.

If the data are standardized by subtracting the mean and scaling by the
standard deviation before computing a correlation matrix, all diagonal
elements should be exactly 1.

What I am concerned about is how the whole matrix was computed, since a
fundamental property of a correlation matrix is not satisfied.

Best,
Sang

On Mon, May 15, 2017 at 11:33 AM Timothy Coalson  wrote:

> Per the name "zcorr", the correlation values have been z-transformed
> (fisher's small z transform).  I am somewhat confused as to why some
> elements in the diagonal are not infinite.  The "true" value of applying
> this transform would be infinite on the entire diagonal, as arctanh(1) is
> infinite.  I am guessing this result was generated in matlab, as wb_command
> actually prevents infinities when using the z transform, putting a cap on
> the correlation (when not using z-transform, it shows correlations of 1 as
> expected).
>
> Whatever analysis you do with correlation matrices like this should ignore
> the diagonal anyway, since it is correlation to itself.
>
> Tim
>
>
> On Mon, May 15, 2017 at 3:57 AM, Sang-Yun Oh  wrote:
>
>> I downloaded group average functional correlation
>> file: HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii
>>
>> Some diagonal elements of the square matrix (91282x91282) are infinite
>> (Please see below).
>>
>> I want to use this matrix in ananalysis; however, I am not sure how to
>> understand or deal with infinite diagonal values.
>>
>> I appreciate any insight
>>
>> Thanks,
>> Sang
>>
>> ==
>>
>> In [1]: import nibabel
>>
>> In [2]: asdf =
>> nibabel.load('HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii')
>>
>> In [3]: img = asdf.get_data()
>>
>> In [4]: img.shape
>> Out[4]: (1, 1, 1, 1, 91282, 91282)
>>
>> In [5]: S = img[0,0,0,0,:,:]
>>
>> In [6]: S
>> Out[6]:
>> memmap([[  8.66434002e+00,   1.96847185e-01,   1.66294336e-01, ...,
>>   1.01449557e-01,   7.45474100e-02,   1.15624115e-01],
>>[  1.96847185e-01,  inf,   3.36383432e-01, ...,
>>  -5.70017472e-03,  -5.49946353e-02,   3.72834280e-02],
>>[  1.66294336e-01,   3.36383432e-01,  inf, ...,
>>  -4.45242636e-02,  -6.07097335e-02,  -1.51601573e-02],
>>...,
>>[  1.01449557e-01,  -5.70017472e-03,  -4.45242636e-02, ...,
>>  inf,   1.91883039e+00,   9.20160294e-01],
>>[  7.45474100e-02,  -5.49946353e-02,  -6.07097335e-02, ...,
>>   1.91883111e+00,   8.31776619e+00,   8.82132888e-01],
>>[  1.15624115e-01,   3.72833721e-02,  -1.51601573e-02, ...,
>>   9.20160294e-01,   8.82132888e-01,   8.66434002e+00]],
>> dtype=float32)
>>
>> In [7]: S.diagonal()
>> Out[7]:
>> memmap([ 8.66434002, inf, inf, ..., inf,
>> 8.31776619,  8.66434002], dtype=float32)
>>
>>
>> ___
>> HCP-Users mailing list
>> HCP-Users@humanconnectome.org
>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>
>
>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Regression of Physiological Noise -- updates?

2017-05-15 Thread Kristian M. Eschenburg
Hi all

Correct me if I'm wrong, but I believe that the HCP data has not yet had
physiological noise regressed out.  From previous threads, it seems the
general consensus was that implementing this capability was being worked
on.  Can we expect a Connectome Workbench tool for this any time soon?

Thanks



Kristian

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Regression of Physiological Noise -- updates?

2017-05-15 Thread Glasser, Matthew
Some physiological noise is regressed out through the ICA+FIX cleanup (where it 
is part of the noise components).  What is not regressed out is the global 
physiological noise, e.g. that related to respiration.  We are indeed working on a 
data-driven method for doing this (as opposed to using the imperfect and 
incomplete physiological measures), but it is not yet ready for public 
consumption.

Peace,

Matt.

From: 
>
 on behalf of "Kristian M. Eschenburg" >
Date: Tuesday, May 16, 2017 at 7:17 AM
To: "hcp-users@humanconnectome.org" 
>
Subject: [HCP-Users] Regression of Physiological Noise -- updates?

Hi all

Correct me if I'm wrong, but I believe that the HCP data has not yet had 
physiological noise regressed out.  From previous threads, it seems the general 
consensus was that implementing this capability was being worked on.  Can we 
expect a Connectome Workbench tool for this any time soon?

Thanks



Kristian

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Infinite values in Group average data

2017-05-15 Thread Glasser, Matthew
Right.

Peace,

Matt.

From: 
>
 on behalf of Sang-Yun Oh >
Date: Tuesday, May 16, 2017 at 11:25 AM
To: Timothy Coalson >
Cc: "hcp-users@humanconnectome.org" 
>
Subject: Re: [HCP-Users] Infinite values in Group average data

This makes more sense! Sorry I missed your mention of fisher-z transform

So I would apply tanh to each element to revert back to regular correlation 
coefficients

Thank you for your help!

Best,
Sang

On Mon, May 15, 2017 at 5:57 PM Timothy Coalson 
> wrote:
After the fisher-z transform, you can have values greater than 1, see the graph 
on the right:

https://en.wikipedia.org/wiki/Fisher_transformation

This is why the "correct" answer for the diagonal is infinity for the "zcorr" 
file.

Tim


On Mon, May 15, 2017 at 7:51 PM, Sang-Yun Oh 
> wrote:
I am also finding that some off-diagonal elements in this matrix are also 
greater than 1 indicating this matrix is not a correlation matrix.

In [5]: img
Out[5]:
memmap([[  8.66434002e+00,   1.96847185e-01,   1.66294336e-01, ...,
  1.01449557e-01,   7.45474100e-02,   1.15624115e-01],
   [  1.96847185e-01,  inf,   3.36383432e-01, ...,
 -5.70017472e-03,  -5.49946353e-02,   3.72834280e-02],
   [  1.66294336e-01,   3.36383432e-01,  inf, ...,
 -4.45242636e-02,  -6.07097335e-02,  -1.51601573e-02],
   ...,
   [  1.01449557e-01,  -5.70017472e-03,  -4.45242636e-02, ...,
 inf,   1.91883039e+00,   9.20160294e-01],
   [  7.45474100e-02,  -5.49946353e-02,  -6.07097335e-02, ...,
  1.91883111e+00,   8.31776619e+00,   8.82132888e-01],
   [  1.15624115e-01,   3.72833721e-02,  -1.51601573e-02, ...,
  9.20160294e-01,   8.82132888e-01,   8.66434002e+00]], dtype=float32)

Any insight would be appreciated

Thanks,
Sang

On Mon, May 15, 2017 at 1:13 PM Sang-Yun Oh 
> wrote:
Thank you for the response.

I am, too, confused by some being non-zero finite values, and others being 
infinities.

Before computing a correlation matrix, if standardized by subtracting the mean 
and scaling by variance, all diagonal elements should be exactly 1.

What I am concerned about is how the whole matrix was computed, since a 
fundamental characteristic of correlation matrix is not satisfied

Best,
Sang

On Mon, May 15, 2017 at 11:33 AM Timothy Coalson 
> wrote:
Per the name "zcorr", the correlation values have been z-transformed (fisher's 
small z transform).  I am somewhat confused as to why some elements in the 
diagonal are not infinite.  The "true" value of applying this transform would 
be infinite on the entire diagonal, as arctanh(1) is infinite.  I am guessing 
this result was generated in matlab, as wb_command actually prevents infinities 
when using the z transform, putting a cap on the correlation (when not using 
z-transform, it shows correlations of 1 as expected).

Whatever analysis you do with correlation matrices like this should ignore the 
diagonal anyway, since it is correlation to itself.

Tim


On Mon, May 15, 2017 at 3:57 AM, Sang-Yun Oh 
> wrote:
I downloaded group average functional correlation file: 
HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii

Some diagonal elements of the square matrix (91282x91282) are infinite (Please 
see below).

I want to use this matrix in ananalysis; however, I am not sure how to 
understand or deal with infinite diagonal values.

I appreciate any insight

Thanks,
Sang

==

In [1]: import nibabel

In [2]: asdf = 
nibabel.load('HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii')

In [3]: img = asdf.get_data()

In [4]: img.shape
Out[4]: (1, 1, 1, 1, 91282, 91282)

In [5]: S = img[0,0,0,0,:,:]

In [6]: S
Out[6]:
memmap([[  8.66434002e+00,   1.96847185e-01,   1.66294336e-01, ...,
  1.01449557e-01,   7.45474100e-02,   1.15624115e-01],
   [  1.96847185e-01,  inf,   3.36383432e-01, ...,
 -5.70017472e-03,  -5.49946353e-02,   3.72834280e-02],
   [  1.66294336e-01,   3.36383432e-01,  inf, ...,
 -4.45242636e-02,  -6.07097335e-02,  -1.51601573e-02],
   ...,
   [  1.01449557e-01,  -5.70017472e-03,  -4.45242636e-02, ...,
 inf,   1.91883039e+00,   9.20160294e-01],
   [  7.45474100e-02,  -5.49946353e-02,  -6.07097335e-02, ...,
  1.91883111e+00,   8.31776619e+00,   8.82132888e-01],
   [  1.15624115e-01,   3.72833721e-02,  -1.51601573e-02, ...,
  9.20160294e-01,   8.82132888e-01,   8.66434002e+00]], dtype=float32)

In [7]: 

Re: [HCP-Users] Infinite values in Group average data

2017-05-15 Thread Sang-Yun Oh
This makes more sense! Sorry, I missed your mention of the Fisher z-transform.

So I would apply tanh to each element to revert back to regular correlation
coefficients.

Thank you for your help!

Best,
Sang
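
A minimal sketch of that inverse transform (only numpy and the S memmap from
the session earlier in this thread are assumed):

import numpy as np

# The inverse of the Fisher z-transform is r = tanh(z); it maps the infinite
# diagonal entries back to exactly 1.0 and every finite z back into (-1, 1):
print(np.tanh(np.inf))             # 1.0
print(np.tanh(1.91883111))         # ~0.96, a plausible correlation again

# Applied block-wise to keep memory manageable (S is the 91282x91282 memmap):
first_rows_r = np.tanh(np.asarray(S[:1000, :]))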

On Mon, May 15, 2017 at 5:57 PM Timothy Coalson  wrote:

> After the fisher-z transform, you can have values greater than 1, see the
> graph on the right:
>
> https://en.wikipedia.org/wiki/Fisher_transformation
>
> This is why the "correct" answer for the diagonal is infinity for the
> "zcorr" file.
>
> Tim
>
>
> On Mon, May 15, 2017 at 7:51 PM, Sang-Yun Oh  wrote:
>
>> I am also finding that some off-diagonal elements in this matrix are also
>> greater than 1 indicating this matrix is not a correlation matrix.
>>
>> In [5]: img
>> Out[5]:
>> memmap([[  8.66434002e+00,   1.96847185e-01,   1.66294336e-01, ...,
>>   1.01449557e-01,   7.45474100e-02,   1.15624115e-01],
>>[  1.96847185e-01,  inf,   3.36383432e-01, ...,
>>  -5.70017472e-03,  -5.49946353e-02,   3.72834280e-02],
>>[  1.66294336e-01,   3.36383432e-01,  inf, ...,
>>  -4.45242636e-02,  -6.07097335e-02,  -1.51601573e-02],
>>...,
>>[  1.01449557e-01,  -5.70017472e-03,  -4.45242636e-02, ...,
>>  inf,   1.91883039e+00,   9.20160294e-01],
>>[  7.45474100e-02,  -5.49946353e-02,  -6.07097335e-02, ...,
>>  * 1.91883111e+00*,   8.31776619e+00,   8.82132888e-01],
>>[  1.15624115e-01,   3.72833721e-02,  -1.51601573e-02, ...,
>>   9.20160294e-01,   8.82132888e-01,   8.66434002e+00]],
>> dtype=float32)
>>
>> Any insight would be appreciated
>>
>> Thanks,
>> Sang
>>
>> On Mon, May 15, 2017 at 1:13 PM Sang-Yun Oh  wrote:
>>
>>> Thank you for the response.
>>>
>>> I am, too, confused by some being non-zero finite values, and others
>>> being infinities.
>>>
>>> Before computing a correlation matrix, if standardized by subtracting
>>> the mean and scaling by variance, all diagonal elements should be exactly 1.
>>>
>>> What I am concerned about is how the whole matrix was computed, since a
>>> fundamental characteristic of correlation matrix is not satisfied
>>>
>>> Best,
>>> Sang
>>>
>>> On Mon, May 15, 2017 at 11:33 AM Timothy Coalson  wrote:
>>>
 Per the name "zcorr", the correlation values have been z-transformed
 (fisher's small z transform).  I am somewhat confused as to why some
 elements in the diagonal are not infinite.  The "true" value of applying
 this transform would be infinite on the entire diagonal, as arctanh(1) is
 infinite.  I am guessing this result was generated in matlab, as wb_command
 actually prevents infinities when using the z transform, putting a cap on
 the correlation (when not using z-transform, it shows correlations of 1 as
 expected).

 Whatever analysis you do with correlation matrices like this should
 ignore the diagonal anyway, since it is correlation to itself.

 Tim


 On Mon, May 15, 2017 at 3:57 AM, Sang-Yun Oh  wrote:

> I downloaded group average functional correlation
> file: HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii
>
> Some diagonal elements of the square matrix (91282x91282) are infinite
> (Please see below).
>
> I want to use this matrix in ananalysis; however, I am not sure how to
> understand or deal with infinite diagonal values.
>
> I appreciate any insight
>
> Thanks,
> Sang
>
> ==
>
> In [1]: import nibabel
>
> In [2]: asdf =
> nibabel.load('HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii')
>
> In [3]: img = asdf.get_data()
>
> In [4]: img.shape
> Out[4]: (1, 1, 1, 1, 91282, 91282)
>
> In [5]: S = img[0,0,0,0,:,:]
>
> In [6]: S
> Out[6]:
> memmap([[  8.66434002e+00,   1.96847185e-01,   1.66294336e-01, ...,
>   1.01449557e-01,   7.45474100e-02,   1.15624115e-01],
>[  1.96847185e-01,  inf,   3.36383432e-01, ...,
>  -5.70017472e-03,  -5.49946353e-02,   3.72834280e-02],
>[  1.66294336e-01,   3.36383432e-01,  inf, ...,
>  -4.45242636e-02,  -6.07097335e-02,  -1.51601573e-02],
>...,
>[  1.01449557e-01,  -5.70017472e-03,  -4.45242636e-02, ...,
>  inf,   1.91883039e+00,   9.20160294e-01],
>[  7.45474100e-02,  -5.49946353e-02,  -6.07097335e-02, ...,
>   1.91883111e+00,   8.31776619e+00,   8.82132888e-01],
>[  1.15624115e-01,   3.72833721e-02,  -1.51601573e-02, ...,
>   9.20160294e-01,   8.82132888e-01,   8.66434002e+00]],
> dtype=float32)
>
> In [7]: S.diagonal()
> Out[7]:
> memmap([ 8.66434002, inf, inf, ..., inf,
> 8.31776619,  8.66434002], 

Re: [HCP-Users] Infinite values in Group average data

2017-05-15 Thread Sang-Yun Oh
I am also finding that some off-diagonal elements in this matrix are
greater than 1, indicating that this matrix is not a correlation matrix.

In [5]: img
Out[5]:
memmap([[  8.66434002e+00,   1.96847185e-01,   1.66294336e-01, ...,
  1.01449557e-01,   7.45474100e-02,   1.15624115e-01],
   [  1.96847185e-01,  inf,   3.36383432e-01, ...,
 -5.70017472e-03,  -5.49946353e-02,   3.72834280e-02],
   [  1.66294336e-01,   3.36383432e-01,  inf, ...,
 -4.45242636e-02,  -6.07097335e-02,  -1.51601573e-02],
   ...,
   [  1.01449557e-01,  -5.70017472e-03,  -4.45242636e-02, ...,
 inf,   1.91883039e+00,   9.20160294e-01],
   [  7.45474100e-02,  -5.49946353e-02,  -6.07097335e-02, ...,
 * 1.91883111e+00*,   8.31776619e+00,   8.82132888e-01],
   [  1.15624115e-01,   3.72833721e-02,  -1.51601573e-02, ...,
  9.20160294e-01,   8.82132888e-01,   8.66434002e+00]],
dtype=float32)

Any insight would be appreciated

Thanks,
Sang

On Mon, May 15, 2017 at 1:13 PM Sang-Yun Oh  wrote:

> Thank you for the response.
>
> I am, too, confused by some being non-zero finite values, and others being
> infinities.
>
> Before computing a correlation matrix, if standardized by subtracting the
> mean and scaling by variance, all diagonal elements should be exactly 1.
>
> What I am concerned about is how the whole matrix was computed, since a
> fundamental characteristic of correlation matrix is not satisfied
>
> Best,
> Sang
>
> On Mon, May 15, 2017 at 11:33 AM Timothy Coalson  wrote:
>
>> Per the name "zcorr", the correlation values have been z-transformed
>> (fisher's small z transform).  I am somewhat confused as to why some
>> elements in the diagonal are not infinite.  The "true" value of applying
>> this transform would be infinite on the entire diagonal, as arctanh(1) is
>> infinite.  I am guessing this result was generated in matlab, as wb_command
>> actually prevents infinities when using the z transform, putting a cap on
>> the correlation (when not using z-transform, it shows correlations of 1 as
>> expected).
>>
>> Whatever analysis you do with correlation matrices like this should
>> ignore the diagonal anyway, since it is correlation to itself.
>>
>> Tim
>>
>>
>> On Mon, May 15, 2017 at 3:57 AM, Sang-Yun Oh  wrote:
>>
>>> I downloaded group average functional correlation
>>> file: HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii
>>>
>>> Some diagonal elements of the square matrix (91282x91282) are infinite
>>> (Please see below).
>>>
>>> I want to use this matrix in ananalysis; however, I am not sure how to
>>> understand or deal with infinite diagonal values.
>>>
>>> I appreciate any insight
>>>
>>> Thanks,
>>> Sang
>>>
>>> ==
>>>
>>> In [1]: import nibabel
>>>
>>> In [2]: asdf =
>>> nibabel.load('HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii')
>>>
>>> In [3]: img = asdf.get_data()
>>>
>>> In [4]: img.shape
>>> Out[4]: (1, 1, 1, 1, 91282, 91282)
>>>
>>> In [5]: S = img[0,0,0,0,:,:]
>>>
>>> In [6]: S
>>> Out[6]:
>>> memmap([[  8.66434002e+00,   1.96847185e-01,   1.66294336e-01, ...,
>>>   1.01449557e-01,   7.45474100e-02,   1.15624115e-01],
>>>[  1.96847185e-01,  inf,   3.36383432e-01, ...,
>>>  -5.70017472e-03,  -5.49946353e-02,   3.72834280e-02],
>>>[  1.66294336e-01,   3.36383432e-01,  inf, ...,
>>>  -4.45242636e-02,  -6.07097335e-02,  -1.51601573e-02],
>>>...,
>>>[  1.01449557e-01,  -5.70017472e-03,  -4.45242636e-02, ...,
>>>  inf,   1.91883039e+00,   9.20160294e-01],
>>>[  7.45474100e-02,  -5.49946353e-02,  -6.07097335e-02, ...,
>>>   1.91883111e+00,   8.31776619e+00,   8.82132888e-01],
>>>[  1.15624115e-01,   3.72833721e-02,  -1.51601573e-02, ...,
>>>   9.20160294e-01,   8.82132888e-01,   8.66434002e+00]],
>>> dtype=float32)
>>>
>>> In [7]: S.diagonal()
>>> Out[7]:
>>> memmap([ 8.66434002, inf, inf, ..., inf,
>>> 8.31776619,  8.66434002], dtype=float32)
>>>
>>>
>>> ___
>>> HCP-Users mailing list
>>> HCP-Users@humanconnectome.org
>>> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>>>
>>
>>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Parcellation of subcortical areas

2017-05-15 Thread Glasser, Matthew
There are a variety of subcortical parcellations out there including these:

https://surfer.nmr.mgh.harvard.edu/fswiki/StriatumParcellation_Choi2012
http://www.freesurfer.net/fswiki/CerebellumParcellation_Buckner2011

These were made with a very different approach from the HCP’s multi-modal 
parcellation v1.0.  It would be possible to apply the approach we used to make 
the cortical parcellation to subcortical structures, but this would be a 
big undertaking.

Peace,

Matt.

From: 
>
 on behalf of Timothy Coalson >
Date: Tuesday, May 16, 2017 at 4:22 AM
To: Guy Hwang >
Cc: "hcp-users@humanconnectome.org" 
>
Subject: Re: [HCP-Users] Parcellation of subcortical areas

The HCP-MMP1.0 does not contain subcortical parcellations.  This would be 
future work.  Others may be able to point you to some cerebellum/subcortical 
parcellations from other works.

As for "borders": subcortical areas, which are represented as volume-based 
structures, cannot have their boundaries represented as border files, because 
border files are only for surface-based structures (currently, just cerebral 
cortex).  The most similar concept to borders for volume structures (as opposed 
to the sheetlike cortex) would be surfaces (the boundary of a 3D object is a 2D 
surface).  This makes sense, as the cortical surfaces we use are derived from 
segmenting the cortex out of the volume.

Tim


On Mon, May 15, 2017 at 11:48 AM, Guy Hwang 
> wrote:

Hello,


I have been using wb_command -cifti-parcellate to extract time series using 
Q1-Q6_RelatedParcellation210.L.CorticalAreas_dil_Colors.32k_fs_LR.dlabel.nii 
(and R)


For subcortical areas, I see the border files, but not the dlabel.nii files on 
https://balsa.wustl.edu/study/show/RVVG. Can you help me find these? Or is 
there a way to extract timeseries from subcortical areas using other commands?


Thank you,

Guy






___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


The materials in this message are private and may contain Protected Healthcare 
Information or other information of a sensitive nature. If you are not the 
intended recipient, be advised that any unauthorized use, disclosure, copying 
or the taking of any action in reliance on the contents of this information is 
strictly prohibited. If you have received this email in error, please 
immediately notify the sender via telephone or return mail.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Infinite values in Group average data

2017-05-15 Thread Timothy Coalson
After the Fisher z-transform, you can have values greater than 1; see the
graph on the right of this page:

https://en.wikipedia.org/wiki/Fisher_transformation

This is why the "correct" answer for the diagonal is infinity for the
"zcorr" file.

Tim


On Mon, May 15, 2017 at 7:51 PM, Sang-Yun Oh  wrote:

> I am also finding that some off-diagonal elements in this matrix are also
> greater than 1 indicating this matrix is not a correlation matrix.
>
> In [5]: img
> Out[5]:
> memmap([[  8.66434002e+00,   1.96847185e-01,   1.66294336e-01, ...,
>   1.01449557e-01,   7.45474100e-02,   1.15624115e-01],
>[  1.96847185e-01,  inf,   3.36383432e-01, ...,
>  -5.70017472e-03,  -5.49946353e-02,   3.72834280e-02],
>[  1.66294336e-01,   3.36383432e-01,  inf, ...,
>  -4.45242636e-02,  -6.07097335e-02,  -1.51601573e-02],
>...,
>[  1.01449557e-01,  -5.70017472e-03,  -4.45242636e-02, ...,
>  inf,   1.91883039e+00,   9.20160294e-01],
>[  7.45474100e-02,  -5.49946353e-02,  -6.07097335e-02, ...,
>  * 1.91883111e+00*,   8.31776619e+00,   8.82132888e-01],
>[  1.15624115e-01,   3.72833721e-02,  -1.51601573e-02, ...,
>   9.20160294e-01,   8.82132888e-01,   8.66434002e+00]],
> dtype=float32)
>
> Any insight would be appreciated
>
> Thanks,
> Sang
>
> On Mon, May 15, 2017 at 1:13 PM Sang-Yun Oh  wrote:
>
>> Thank you for the response.
>>
>> I am, too, confused by some being non-zero finite values, and others
>> being infinities.
>>
>> Before computing a correlation matrix, if standardized by subtracting the
>> mean and scaling by variance, all diagonal elements should be exactly 1.
>>
>> What I am concerned about is how the whole matrix was computed, since a
>> fundamental characteristic of correlation matrix is not satisfied
>>
>> Best,
>> Sang
>>
>> On Mon, May 15, 2017 at 11:33 AM Timothy Coalson  wrote:
>>
>>> Per the name "zcorr", the correlation values have been z-transformed
>>> (fisher's small z transform).  I am somewhat confused as to why some
>>> elements in the diagonal are not infinite.  The "true" value of applying
>>> this transform would be infinite on the entire diagonal, as arctanh(1) is
>>> infinite.  I am guessing this result was generated in matlab, as wb_command
>>> actually prevents infinities when using the z transform, putting a cap on
>>> the correlation (when not using z-transform, it shows correlations of 1 as
>>> expected).
>>>
>>> Whatever analysis you do with correlation matrices like this should
>>> ignore the diagonal anyway, since it is correlation to itself.
>>>
>>> Tim
>>>
>>>
>>> On Mon, May 15, 2017 at 3:57 AM, Sang-Yun Oh  wrote:
>>>
 I downloaded group average functional correlation
 file: HCP_S900_820_rfMRI_MSMAll_groupPCA_d4500ROW_zcorr.dconn.nii

 Some diagonal elements of the square matrix (91282x91282) are infinite
 (Please see below).

 I want to use this matrix in ananalysis; however, I am not sure how to
 understand or deal with infinite diagonal values.

 I appreciate any insight

 Thanks,
 Sang

 ==

 In [1]: import nibabel

 In [2]: asdf = nibabel.load('HCP_S900_820_rfMRI_MSMAll_groupPCA_
 d4500ROW_zcorr.dconn.nii')

 In [3]: img = asdf.get_data()

 In [4]: img.shape
 Out[4]: (1, 1, 1, 1, 91282, 91282)

 In [5]: S = img[0,0,0,0,:,:]

 In [6]: S
 Out[6]:
 memmap([[  8.66434002e+00,   1.96847185e-01,   1.66294336e-01, ...,
   1.01449557e-01,   7.45474100e-02,   1.15624115e-01],
[  1.96847185e-01,  inf,   3.36383432e-01, ...,
  -5.70017472e-03,  -5.49946353e-02,   3.72834280e-02],
[  1.66294336e-01,   3.36383432e-01,  inf, ...,
  -4.45242636e-02,  -6.07097335e-02,  -1.51601573e-02],
...,
[  1.01449557e-01,  -5.70017472e-03,  -4.45242636e-02, ...,
  inf,   1.91883039e+00,   9.20160294e-01],
[  7.45474100e-02,  -5.49946353e-02,  -6.07097335e-02, ...,
   1.91883111e+00,   8.31776619e+00,   8.82132888e-01],
[  1.15624115e-01,   3.72833721e-02,  -1.51601573e-02, ...,
   9.20160294e-01,   8.82132888e-01,   8.66434002e+00]],
 dtype=float32)

 In [7]: S.diagonal()
 Out[7]:
 memmap([ 8.66434002, inf, inf, ..., inf,
 8.31776619,  8.66434002], dtype=float32)


 ___
 HCP-Users mailing list
 HCP-Users@humanconnectome.org
 http://lists.humanconnectome.org/mailman/listinfo/hcp-users

>>>
>>>

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Problem mounting 1200-release data to NITRC-CE

2017-05-15 Thread Timothy B. Brown

Dear Qinqin Li,

First of all, you are correct that in using the latest version of the 
NITRC-CE for HCP, the 900 subjects release is mounted at /s3/hcp. We 
just recently got the data from the 1200 subjects release fully uploaded 
to the S3 bucket. I am working with the NITRC folks to get the AMI 
modified to mount the 1200 subjects release data.


As for using s3fs yourself to mount the HCP_1200 data, it seems to me 
that you are doing the right thing by putting your access key and secret 
access key in the ~/.passwd-s3fs file. I think that the credentials you 
have that gave you access to the HCP_900 data should also give you 
access to the HCP_1200 data. I will be running a test shortly to verify 
that that is working as I expect. In the meantime, you can also do some 
helpful testing from your end.


Please try installing the AWS command line interface tool (see 
https://aws.amazon.com/cli). Be sure to follow the configuration 
instructions at 
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html to 
run the aws configure command. This will get your AWS access key id and 
AWS secret access key into a configuration file for the AWS command line 
tool, similar to the way you've placed that information into a file for s3fs.


Then try issuing commands like the following:

   $ aws s3 ls s3://hcp-openaccess/HCP_900/

   $ aws s3 ls s3://hcp-openaccess/HCP_1200/

If both of these work and give you a long list of subject ID entries 
that look something like:


PRE 100206/
PRE 100307/
PRE 100408/
...

then your credentials are working for both the 900 subjects release and 
the 1200 subjects release.


If the HCP_900 listing works, but the HCP_1200 listing does not, then we 
will need to arrange for you to get different credentials.
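
If the AWS command line tool is not convenient, an equivalent check can be
scripted in Python with boto3.  This is only an illustrative sketch; it assumes
boto3 is installed and that the same access/secret keys have been configured
(for example with "aws configure"):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")   # picks up the keys configured with "aws configure"
for prefix in ("HCP_900/", "HCP_1200/"):
    try:
        # Delimiter="/" groups keys by "folder", so CommonPrefixes lists subject IDs
        resp = s3.list_objects_v2(Bucket="hcp-openaccess", Prefix=prefix,
                                  Delimiter="/", MaxKeys=10)
        print(prefix, "->", [p["Prefix"] for p in resp.get("CommonPrefixes", [])])
    except ClientError as err:
        print(prefix, "->", err.response["Error"]["Code"])   # e.g. AccessDenied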


  Tim

On 05/15/2017 08:48 AM, Irisqql0922 wrote:

Dear hcp teams,

I sorry to bother you again with same problem.

I used default options and mounted data successfully. But when I 
checked /s3/hcp, I found that data in it has only 900 subjects. 
Obviously, it's not the latest 1200-release data.



Since I want to analyse the latest version of data, I use s3fs to 
achieve my goal.

I use command:
: > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
s3fs hcp-openaccess /s3mnt -o passwd_file=~/.passwd-s3fs

It failed everytime. In the syslog file, I found error below:


I got my credential keys from connectome DB, and I quiet sure that I 
put it right in passwd-s3fs.


So I wonder, does my credential keys have access to hcp-openaccess 
when using s3fs to mount data? If the answer is yes, do you have any 
suggestion for me?


(note:  At first, I thought the problem may due to the  version of 
s3fs. So I created a new instance based on Amazon Linux AMI, and then 
download the lastest version of s3fs. But still, I failed because 
/'invalid credentials/')


thank you very much!

Best,

Qinqin Li

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu

The material in this message is private and may contain Protected 
Healthcare Information (PHI). If you are not the intended recipient, be 
advised that any unauthorized use, disclosure, copying or the taking of 
any action in reliance on the contents of this information is strictly 
prohibited. If you have received this email in error, please immediately 
notify the sender via telephone or return mail.


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Problem mounting 1200-release data to NITRC-CE

2017-05-15 Thread Irisqql0922
Hi Tim,


I am glad to do the test, and I will let you know how it goes :)




Best,
Qinqin Li


On 05/15/2017 23:07, Timothy B. Brown wrote:

Dear Qinqin Li,

First of all, you are correct that in using the latest version of the NITRC-CE 
for HCP, the 900 subjects release is mounted at /s3/hcp. We just recently got 
the data from the 1200 subjects release fully uploaded to the S3 bucket. I am 
working with the NITRC folks to get the AMI modified to mount the 1200 subjects 
release data.

As for using s3fs yourself to mount the HCP_1200 data, it seems to me that you 
are doing the right thing by putting your access key and secret access key in 
the ~/.passwd-s3fs file. I think that the credentials you have that gave you 
access to the HCP_900 data should also give you access to the HCP_1200 data. I 
will be running a test shortly to verify that that is working as I expect. In 
the meantime, you can also do some helpful testing from your end.

Please try installing the AWS command line interface tool (see 
https://aws.amazon.com/cli). Be sure to follow the configuration instructions 
at http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html to run 
the aws configure command. This will get your AWS access key id and AWS secret 
access key into a configuration file for the AWS command line tool similar to 
way you've placed that information into a file for s3fs.


Then try issuing commands like the following:

$ aws s3 ls s3://hcp-openaccess/HCP_900/

$ aws s3 ls s3://hcp-openaccess/HCP_1200/


If both of these work and give you a long list of subject ID entries that look 
something like:

PRE 100206/
PRE 100307/
PRE 100408/
...


then your credentials are working for both the 900 subjects release and the 
1200 subjects release.

If the HCP_900 listing works, but the HCP_1200 listing does not, then we will 
need to arrange for you to get different credentials.

  Tim


On 05/15/2017 08:48 AM, Irisqql0922 wrote:

Dear hcp teams,


I sorry to bother you again with same problem.


I used default options and mounted data successfully. But when I checked 
/s3/hcp, I found that data in it has only 900 subjects. Obviously, it's not the 
latest 1200-release data. 




Since I want to analyse the latest version of data, I use s3fs to achieve my 
goal. 
I use command:
: > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
s3fs hcp-openaccess /s3mnt -o passwd_file=~/.passwd-s3fs


It failed everytime. In the syslog file, I found error below:




I got my credential keys from connectome DB, and I quiet sure that I put it 
right in passwd-s3fs.


So I wonder, does my credential keys have access to hcp-openaccess when using 
s3fs to mount data? If the answer is yes, do you have any suggestion for me? 


(note:  At first, I thought the problem may due to the  version of s3fs. So I 
created a new instance based on Amazon Linux AMI, and then download the lastest 
version of s3fs. But still, I failed because 'invalid credentials')


thank you very much!


Best,


Qinqin Li



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu

The material in this message is private and may contain Protected Healthcare 
Information (PHI). If you are not the intended recipient, be advised that any 
unauthorized use, disclosure, copying or the taking of any action in reliance 
on the contents of this information is strictly prohibited. If you have 
received this email in error, please immediately notify the sender via 
telephone or return mail.
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users