Re: [HCP-Users] Problem regarding mounting 1200-release data to NITRC-CE

2017-05-16 Thread Timothy B. Brown

Hi Qinqin Li,

If you leave the /etc/fstab file with s3fs#hcp-openaccess:/HCP_1200 in 
it instead of s3fs#hcp-openaccess:/HCP_900, then every time your system 
boots up, it should have the HCP_1200 data mounted at /s3/hcp. You 
should /not/ have to edit the /etc/fstab file again or issue a separate 
mount command to get access to the data each time you want to use it.
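
For reference, a full s3fs entry in /etc/fstab usually looks something like the
line below (the exact mount options on a NITRC-CE instance may differ; the
options shown here are only a common example):

   s3fs#hcp-openaccess:/HCP_1200 /s3/hcp fuse _netdev,allow_other,ro,passwd_file=/etc/passwd-s3fs 0 0

Only the HCP_900 vs. HCP_1200 portion of that line needs to change to switch
releases.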


Using the AWS Command Line Interface (AWSCLI) tool is different from 
actually making the data available at a mount point. If the data is not 
mounted via s3fs, then you can always access it using commands like the 
aws s3 ls command that I asked you to use previously. However, in order 
for programs and scripts on your system (your instance) to open and use 
the files, you will then need to use aws commands to copy the files to 
your file system.


For example, given that we know that the file 
s3://hcp-openaccess/HCP_1200/100206/MNINonLinear/T1w.nii.gz exists, a 
command like:


   $ wb_view s3://hcp-openaccess/HCP_1200/100206/MNINonLinear/T1w.nii.gz

would */not/* be able to open that T1w.nii.gz file and allow you to view 
it. The s3 bucket doesn't supply an actual file system that allows this 
type of access. That is what s3fs is providing for you.
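
By contrast, once the s3fs mount at /s3/hcp is in place, the same file can be
opened directly through the mounted path, for example (assuming the HCP_1200
data is what is mounted there):

   $ wb_view /s3/hcp/100206/MNINonLinear/T1w.nii.gz

Without the mount, you have to copy files locally first, as in the example below.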


However, assuming you have a tmp subdirectory in your home directory, a 
pair of commands like:


   $ aws s3 cp s3://hcp-openaccess/HCP_1200/100206/MNINonLinear/T1w.nii.gz ~/tmp
   $ wb_view ~/tmp/T1w.nii.gz

would copy the T1w.nii.gz file from the S3 bucket to your ~/tmp 
directory and allow you to view it using Connectome Workbench.


There is also an aws s3 sync command that can be used to 
copy/synchronize whole "directories" of data from the S3 bucket. For 
example:


   $ aws s3 sync s3://hcp-openaccess/HCP_1200/100206 /data/100206

would copy the entire 100206 subject's data to the local directory 
/data/100206.


I should note that copying that entire directory means copying a fairly 
large amount of data. If you were copying it to a local machine (e.g. 
your own computer), this might take a long time (e.g. hours). In my 
experience, copying it from an S3 bucket to a running Amazon EC2 
instance still takes a while (about 15 minutes), but this is much more 
reasonable. Also, the aws s3 sync command works somewhat like the 
standard Un*x rsync command in that it determines whether the files need 
to be copied before copying them. If any of the files already exist 
locally and are unchanged, then those files are not copied from the S3 
bucket.
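
If you only need part of a subject's data, aws s3 sync also accepts filter
options. For example, something like the following (adjust the patterns to
whatever you actually need) would copy only the MNINonLinear directory, and
adding --dryrun first will show what would be transferred without copying
anything:

   $ aws s3 sync s3://hcp-openaccess/HCP_1200/100206 /data/100206 \
       --exclude "*" --include "MNINonLinear/*"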


  Tim

On 05/16/2017 09:23 AM, Irisqql0922 wrote:

Hi Tim,

I changed the first line in the /etc/fstab file to

s3fs#hcp-openaccess:/HCP_1200,

and it worked!!! Thank you very much!

But it's not very convenient if I have to do this every time I need to 
mount the 1200-release data. The test you asked me to do yesterday can mount 
the 1200-release data directly, right?


Best,
Qinqin Li

On 05/16/2017 03:04, Timothy B. Brown wrote:


Dear Qinqin Li,

Based on my checking so far, AWS credentials that give you access
to the HCP_900 section of the S3 bucket should also give you
access to the HCP_1200 section of the bucket.

One thing I would suggest is to go back to using the mount point
provided by the NITRC-CE-HCP environment, but edit the system file
that tells the system what to mount at /s3/hcp.

You will need to edit the file /etc/fstab. To be able to edit this
file, you will need to run the editor you use via sudo.

You should find a line in the /etc/fstab file that starts with:

s3fs#hcp-openaccess:/HCP_900

Change the start of that line to:

s3fs#hcp-openaccess:/HCP_1200

Once you make this change and /stop and restart your instance/,
then what is mounted at /s3/hcp should be the 1200 subjects
release data.
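
If you prefer a one-line edit, a sed command along the following lines
should make the same change (this assumes the HCP_900 entry appears
exactly once in the file; making a backup copy first is a good idea):

sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i 's|hcp-openaccess:/HCP_900|hcp-openaccess:/HCP_1200|' /etc/fstab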

  Tim

On 05/15/2017 10:07 AM, Timothy B. Brown wrote:


Dear Qinqin Li,

First of all, you are correct that in using the latest version of
the NITRC-CE for HCP, the 900 subjects release is mounted at
/s3/hcp. We just recently got the data from the 1200 subjects
release fully uploaded to the S3 bucket. I am working with the
NITRC folks to get the AMI modified to mount the 1200 subjects
release data.

As for using s3fs yourself to mount the HCP_1200 data, it seems
to me that you are doing the right thing by putting your access
key and secret access key in the ~/.passwd-s3fs file. I think
that the credentials you have that gave you access to the HCP_900
data /should/ also give you access to the HCP_1200 data. I will
be running a test shortly to verify that that is working as I
expect. In the meantime, you can also do some helpful testing
from your end.

Please try installing the AWS command line interface tool (see
https://aws.amazon.com/cli). Be sure to follow the configuration instructions at
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html to run the
aws configure command.

Re: [HCP-Users] Where are per-subject netmats?

2017-05-16 Thread Thomas Nichols
Thanks Jenn!  Sorry I missed that.

-Tom

On Tue, May 16, 2017 at 1:12 PM, Elam, Jennifer  wrote:

> Hi Tom,
> Due to size, we have the individual subject parcellations available in
> three separate downloads for different dimensionalities under the main PTN
> download on this page in the DB:
> https://db.humanconnectome.org/data/projects/HCP_1200
>
> Best,
> Jenn
>
> Jennifer Elam, Ph.D.
> Scientific Outreach, Human Connectome Project
> Washington University School of Medicine
> Department of Neuroscience, Box 8108
> 660 South Euclid Avenue
> St. Louis, MO 63110
> 314-362-9387
> e...@wustl.edu
> www.humanconnectome.org
>
> --
> *From:* hcp-users-boun...@humanconnectome.org on behalf of Thomas Nichols
> <t.e.nich...@warwick.ac.uk>
> *Sent:* Tuesday, May 16, 2017 5:16:15 AM
> *To:* HCP Users
> *Subject:* [HCP-Users] Where are per-subject netmats?
>
> Hi folks,
>
> When poking through the PTN download for the netmats, we're having trouble
> finding the netmat for each subject.
>
> As per the S1200 release manual,
> pp. 99-100, it says when we extract one of the files like
>   netmats_3T_HCP820_MSMAll_ICAd*_ts*.tar.gz
> we should get a *_netmat1 directory filled with "One netmat file per
> subject, computed using full correlation, Z-transformed", and another
> variant in *_netmat2.
>
> Instead, we find that these tar.gz files only have 5 files, in a directory
> netmats/3T_HCP820_MSMAll_ICAd*_ts*.  Two of these are netmat{1,2}.txt
> files, but these are a single column and have a very strange number of rows
> (e.g. for d=50 it has 820 rows).  (There are also Mnet?.pconn.nii files,
> but these are tiny).
>
> Where can we find the per-subject netmat files that were in previous
> releases?
>
> -Tom
>
>
> --
> __
> Thomas Nichols, PhD
> Professor, Head of Neuroimaging Statistics
> Department of Statistics & Warwick Manufacturing Group
> University of Warwick, Coventry  CV4 7AL, United Kingdom
>
> Web: http://warwick.ac.uk/tenichols
> Email: t.e.nich...@warwick.ac.uk
> Tel, Stats: +44 24761 51086, WMG: +44 24761 50752
> Fx,  +44 24 7652 4532
>
> ___
> HCP-Users mailing list
> HCP-Users@humanconnectome.org
> http://lists.humanconnectome.org/mailman/listinfo/hcp-users
>



-- 
__
Thomas Nichols, PhD
Professor, Head of Neuroimaging Statistics
Department of Statistics & Warwick Manufacturing Group
University of Warwick, Coventry  CV4 7AL, United Kingdom

Web: http://warwick.ac.uk/tenichols
Email: t.e.nich...@warwick.ac.uk
Tel, Stats: +44 24761 51086, WMG: +44 24761 50752
Fx,  +44 24 7652 4532

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Problem regarding mounting 1200-release data to NITRC-CE

2017-05-16 Thread Irisqql0922
Hi Tim,


I changed the first line in the /etc/fstab file to


s3fs#hcp-openaccess:/HCP_1200,


and it worked!!! Thank you very much!


But it's not very convenient if I have to do this every time I need to mount 
the 1200-release data. The test you asked me to do yesterday can mount the 
1200-release data directly, right?


Best,
Qinqin Li


On 05/16/2017 03:04, Timothy B. Brown wrote:

Dear Qinqin Li,

Based on my checking so far, AWS credentials that give you access to the 
HCP_900 section of the S3 bucket should also give you access to the HCP_1200 
section of the bucket.

One thing I would suggest is to go back to using the mount point provided by 
the NITRC-CE-HCP environment, but edit the system file that tells the system 
what to mount at /s3/hcp.

You will need to edit the file /etc/fstab. To be able to edit this file, you 
will need to run the editor you use via sudo.

You should find a line in the /etc/fstab file that starts with:

s3fs#hcp-openaccess:/HCP_900


Change the start of that line to:

s3fs#hcp-openaccess:/HCP_1200


Once you make this change and stop and restart your instance, then what is 
mounted at /s3/hcp should be the 1200 subjects release data.

  Tim


On 05/15/2017 10:07 AM, Timothy B. Brown wrote:


Dear Qinqin Li,

First of all, you are correct that in using the latest version of the NITRC-CE 
for HCP, the 900 subjects release is mounted at /s3/hcp. We just recently got 
the data from the 1200 subjects release fully uploaded to the S3 bucket. I am 
working with the NITRC folks to get the AMI modified to mount the 1200 subjects 
release data.

As for using s3fs yourself to mount the HCP_1200 data, it seems to me that you 
are doing the right thing by putting your access key and secret access key in 
the ~/.passwd-s3fs file. I think that the credentials you have that gave you 
access to the HCP_900 data should also give you access to the HCP_1200 data. I 
will be running a test shortly to verify that that is working as I expect. In 
the meantime, you can also do some helpful testing from your end.

Please try installing the AWS command line interface tool (see 
https://aws.amazon.com/cli). Be sure to follow the configuration instructions 
at http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html to run 
the aws configure command. This will get your AWS access key id and AWS secret 
access key into a configuration file for the AWS command line tool, similar to 
the way you've placed that information into a file for s3fs.
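
After running aws configure, the resulting credentials file (normally
~/.aws/credentials) should contain something like the following, with
placeholders rather than your real keys:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY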


Then try issuing commands like the following:

$ aws s3 ls s3://hcp-openaccess/HCP_900/

$ aws s3 ls s3://hcp-openaccess/HCP_1200/


If both of these work and give you a long list of subject ID entries that look 
something like:

PRE 100206/
PRE 100307/
PRE 100408/
...


then your credentials are working for both the 900 subjects release and the 
1200 subjects release.

If the HCP_900 listing works, but the HCP_1200 listing does not, then we will 
need to arrange for you to get different credentials.
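
As an additional check, you can also list the contents of a single subject to
confirm that access works below the top level, for example:

$ aws s3 ls s3://hcp-openaccess/HCP_1200/100206/MNINonLinear/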

  Tim


On 05/15/2017 08:48 AM, Irisqql0922 wrote:

Dear hcp teams,


I'm sorry to bother you again with the same problem.


I used the default options and mounted the data successfully. But when I checked 
/s3/hcp, I found that the data in it has only 900 subjects. Obviously, it's not 
the latest 1200-release data.




Since I want to analyse the latest version of the data, I used s3fs to try to 
achieve my goal.
I used these commands:
: > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
s3fs hcp-openaccess /s3mnt -o passwd_file=~/.passwd-s3fs


It failed every time. In the syslog file, I found the error below:

[error output not preserved in this archive]

I got my credential keys from ConnectomeDB, and I'm quite sure that I put them 
in ~/.passwd-s3fs correctly.


So I wonder, do my credential keys have access to hcp-openaccess when using 
s3fs to mount the data? If the answer is yes, do you have any suggestions for me?


(Note: at first, I thought the problem might be due to the version of s3fs. So I 
created a new instance based on the Amazon Linux AMI and downloaded the latest 
version of s3fs. But I still failed with an 'invalid credentials' error.)


thank you very much!


Best,


Qinqin Li



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users



--
Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu

The material in this message is private and may contain Protected Healthcare 
Information (PHI). If you are not the intended recipient, be advised that any 
unauthorized use, disclosure, copying or the taking of any action in reliance 
on the contents of this information is strictly prohibited. If you have 
received this email in error, please immediately notify the sender via 
telephone or return mail.

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

Re: [HCP-Users] Problem regarding mounting 1200-release data to NITRC-CE

2017-05-16 Thread Irisqql0922
Hi Tim,


I forgot to type echo in the mail, but I did type it in the terminal. I am 
sorry I didn't put it right in the mail. And I checked the file 
~/.passwd-s3fs, and it had the expected secret key.


I used @Timothy B. Brown's suggestion to change the first line in /etc/fstab 
to


s3fs#hcp-openaccess:/HCP_1200


and then stopped and restarted the instance. Then I used the command


mount /s3/hcp


It worked!!


But I still don't know why I failed when I used s3fs to mount the data.


Best,


Qinqin Li

On 05/16/2017 02:43, Timothy Coalson wrote:
On Mon, May 15, 2017 at 8:48 AM, Irisqql0922  wrote:

...
I use command:
: > ~/.passwd-s3fs


If this is really the command you used, then it wouldn't work: you need "echo" 
at the start of it, like this:


echo : > ~/.passwd-s3fs


Please look at the file's contents with something like "less ~/.passwd-s3fs" to 
double check that it contains what you expect (but don't post it to the list if 
it contains your secret key, obviously).
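
For reference, s3fs expects that file to contain a single line of the form 
ACCESS_KEY_ID:SECRET_ACCESS_KEY (placeholders shown here, not real keys), and 
the file must not be readable by other users:

echo YOUR_ACCESS_KEY_ID:YOUR_SECRET_ACCESS_KEY > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

If the mount still fails, running s3fs in the foreground with debug output 
usually shows the reason (the dbglevel option is available in recent s3fs 
releases; older versions may only accept -d):

s3fs hcp-openaccess:/HCP_1200 /s3mnt -o passwd_file=~/.passwd-s3fs -f -o dbglevel=info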


Tim



___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users
___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


Re: [HCP-Users] Where are per-subject netmats?

2017-05-16 Thread Elam, Jennifer
Hi Tom,
Due to size, we have the individual subject parcellations available in three 
separate downloads for different dimensionalities under the main PTN download 
on this page in the DB: https://db.humanconnectome.org/data/projects/HCP_1200

Best,
Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
e...@wustl.edu
www.humanconnectome.org


From: hcp-users-boun...@humanconnectome.org on behalf of Thomas Nichols

Sent: Tuesday, May 16, 2017 5:16:15 AM
To: HCP Users
Subject: [HCP-Users] Where are per-subject netmats?

Hi folks,

When poking through the PTN download for the netmats, we're having trouble 
finding the netmat for each subject.

As per the S1200 release manual,
pp. 99-100, it says when we extract one of the files like
  netmats_3T_HCP820_MSMAll_ICAd*_ts*.tar.gz
we should get a *_netmat1 directory filled with "One netmat file per subject, 
computed using full correlation, Z-transformed", and another variant in 
*_netmat2.

Instead, we find that these tar.gz files only have 5 files, in a directory 
netmats/3T_HCP820_MSMAll_ICAd*_ts*.  Two of these are netmat{1,2}.txt files, 
but these are a single column and have a very strange number of rows (e.g. for 
d=50 it has 820 rows).  (There are also Mnet?.pconn.nii files, but these are 
tiny).

Where can we find the per-subject netmat files that were in previous releases?

-Tom


--
__
Thomas Nichols, PhD
Professor, Head of Neuroimaging Statistics
Department of Statistics & Warwick Manufacturing Group
University of Warwick, Coventry  CV4 7AL, United Kingdom

Web: http://warwick.ac.uk/tenichols
Email: t.e.nich...@warwick.ac.uk
Tel, Stats: +44 24761 51086, WMG: +44 24761 50752
Fx,  +44 24 7652 4532


___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users


[HCP-Users] Where are per-subject netmats?

2017-05-16 Thread Thomas Nichols
Hi folks,

When poking through the PTN download for the netmats, we're having trouble
finding the netmat for each subject.

As per the S1200 release manual,
pp. 99-100, it says when we extract one of the files like
  netmats_3T_HCP820_MSMAll_ICAd*_ts*.tar.gz
we should get a *_netmat1 directory filled with "One netmat file per
subject, computed using full correlation, Z-transformed", and another
variant in *_netmat2.

Instead, we find that these tar.gz files only have 5 files, in a directory
netmats/3T_HCP820_MSMAll_ICAd*_ts*.  Two of these are netmat{1,2}.txt
files, but these are a single column and have a very strange number of rows
(e.g. for d=50 it has 820 rows).  (There are also Mnet?.pconn.nii files,
but these are tiny).

Where can we find the per-subject netmat files that were in previous
releases?

-Tom


-- 
__
Thomas Nichols, PhD
Professor, Head of Neuroimaging Statistics
Department of Statistics & Warwick Manufacturing Group
University of Warwick, Coventry  CV4 7AL, United Kingdom

Web: http://warwick.ac.uk/tenichols
Email: t.e.nich...@warwick.ac.uk
Tel, Stats: +44 24761 51086, WMG: +44 24761 50752
Fx,  +44 24 7652 4532

___
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users