Hi Qinqin Li,

If you leave the /etc/fstab file with s3fs#hcp-openaccess:/HCP_1200 in it instead of s3fs#hcp-openaccess:/HCP_900, then every time your system boots up, it should have the HCP_1200 data mounted at /s3/hcp. You should /not/ have to edit the /etc/fstab file again or issue a separate mount command to get access to the data each time you want to use it.
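
For reference, the full s3fs line in the /etc/fstab file usually looks something like the following. The mount options shown here (and the path to the password file) are just an illustration; keep whatever options are already on the line in your instance's file and change only the HCP_900 portion:

   s3fs#hcp-openaccess:/HCP_1200 /s3/hcp fuse _netdev,allow_other,ro,passwd_file=/home/ubuntu/.passwd-s3fs 0 0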

Using the AWS Command Line Interface (AWS CLI) tool is different from actually making the data available at a mount point. If the data is not mounted via s3fs, you can still access it using commands like the aws s3 ls command that I asked you to use previously. However, in order for programs and scripts on your system (your instance) to open and use the files, you will then need to use aws commands to copy the files to your local file system.

For example, given that we know that the file s3://hcp-openaccess/HCP_1200/100206/MNINonLinear/T1w.nii.gz exists, a command like:

   $ wb_view s3://hcp-openaccess/HCP_1200/100206/MNINonLinear/T1w.nii.gz

would */not/* be able to open that T1w.nii.gz file and allow you to view it. The s3 bucket doesn't supply an actual file system that allows this type of access. That is what s3fs is providing for you.
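
By contrast, once the s3fs mount is in place, the same file appears as an ordinary path under the mount point, so a command like the following (assuming /s3/hcp is the mount point set up by your /etc/fstab entry) should work directly:

   $ wb_view /s3/hcp/100206/MNINonLinear/T1w.nii.gz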

However, assuming you have a tmp subdirectory in your home directory, a pair of commands like:

   $ aws s3 cp s3://hcp-openaccess/HCP_1200/100206/MNINonLinear/T1w.nii.gz ~/tmp
   $ wb_view ~/tmp/T1w.nii.gz

would copy the T1w.nii.gz file from the S3 bucket to your ~/tmp directory and allow you to view it using Connectome Workbench.
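
(If you do not already have a tmp subdirectory in your home directory, you can create one first with a command like:

   $ mkdir -p ~/tmp

Any local directory you have write access to would work just as well as the destination for the aws s3 cp command.)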

There is also an aws s3 sync command that can be used to copy/synchronize whole "directories" of data from the S3 bucket. For example:

   $ aws s3 sync s3://hcp-openaccess/HCP_1200/100206 /data/100206

would copy the entire 100206 subject's data to the local directory /data/100206.

I should note that copying that entire directory means copying a fairly large amount of data. If you were copying it to a local machine (e.g. your own computer), this might take a long time (e.g. hours). In my experience, copying it from an S3 bucket to a running Amazon EC2 instance still takes a while (about 15 minutes), but this is much more reasonable. Also, the aws s3 sync command works somewhat like the standard Un*x rsync command in that it determines whether the files need to be copied before copying them. If any of the files already exist locally and are unchanged, then those files are not copied from the S3 bucket.
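
One more tip that may be useful: if you only need part of a subject's data, the aws s3 sync command accepts --exclude and --include filters. For example, a command along these lines (the MNINonLinear pattern is just an illustration) should copy only that portion of the subject's data:

   $ aws s3 sync s3://hcp-openaccess/HCP_1200/100206 /data/100206 \
         --exclude "*" --include "MNINonLinear/*"

That can save a good deal of copy time and disk space compared to syncing the whole subject directory.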

  Tim

On 05/16/2017 09:23 AM, Irisqql0922 wrote:
Hi Tim,

I changed the first line in the /etc/fstab file to

s3fs#hcp-openaccess:/HCP_1200,

and it worked!!! Thank you very much!

But it's not very convenient if I have to do this every time I need to mount the 1200-release data. The test you asked me to do yesterday can mount the 1200-release data directly, right?

Best,
Qinqin Li

On 05/16/2017 03:04, Timothy B. Brown <tbbr...@wustl.edu> wrote:

    Dear Qinqin Li,

    Based on my checking so far, AWS credentials that give you access
    to the HCP_900 section of the S3 bucket should also give you
    access to the HCP_1200 section of the bucket.

    One thing I would suggest is to go back to using the mount point
    provided by the NITRC-CE-HCP environment, but edit the system file
    that tells the system what to mount at /s3/hcp.

    You will need to edit the file /etc/fstab. Since /etc/fstab is a
    system file, you will need to run your editor via sudo in order
    to be able to change it.
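
    For example, if you use nano as your editor, a command like:

        $ sudo nano /etc/fstab

    should let you make the change. Any editor you are comfortable
    with will work, as long as you run it via sudo.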

    You should find a line in the /etc/fstab file that starts with:

        s3fs#hcp-openaccess:/HCP_900

    Change the start of that line to:

        s3fs#hcp-openaccess:/HCP_1200

    Once you make this change and /stop and restart your instance/,
    then what is mounted at /s3/hcp should be the 1200 subjects
    release data.
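
    (If you would rather not stop and restart the instance, it should
    also be possible to apply the change immediately with a pair of
    commands along these lines, although I have not tested this on the
    NITRC-CE image:

        $ sudo umount /s3/hcp
        $ sudo mount /s3/hcp

    The mount command with just the mount point as its argument reads
    the updated entry for /s3/hcp from /etc/fstab.)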

      Tim

    On 05/15/2017 10:07 AM, Timothy B. Brown wrote:

    Dear Qinqin Li,

    First of all, you are correct that when using the latest version
    of the NITRC-CE for HCP, the 900 subjects release is mounted at
    /s3/hcp. We just recently got the data from the 1200 subjects
    release fully uploaded to the S3 bucket. I am working with the
    NITRC folks to get the AMI modified to mount the 1200 subjects
    release data.

    As for using s3fs yourself to mount the HCP_1200 data, it seems
    to me that you are doing the right thing by putting your access
    key and secret access key in the ~/.passwd-s3fs file. I think
    that the credentials you have that gave you access to the HCP_900
    data /should/ also give you access to the HCP_1200 data. I will
    be running a test shortly to verify that that is working as I
    expect. In the meantime, you can also do some helpful testing
    from your end.

    Please try installing the AWS command line interface tool (see
    https://aws.amazon.com/cli). Be sure
    to follow the configuration instructions at
    http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
    to run the aws configure command. This will get your AWS access
    key id and AWS secret access key into a configuration file for
    the AWS command line tool similar to the way you've placed that
    information into a file for s3fs.
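
    For reference, running the aws configure command looks roughly
    like this. The region and output format answers shown here are
    just reasonable defaults, not requirements:

        $ aws configure
        AWS Access Key ID [None]: <your access key id>
        AWS Secret Access Key [None]: <your secret access key>
        Default region name [None]: us-east-1
        Default output format [None]: json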

    Then try issuing commands like the following:

        $ aws s3 ls s3://hcp-openaccess/HCP_900/

        $ aws s3 ls s3://hcp-openaccess/HCP_1200/

    If both of these work and give you a long list of subject ID
    entries that look something like:

                            PRE 100206/
                            PRE 100307/
                            PRE 100408/
                            ...

    then your credentials are working for both the 900 subjects
    release and the 1200 subjects release.

    If the HCP_900 listing works, but the HCP_1200 listing does not,
    then we will need to arrange for you to get different credentials.

      Tim

    On 05/15/2017 08:48 AM, Irisqql0922 wrote:
    Dear HCP team,

    I'm sorry to bother you again with the same problem.

    I used the default options and mounted the data successfully. But
    when I checked /s3/hcp, I found that the data in it only has 900
    subjects. Obviously, it's not the latest 1200-release data.


    Since I want to analyse the latest version of the data, I used
    s3fs to achieve my goal. I used commands like these:

        echo <ACCESS KEY ID>:<SECRET ACCESS KEY> > ~/.passwd-s3fs
        chmod 600 ~/.passwd-s3fs
        s3fs hcp-openaccess /s3mnt -o passwd_file=~/.passwd-s3fs

    It failed every time. In the syslog file, I found the error below:


    I got my credential keys from ConnectomeDB, and I'm quite sure
    that I put them in ~/.passwd-s3fs correctly.

    So I wonder, do my credential keys have access to hcp-openaccess
    when using s3fs to mount the data? If the answer is yes, do you
    have any suggestions for me?

    (Note: at first, I thought the problem might be due to the version
    of s3fs. So I created a new instance based on the Amazon Linux AMI
    and then downloaded the latest version of s3fs. But it still failed
    because of /'invalid credentials'/.)

    Thank you very much!

    Best,

    Qinqin Li

--
/Timothy B. Brown
Business & Technology Application Analyst III
Pipeline Developer (Human Connectome Project)
tbbrown(at)wustl.edu
/
------------------------------------------------------------------------
The material in this message is private and may contain Protected Healthcare Information (PHI). If you are not the intended recipient, be advised that any unauthorized use, disclosure, copying or the taking of any action in reliance on the contents of this information is strictly prohibited. If you have received this email in error, please immediately notify the sender via telephone or return mail.

_______________________________________________
HCP-Users mailing list
HCP-Users@humanconnectome.org
http://lists.humanconnectome.org/mailman/listinfo/hcp-users
