I would suggest caution about syncing the data; mounting may suit your
purposes better (you can look around and fetch file contents as needed,
rather than downloading everything even if it isn't used).  The different
releases are in different folders, and basically all of the earlier-release
subjects are also included in every later release, so there is a lot of
logical redundancy (however, the processing changed between releases, so
the "redundant" files aren't identical).  So, for starters, you may be
better off sticking to just the HCP_1200 folder.
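
If mounting is an option for you, a generic tool like s3fs-fuse can expose
the bucket as a read-only filesystem, so only the files you actually open
get transferred.  A rough sketch (the mount point is just an example, and
I'm assuming the HCP_1200 folder sits directly under the bucket you listed):

$ echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ~/.passwd-s3fs
$ chmod 600 ~/.passwd-s3fs
$ mkdir -p ~/hcp-s3
$ s3fs hcp-openaccess-temp ~/hcp-s3 -o passwd_file=~/.passwd-s3fs -o ro
$ ls ~/hcp-s3/HCP_1200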

However, be aware that each subject's folder comes out to around 80GB, so
you are looking at transferring about 100TB of data for just the HCP_1200
folder - make sure you actually need everything you are downloading, and
have the space available for it.
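
If you do stick with the CLI, it may be worth doing a dry run and
restricting the sync to just the files you need before transferring
anything.  Something along these lines (the include patterns are only
illustrative - adjust them to the actual file names you're after):

$ aws s3 ls s3://hcp-openaccess-temp/HCP_1200/ --recursive --summarize --human-readable
$ aws s3 sync s3://hcp-openaccess-temp/HCP_1200 . --dryrun
$ aws s3 sync s3://hcp-openaccess-temp/HCP_1200 . --exclude "*" --include "*T1w*.nii.gz" --include "*T2w*.nii.gz"

The --exclude "*" followed by the --include filters tells the CLI to skip
everything except the matching files, and --dryrun shows what would be
transferred without downloading anything.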

Tim


On Thu, Aug 9, 2018 at 11:42 AM, Xavier Guell Paradis <[email protected]>
wrote:

> Hello HCP,
> We are a team of researchers designing some deep learning tools we have
> created for modality conversions, and wanting to test our library on the
> HCP dataset (structural T1 and T2). We are having issues copying from the
> s3 bucket. Currently after creating our credentials, running aws configure,
> and attempting to sync via:
>
> $aws configure
> [entering access credentials]
> $aws s3 sync s3://hcp-openaccess-temp .
> "Invalid Access Key ID" AWS key does not exist in the records.
>
> We appreciate any recommendations on how to proceed.
>
> Thank you,
> Patrick, Xavier, TJ, Anita, Shreyas, Saige.
>
>
