Hi Daniel,

I believe we've identified and resolved a bucket-side issue that should fix the problems you're seeing when mounting the temp bucket through s3fs. We had to use the latest version of s3fs (1.84) to get better diagnostics, but mounting now works for us with our previously installed version (1.79) as well.

The bucket-side issue was that the sync process from the old bucket to the new bucket generated object keys for the files but not for the top-level folders (e.g. HCP_1200), and s3fs requires keys for those top-level folders in order to mount the bucket. After generating those keys, we're able to mount via s3fs.
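For reference, if you ever need to add a similar "folder" marker to a bucket yourself, a zero-byte object whose key ends in a slash is enough. A minimal sketch with the AWS CLI (the bucket and prefix names here are just illustrative, and this isn't necessarily the exact tooling we used on our side):

    # create an empty object whose key ends in "/" so s3fs can treat it as a directory
    aws s3api put-object --bucket hcp-openaccess-temp --key HCP_1200/

    # confirm the marker key now exists
    aws s3api head-object --bucket hcp-openaccess-temp --key HCP_1200/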
I hope this resolves the issue for you as well.

Regards,
Mike

From: [email protected] On Behalf Of Elam, Jennifer
Sent: Thursday, July 19, 2018 11:17 AM
To: King, Daniel (Research Student) <[email protected]>; [email protected]
Subject: Re: [HCP-Users] mounting temp bucket

Hi Daniel,

I asked Tim Coalson if he had any advice for you, and he replied with the following in case it helps you solve the problem:

________________________________
On Wed, Jul 18, 2018 at 10:05 PM, Timothy Coalson <[email protected]> wrote:

After some quick attempts at s3fs, I am also getting this error. I think the s3fs tools (or at least the version packaged in Ubuntu 16.04) and the temporary bucket are not compatible for some reason. Maybe installing the latest s3fs from GitHub would help? It's not clear why it would have worked for the old bucket, though.

Tim

________________________________
After trying this with the latest GitHub release of s3fs (but the default Ubuntu 16.04 FUSE), I still get this error. Here is a probably related open bug on the s3fs tools: https://github.com/s3fs-fuse/s3fs-fuse/issues/721 ("Invalid credentials (working in s3cmd)").

I'm using the same IAM credentials on the same machine with s3cmd and it works with normal access, but when using s3fs I get an invalid credentials message. If I use the url option, it changes behavior but doesn't show any contents; the curl debug info indicates it tries to redirect back to the default of us-east-1 anyway.

Tim

________________________________
Best,
Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
[email protected]
www.humanconnectome.org

________________________________
From: [email protected] on behalf of King, Daniel (Research Student) <[email protected]>
Sent: Wednesday, July 18, 2018 5:13:52 AM
To: [email protected]
Subject: [HCP-Users] mounting temp bucket

Hi HCP-list,

I just wanted to inquire about the temp bucket location for the HCP data. I have previously been mounting the bucket data via s3fs with FUSE, but since the data has been migrated over to the temp bucket, I can no longer do so. I have regenerated my credentials (via ConnectomeDB) and am using a syntax identical to the one I used previously. Is there any advice you could give? Given the timing of the issue, I assume it is a problem with the bucket rather than with s3fs itself.

This is the syntax I am using and the subsequent output from s3fs; it seems not to be finding the bucket:

s3fs -f -o use_cache=~/tmp/cache_s3,uid=119393,gid=500,umask=0,retries=5 hcp-openaccess-temp:/HCP_1200 ~/OpenData
[CRT] s3fs_init(3294): init v1.79(commit:unknown) with GnuTLS(gcrypt)
[CRT] s3fs_check_service(3711): bucket not found - result of checking service.

However, if I remove the HCP_1200 path and just use the bucket name, I get the following:

s3fs -f -o use_cache=~/tmp/cache_s3,uid=119393,gid=500,umask=0,retries=5 hcp-openaccess-temp ~/OpenData
[CRT] s3fs_init(3294): init v1.79(commit:unknown) with GnuTLS(gcrypt)
[CRT] s3fs_check_service(3707): invalid credentials - result of checking service.
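One other thought: do I need to point s3fs at a specific endpoint for the new bucket? The variant below is only a guess based on my reading of the s3fs man page (I don't know which region the temp bucket is in, so us-east-1 is just a placeholder, and I'm not sure my 1.79 build supports the endpoint option):

    # hypothetical mount with an explicit URL/region; values are placeholders
    s3fs -f -o use_cache=~/tmp/cache_s3,uid=119393,gid=500,umask=0,retries=5 \
         -o url=https://s3.amazonaws.com -o endpoint=us-east-1 \
         hcp-openaccess-temp ~/OpenData

If that's not the right direction, I'm happy to re-run with more verbose logging and send the full output.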
Any help would be greatly appreciated!

Best wishes,

Daniel King
PhD Candidate in Neurosciences, School of Life and Health Sciences, Aston University
Email: [email protected]
Twitter: @danieljking8

_______________________________________________
HCP-Users mailing list
[email protected]
http://lists.humanconnectome.org/mailman/listinfo/hcp-users
