[ https://issues.apache.org/jira/browse/HADOOP-14531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180730#comment-16180730 ]
Steve Loughran commented on HADOOP-14531:
-----------------------------------------
And a randomly picked key sequence "fdsd" maps to a bucket which appears to
exist but has all access disabled, so the probe raises an AccessDeniedException.
That's a slightly different error message than before:
{code}
bin/hadoop s3guard bucket-info s3a://fdsd
2017-09-26 13:57:56,458 INFO s3a.S3ALambda: doesBucketExist on fdsd:
java.nio.file.AccessDeniedException: fdsd: doesBucketExist on fdsd:
com.amazonaws.services.s3.model.AmazonS3Exception: All access to this object
has been disabled (Service: Amazon S3; Status Code: 403; Error Code:
AllAccessDisabled; Request ID: E6229D7F8134E64F; S3 Extended Request ID:
6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=),
S3 Extended Request ID:
6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=:AllAccessDisabled
2017-09-26 13:57:56,459 WARN s3a.S3ALambda: doesBucketExist on fdsd failing
after 1 attempts: java.nio.file.AccessDeniedException: fdsd: doesBucketExist on
fdsd: com.amazonaws.services.s3.model.AmazonS3Exception: All access to this
object has been disabled (Service: Amazon S3; Status Code: 403; Error Code:
AllAccessDisabled; Request ID: E6229D7F8134E64F; S3 Extended Request ID:
6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=),
S3 Extended Request ID:
6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=:AllAccessDisabled
java.nio.file.AccessDeniedException: fdsd: doesBucketExist on fdsd:
com.amazonaws.services.s3.model.AmazonS3Exception: All access to this object
has been disabled (Service: Amazon S3; Status Code: 403; Error Code:
AllAccessDisabled; Request ID: E6229D7F8134E64F; S3 Extended Request ID:
6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=),
S3 Extended Request ID:
6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=:AllAccessDisabled
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:205)
at org.apache.hadoop.fs.s3a.S3ALambda.once(S3ALambda.java:122)
at org.apache.hadoop.fs.s3a.S3ALambda.lambda$retry$2(S3ALambda.java:233)
at org.apache.hadoop.fs.s3a.S3ALambda.retryUntranslated(S3ALambda.java:288)
at org.apache.hadoop.fs.s3a.S3ALambda.retry(S3ALambda.java:228)
at org.apache.hadoop.fs.s3a.S3ALambda.retry(S3ALambda.java:203)
at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:357)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:293)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3311)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529)
at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:997)
at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:309)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1218)
at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1227)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: All access to
this object has been disabled (Service: Amazon S3; Status Code: 403; Error
Code: AllAccessDisabled; Request ID: E6229D7F8134E64F; S3 Extended Request ID:
6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=),
S3 Extended Request ID:
6SzVz2t4qa8J2Wxo/oc8yBuB13Mgrn9uMKnxVY0hsBd2kU/YdHzW1IaujpJdDXRDCQRX3f1RYn0=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4229)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4176)
at com.amazonaws.services.s3.AmazonS3Client.getAcl(AmazonS3Client.java:3381)
at com.amazonaws.services.s3.AmazonS3Client.getBucketAcl(AmazonS3Client.java:1160)
at com.amazonaws.services.s3.AmazonS3Client.getBucketAcl(AmazonS3Client.java:1150)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1266)
at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:360)
at org.apache.hadoop.fs.s3a.S3ALambda.once(S3ALambda.java:120)
... 16 more
{code}
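For context, a minimal sketch of the kind of status-code translation involved
here, assuming only the AWS SDK's AmazonS3Exception and the JDK's
AccessDeniedException; the helper name and signature are hypothetical, and this
is not the actual S3AUtils.translateException() code:
{code}
import java.io.IOException;
import java.nio.file.AccessDeniedException;

import com.amazonaws.services.s3.model.AmazonS3Exception;

// Hypothetical helper: surface a 403 from the SDK as
// java.nio.file.AccessDeniedException, keeping the original exception as the
// cause so the AWS request IDs stay visible in the stack trace.
public final class ErrorTranslationSketch {

  public static IOException translate(String operation, String path, AmazonS3Exception e) {
    if (e.getStatusCode() == 403) {
      // e.g. error code "AllAccessDisabled" on a bucket with all access removed
      AccessDeniedException ade =
          new AccessDeniedException(path, null, operation + ": " + e.getMessage());
      ade.initCause(e);
      return ade;
    }
    // anything else: wrap generically so callers still get an IOException
    return new IOException(operation + " on " + path + ": " + e, e);
  }
}
{code}
That's the sort of mapping that turns the SDK's AllAccessDisabled 403 above into
the AccessDeniedException reported by bucket-info.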
> Improve S3A error handling & reporting
> --------------------------------------
>
> Key: HADOOP-14531
> URL: https://issues.apache.org/jira/browse/HADOOP-14531
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Affects Versions: 2.8.1
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Blocker
>
> Improve S3A error handling and reporting. This includes:
> # looking at error codes and translating them to more specific exceptions
> # better retry logic where present
> # adding retry logic where not present
> # more diagnostics in exceptions
> # docs
> Overall goals:
> * things that can be retried and will go away are retried for a bit
> * things that don't go away when retried fail fast (302, no auth, unknown host, connection refused); see the retry sketch at the end of this message
> * meaningful exceptions are built in translateException()
> * diagnostics are included, where possible
> * our troubleshooting docs are expanded with new failures we encounter
> AWS S3 error codes:
> http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
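A rough sketch of the retry/fail-fast split described in the goals above.
This is entirely hypothetical code assuming a plain Callable and a fixed
delay; the real logic is in S3ALambda and is policy-driven:
{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.UnknownHostException;
import java.nio.file.AccessDeniedException;
import java.util.concurrent.Callable;

// Hypothetical illustration of "retry what may go away, fail fast on what won't".
public final class RetrySketch {

  /** Failures that will not go away on retry: fail fast. */
  private static boolean isUnrecoverable(IOException e) {
    return e instanceof AccessDeniedException   // 403: auth/ACL problem
        || e instanceof UnknownHostException    // bad endpoint or bucket DNS
        || e instanceof FileNotFoundException;  // 404
  }

  /** Invoke the operation, retrying transient IOExceptions a few times. */
  public static <T> T retry(Callable<T> operation, int attempts, long delayMillis)
      throws Exception {
    for (int attempt = 1; attempt <= attempts; attempt++) {
      try {
        return operation.call();
      } catch (IOException e) {
        if (isUnrecoverable(e) || attempt == attempts) {
          throw e;                 // fail fast, or out of attempts
        }
        Thread.sleep(delayMillis); // assumed transient: back off and try again
      }
    }
    throw new IllegalStateException("attempts must be >= 1");
  }
}
{code}
With something like this in place, the doesBucketExist() probe above gives up on
the AllAccessDisabled 403 immediately instead of retrying it.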