[
https://issues.apache.org/jira/browse/HADOOP-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15291318#comment-15291318
]
Steve Loughran commented on HADOOP-13130:
-----------------------------------------
Bad request here. The bucket exists, but the endpoint isn't giving me access. This is
Frankfurt.
{code}
testDeleteEmptyDirNonRecursive(org.apache.hadoop.fs.contract.s3a.TestS3AContractDelete)  Time elapsed: 0.208 sec  <<< ERROR!
org.apache.hadoop.fs.InvalidRequestException: doesBucketExist on stevel-frankfurt-3: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 49355E46D8DFCA6B), S3 Extended Request ID: KRFtrdEbdZdA4Z6ve2exgBmQArLniiq85f/yUf0NC+btW58ExNxoo3Omhe5Cup0QE7ub5lTes5U=
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:96)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:293)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:272)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2786)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2823)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2805)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:382)
	at org.apache.hadoop.fs.contract.AbstractBondedFSContract.init(AbstractBondedFSContract.java:72)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:165)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}
> s3a failures can surface as RTEs, not IOEs
> ------------------------------------------
>
> Key: HADOOP-13130
> URL: https://issues.apache.org/jira/browse/HADOOP-13130
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.7.2
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Attachments: HADOOP-13130-001.patch, HADOOP-13130-002.patch,
> HADOOP-13130-002.patch, HADOOP-13130-003.patch, HADOOP-13130-004.patch,
> HADOOP-13130-005.patch, HADOOP-13130-branch-2-006.patch,
> HADOOP-13130-branch-2-007.patch, HADOOP-13130-branch-2-008.patch,
> HADOOP-13130-branch-2-009.patch
>
>
> S3A failures happening in the AWS library surface as
> {{AmazonClientException}} derivatives rather than IOEs. As the Amazon
> exceptions are runtime exceptions, any code which catches IOEs for error
> handling breaks.
> The fix will be to catch and wrap. The hard part will be wrapping with
> meaningful exceptions rather than a generic IOE. Furthermore, anyone who has
> been catching AWS exceptions directly is going to be disappointed. That means
> fixing this situation could be considered "incompatible", but only for
> code which makes assumptions about the underlying FS and the exceptions
> it raises.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)