[ https://issues.apache.org/jira/browse/HADOOP-19027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17805170#comment-17805170 ]

ASF GitHub Bot commented on HADOOP-19027:
-----------------------------------------

steveloughran commented on PR #6425:
URL: https://github.com/apache/hadoop/pull/6425#issuecomment-1884981187

   Testing
   - new tests with mocking, plus real tests where the failure is a file shorter than the length claimed in openFile(); no actual generation of failures within an ITest. (A sketch of the shorter-file pattern follows the log below.)
   - one presumably unrelated failure (https://issues.apache.org/jira/browse/HADOOP-19032):
   
   ```
   [ERROR] Tests run: 17, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 48.129 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextURI
   [ERROR] testCreateDirectory(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextURI)  Time elapsed: 13.4 s  <<< ERROR!
   org.apache.hadoop.fs.s3a.AWSS3IOException: Remove S3 Dir Markers on s3a://stevel-london/Users/stevel/Projects/hadoop-trunk/hadoop-tools/hadoop-aws/target/test-dir/7/testContextURI/createTest: org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteException: [S3Error(Key=Users/stevel/Projects/hadoop-trunk/hadoop-tools/hadoop-aws/target/test-dir/7/testContextURI/createTest/()&^%$#@!~_+}{><?/, Code=InternalError, Message=We encountered an internal error. Please try again.)] (Service: Amazon S3, Status Code: 200, Request ID: null):MultiObjectDeleteException: InternalError: Users/stevel/Projects/hadoop-trunk/hadoop-tools/hadoop-aws/target/test-dir/7/testContextURI/createTest/()&^%$#@!~_+}{><?/: We encountered an internal error. Please try again.
   : [S3Error(Key=Users/stevel/Projects/hadoop-trunk/hadoop-tools/hadoop-aws/target/test-dir/7/testContextURI/createTest/()&^%$#@!~_+}{><?/, Code=InternalError, Message=We encountered an internal error. Please try again.)] (Service: Amazon S3, Status Code: 200, Request ID: null)
           at org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteException.translateException(MultiObjectDeleteException.java:136)
           at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:347)
           at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
           at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:163)
           at org.apache.hadoop.fs.s3a.impl.DeleteOperation.asyncDeleteAction(DeleteOperation.java:445)
           at org.apache.hadoop.fs.s3a.impl.DeleteOperation.lambda$submitDelete$2(DeleteOperation.java:403)
           at org.apache.hadoop.fs.store.audit.AuditingFunctions.lambda$callableWithinAuditSpan$3(AuditingFunctions.java:119)
           at org.apache.hadoop.fs.s3a.impl.CallableSupplier.get(CallableSupplier.java:88)
           at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
           at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
           at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:750)
   Caused by: org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteException: [S3Error(Key=Users/stevel/Projects/hadoop-trunk/hadoop-tools/hadoop-aws/target/test-dir/7/testContextURI/createTest/()&^%$#@!~_+}{><?/, Code=InternalError, Message=We encountered an internal error. Please try again.)] (Service: Amazon S3, Status Code: 200, Request ID: null)
           at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:3174)
           at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeysS3(S3AFileSystem.java:3405)
           at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:3475)
           at org.apache.hadoop.fs.s3a.S3AFileSystem$OperationCallbacksImpl.removeKeys(S3AFileSystem.java:2491)
           at org.apache.hadoop.fs.s3a.impl.DeleteOperation.lambda$asyncDeleteAction$8(DeleteOperation.java:447)
           at org.apache.hadoop.fs.s3a.Invoker.lambda$once$0(Invoker.java:165)
           at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
           ... 11 more
   ```
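
   For reference, a minimal sketch of the shorter-file test pattern mentioned above. This is a hypothetical illustration, not code from PR #6425: the bucket name, declared length, and read loop are invented. It opens a file with `fs.option.openfile.length` set above the real object size, so the length mismatch only surfaces during the read, which is the condition the recovery code has to handle.

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   public class ShorterFileSketch {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       // Hypothetical object that is actually much smaller than declared below.
       Path path = new Path("s3a://example-bucket/shorter-than-declared.bin");
       FileSystem fs = path.getFileSystem(conf);

       // Declare a length larger than the real object. S3A trusts this hint
       // and skips the HEAD probe, so the mismatch is only seen on read.
       try (FSDataInputStream in = fs.openFile(path)
           .opt("fs.option.openfile.length", Long.toString(1_000_000L))
           .build().get()) {
         byte[] buffer = new byte[64 * 1024];
         // Reading towards the declared length runs past the real EOF; the
         // stream should surface this as an EOFException, not retry forever.
         while (in.read(buffer) >= 0) {
           // discard the data; only the failure mode matters here
         }
       }
     }
   }
   ```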
   




> S3A: S3AInputStream doesn't recover from HTTP/channel exceptions
> ----------------------------------------------------------------
>
>                 Key: HADOOP-19027
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19027
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>
> S3AInputStream doesn't seem to recover from HTTP exceptions raised through 
> HttpClient or through OpenSSL.
> * review the recovery code to make sure it is retrying enough; it looks 
> suspiciously like it doesn't
> * detect the relevant OpenSSL, shaded httpclient, and unshaded httpclient 
> exceptions, map them to a standard one, and treat that as a comms error in 
> our retry policy (a hypothetical sketch of such a mapping follows below)
> This is not the same as the load balancer/proxy returning 443/444, which we 
> map to AWSNoResponseException. We can't reuse that, as it expects to be 
> created from an 
> {{software.amazon.awssdk.awscore.exception.AwsServiceException}} exception 
> with the relevant fields; changing it could potentially be incompatible.
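
A minimal sketch of the exception mapping proposed in the second bullet above. Everything here is illustrative rather than taken from the patch: the class name HttpChannelIOException, the map() helper, and the list of channel-exception classnames (including the shaded httpclient name) are assumptions for this sketch.

```java
import java.io.IOException;

public final class ChannelExceptionMapper {

  /** Illustrative stand-in for a single retryable "channel failure" type. */
  public static class HttpChannelIOException extends IOException {
    public HttpChannelIOException(String message, Throwable cause) {
      super(message, cause);
    }
  }

  // Classnames matched by string so the shaded variant can be recognised
  // without a compile-time dependency; the shaded name is an assumption.
  private static final String[] CHANNEL_EXCEPTION_CLASSES = {
      "org.apache.http.NoHttpResponseException",          // unshaded httpclient
      "software.amazon.awssdk.thirdparty.org.apache.http.NoHttpResponseException", // shaded (assumed)
      "javax.net.ssl.SSLException"                        // TLS/OpenSSL channel errors
  };

  /**
   * Walk the cause chain; if any cause is a known channel-level failure,
   * wrap it in the one exception type the retry policy treats as retryable.
   */
  public static IOException map(String operation, Exception e) {
    for (Throwable t = e; t != null; t = t.getCause()) {
      String name = t.getClass().getName();
      for (String channelClass : CHANNEL_EXCEPTION_CLASSES) {
        if (name.equals(channelClass)) {
          return new HttpChannelIOException(operation + ": " + t, e);
        }
      }
    }
    // Not a recognised channel failure: leave it for the normal translation.
    return (e instanceof IOException) ? (IOException) e : new IOException(e);
  }
}
```

The point of mapping to one type is that the retry policy then needs a single rule ("connectivity failure: retry with backoff") instead of separate cases for each transport library, and the original exception stays attached as the cause for diagnostics.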



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
