steveloughran commented on PR #7134:
URL: https://github.com/apache/hadoop/pull/7134#issuecomment-2442314469

   tested s3 london, -Dscale
   
   One really interesting AWS-side failure we've never seen before: it looks like a bulk delete hit a 500 error at the back end. Now we know what that looks like.
   
   This also highlights something important: irrespective of AWS's availability assertions, things do fail, and S3A gets enough use every day that someone, somewhere, will hit one of these failures *every single day*.
   
   Today it was me.
   
   ```
   [ERROR] testMultiPagesListingPerformanceAndCorrectness(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance)  Time elapsed: 73.877 s  <<< ERROR!
   org.apache.hadoop.fs.s3a.AWSS3IOException: Remove S3 Files on s3a://stevel-london/job-00-fork-0001/test/testMultiPagesListingPerformanceAndCorrectness: org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteException: [S3Error(Key=job-00-fork-0001/test/testMultiPagesListingPerformanceAndCorrectness/file-558, Code=InternalError, Message=We encountered an internal error. Please try again.)] (Service: Amazon S3, Status Code: 200, Request ID: null):MultiObjectDeleteException: InternalError: job-00-fork-0001/test/testMultiPagesListingPerformanceAndCorrectness/file-558: We encountered an internal error. Please try again.
   : [S3Error(Key=job-00-fork-0001/test/testMultiPagesListingPerformanceAndCorrectness/file-558, Code=InternalError, Message=We encountered an internal error. Please try again.)] (Service: Amazon S3, Status Code: 200, Request ID: null)
           at org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteException.translateException(MultiObjectDeleteException.java:132)
           at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:350)
           at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
           at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:163)
           at org.apache.hadoop.fs.s3a.impl.DeleteOperation.asyncDeleteAction(DeleteOperation.java:431)
           at org.apache.hadoop.fs.s3a.impl.DeleteOperation.lambda$submitDelete$2(DeleteOperation.java:403)
           at org.apache.hadoop.fs.store.audit.AuditingFunctions.lambda$callableWithinAuditSpan$3(AuditingFunctions.java:119)
           at org.apache.hadoop.fs.s3a.impl.CallableSupplier.get(CallableSupplier.java:88)
           at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
           at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
           at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:750)
   Caused by: org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteException: [S3Error(Key=job-00-fork-0001/test/testMultiPagesListingPerformanceAndCorrectness/file-558, Code=InternalError, Message=We encountered an internal error. Please try again.)] (Service: Amazon S3, Status Code: 200, Request ID: null)
           at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:3278)
           at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeysS3(S3AFileSystem.java:3478)
           at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:3548)
           at org.apache.hadoop.fs.s3a.S3AFileSystem$OperationCallbacksImpl.removeKeys(S3AFileSystem.java:2653)
           at org.apache.hadoop.fs.s3a.impl.DeleteOperation.lambda$asyncDeleteAction$5(DeleteOperation.java:433)
           at org.apache.hadoop.fs.s3a.Invoker.lambda$once$0(Invoker.java:165)
           at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
           ... 11 more
   
   ```
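   The notable behaviour in the trace is that the DeleteObjects call came back with HTTP status 200, yet an individual key still failed with `InternalError` — the per-key error list is the only signal of partial failure. A minimal sketch of that semantics, using stand-in records rather than the real AWS SDK or S3A types (`S3Error` here only mimics `software.amazon.awssdk.services.s3.model.S3Error`):
   
   ```java
   import java.util.List;
   
   // Model of S3 DeleteObjects partial-failure semantics: the HTTP request
   // succeeds (status 200) but individual keys can still fail. Callers must
   // inspect the per-key error list, not just the status code.
   public class BulkDeletePartialFailure {
   
       // Stand-in for the SDK's S3Error (hypothetical, for illustration only).
       record S3Error(String key, String code, String message) { }
   
       record DeleteObjectsResponse(int statusCode, List<S3Error> errors) {
           boolean fullySucceeded() {
               // A 200 status alone is not enough; the per-key error
               // list must also be empty.
               return statusCode == 200 && errors.isEmpty();
           }
       }
   
       public static void main(String[] args) {
           // The shape of the failure seen above: 200 plus one InternalError.
           DeleteObjectsResponse response = new DeleteObjectsResponse(
               200,
               List.of(new S3Error("file-558", "InternalError",
                   "We encountered an internal error. Please try again.")));
           // Status code says "OK"...
           System.out.println("status = " + response.statusCode());
           // ...but the delete did not fully succeed.
           System.out.println("fully succeeded = " + response.fullySucceeded());
           for (S3Error e : response.errors()) {
               System.out.println("failed key: " + e.key() + " (" + e.code() + ")");
           }
       }
   }
   ```
   
   This is why S3A wraps the per-key errors into `MultiObjectDeleteException` rather than trusting the status code.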
   

