kennknowles opened a new issue, #19142:
URL: https://github.com/apache/beam/issues/19142

   S3FileSystem should have some retry behaviour when the DeleteObjects call fails. I have 
seen a case in one of our jobs where a single item in the delete batch could not be 
deleted due to an S3 InternalError, causing the whole job to restart. The source 
code I am referring to:  
   
   
[https://github.com/apache/beam/blob/8a88e72f293ef7f9be6c872aa0dda681458c7ca5/sdks/java/io/amazon-web-services/src/main/java/org/apache/beam/sdk/io/aws/s3/S3FileSystem.java#L633](https://github.com/apache/beam/blob/8a88e72f293ef7f9be6c872aa0dda681458c7ca5/sdks/java/io/amazon-web-services/src/main/java/org/apache/beam/sdk/io/aws/s3/S3FileSystem.java#L633)
   
   
   The retry logic could also be applied to other S3 calls in S3FileSystem.
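   A minimal sketch of what such retry behaviour could look like: resubmit only the keys 
reported as failed, up to a bounded number of attempts. The `deleteBatch` function below 
is a hypothetical stand-in for the S3 DeleteObjects call (which reports per-key failures 
in its response), not Beam's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class BatchDeleteRetry {

  /**
   * Attempts a batch delete, resubmitting only the failed keys.
   *
   * @param keys        keys to delete
   * @param deleteBatch stand-in for the DeleteObjects call; given a batch of
   *                    keys, it returns the subset that failed to delete
   * @param maxAttempts maximum number of delete attempts
   * @return keys still undeleted after all attempts (empty on success)
   */
  public static List<String> deleteWithRetry(
      List<String> keys,
      Function<List<String>, List<String>> deleteBatch,
      int maxAttempts) {
    List<String> remaining = new ArrayList<>(keys);
    for (int attempt = 1; attempt <= maxAttempts && !remaining.isEmpty(); attempt++) {
      // Only the keys that failed on the previous attempt are retried.
      remaining = new ArrayList<>(deleteBatch.apply(remaining));
      // A real implementation would sleep with exponential backoff here
      // and retry only errors marked as transient (e.g. InternalError).
    }
    return remaining;
  }
}
```

   With this shape, a single transient InternalError on one key would be retried 
instead of failing the whole bundle.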
   
   Imported from Jira 
[BEAM-6031](https://issues.apache.org/jira/browse/BEAM-6031). Original Jira may 
contain additional context.
   Reported by: pawelbartoszek.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
