gianm opened a new pull request, #15938:
URL: https://github.com/apache/druid/pull/15938

   The individual implementations know better than a generic wrapper when they 
should and should not retry. They can also generate better error messages.
   
   Most network-based deep storage implementations already have retry logic, 
except HDFS (which I added in this patch) and Azure. It looks like the Azure 
client itself may have some built-in retry behavior, but I'm not totally sure. 
That one might also need retry wrapping. If anyone has experience with Azure 
and knows the answer to this, please let me know. For now, I've left it 
without retry wrapping.
   
   The inspiration for this patch was a situation where an EntityTooLarge error 
generated by S3DataSegmentPusher was retried uselessly by the retry harness in 
PartialSegmentMergeTask.
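   To illustrate the idea, here is a minimal sketch (not Druid's actual code; 
the helper, class names, and error classification are hypothetical) of a retry 
helper where the implementation supplies a predicate deciding retryability, so 
a permanent error like EntityTooLarge fails fast instead of being retried 
uselessly:

   ```java
   import java.util.concurrent.Callable;
   import java.util.function.Predicate;

   public class RetrySketch
   {
     // Hypothetical retry helper: the caller supplies a predicate that decides
     // whether a given failure is worth retrying.
     static <T> T retry(Callable<T> task, Predicate<Throwable> shouldRetry, int maxTries)
         throws Exception
     {
       for (int attempt = 1; ; attempt++) {
         try {
           return task.call();
         }
         catch (Exception e) {
           if (attempt >= maxTries || !shouldRetry.test(e)) {
             throw e; // permanent error, or out of attempts: fail fast
           }
         }
       }
     }

     // Example classifier in the spirit of the patch: a permanent error such as
     // S3's EntityTooLarge is not retryable, while transient failures are.
     static boolean isTransient(Throwable t)
     {
       return t.getMessage() == null || !t.getMessage().contains("EntityTooLarge");
     }

     public static void main(String[] args) throws Exception
     {
       // Transient failure: succeeds on the second attempt.
       int[] calls = {0};
       String result = retry(() -> {
         if (calls[0]++ == 0) {
           throw new RuntimeException("connection reset");
         }
         return "pushed";
       }, RetrySketch::isTransient, 3);
       System.out.println(result + " after " + calls[0] + " call(s)");

       // Permanent failure: thrown immediately, no useless retries.
       int[] calls2 = {0};
       try {
         retry(() -> {
           calls2[0]++;
           throw new RuntimeException("EntityTooLarge");
         }, RetrySketch::isTransient, 3);
       }
       catch (RuntimeException e) {
         System.out.println("gave up after " + calls2[0] + " call(s)");
       }
     }
   }
   ```

   The key design point is that the retryability decision lives next to the code 
that knows the error semantics, rather than in a generic harness that can only 
retry everything.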


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

