amogh-jahagirdar commented on code in PR #11052:
URL: https://github.com/apache/iceberg/pull/11052#discussion_r1777340404
##########
aws/src/main/java/org/apache/iceberg/aws/s3/S3FileIOProperties.java:
##########
@@ -393,6 +403,21 @@ public class S3FileIOProperties implements Serializable {
*/
  private static final String S3_FILE_IO_USER_AGENT = "s3fileio/" + EnvironmentContext.get();
+ /** Number of times to retry S3 operations. */
+ public static final String S3_RETRY_NUM_RETRIES = "s3.retry.num-retries";
+
+ public static final int S3_RETRY_NUM_RETRIES_DEFAULT = 32;
Review Comment:
   > One downside would be that if there is a more systemic error in communications with S3, workloads will take longer time to fail, up to 10 minutes, but I would argue that those will be more on the rare side compared to the value it brings for eventually succeeding workloads.
   Thanks for your patience; yes, this is exactly the part I'm mulling over. I can see a rationale for retrying throttling errors specifically this many times for larger workloads, but if I'm reading this PR right, this retry count applies to all retryable exceptions. So I'm double-checking whether such a high number of retries is too broad when it covers every possible retryable exception, not just throttling.
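   To illustrate the distinction being raised, here is a minimal sketch (not the PR's implementation) of a retry decision that keeps a large retry budget for throttling errors while capping other retryable errors at a smaller limit. The class name and the `MAX_OTHER_RETRIES` value are hypothetical; only the 32 comes from `S3_RETRY_NUM_RETRIES_DEFAULT` in the diff.

   ```java
   // Hypothetical sketch: give throttling errors a larger retry budget than
   // other retryable errors, instead of one shared count for everything.
   public class RetryPolicySketch {

     // 32 matches S3_RETRY_NUM_RETRIES_DEFAULT from the PR; the cap for
     // non-throttling retryable errors is an assumed value for illustration.
     static final int MAX_THROTTLING_RETRIES = 32;
     static final int MAX_OTHER_RETRIES = 5;

     /** Decide whether a failed attempt should be retried. */
     static boolean shouldRetry(int attemptsSoFar, boolean retryable, boolean throttled) {
       if (!retryable) {
         return false; // non-retryable errors fail immediately
       }
       int limit = throttled ? MAX_THROTTLING_RETRIES : MAX_OTHER_RETRIES;
       return attemptsSoFar < limit;
     }
   }
   ```

   For what it's worth, the AWS SDK for Java 2.x retry machinery allows composing retry conditions (including a throttling-specific condition), so a split budget along these lines may be expressible there, though whether that fits this PR is for the authors to judge.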
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]