[ https://issues.apache.org/jira/browse/HADOOP-18132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17495352#comment-17495352 ]
John Zhuge edited comment on HADOOP-18132 at 2/21/22, 2:49 PM:
---------------------------------------------------------------
S3A already performs retries with exponential backoff on certain S3 errors. For
details, please check out
[https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Retry_and_Recovery].
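The throttle-specific behavior that page describes can be tuned in core-site.xml; a minimal sketch, assuming the fs.s3a.retry.throttle.* options documented there (the values shown are illustrative, not recommendations):

<property>
  <name>fs.s3a.retry.throttle.limit</name>
  <value>20</value>
  <description>Number of times to retry a request that was throttled.</description>
</property>
<property>
  <name>fs.s3a.retry.throttle.interval</name>
  <value>500ms</value>
  <description>Initial interval between throttle retries; S3A backs off from here.</description>
</property>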
was (Author: jzhuge):
S3A already performs retries on S3 errors. For details, please check out
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Retry_and_Recovery.
> S3A to support exponential backoff when throttled
> -------------------------------------------------
>
> Key: HADOOP-18132
> URL: https://issues.apache.org/jira/browse/HADOOP-18132
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Reporter: Holden Karau
> Priority: Major
>
> The S3 API has request-rate limits that we can exceed when using a large
> number of writers, readers, or listers. We should add randomized
> exponential backoff to the S3 client when it encounters:
>
> com.amazonaws.services.s3.model.AmazonS3Exception: Please reduce your request
> rate. (Service: Amazon S3; Status Code: 503; Error Code: SlowDown;
>
>
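A minimal sketch of the randomized ("full jitter") exponential backoff requested above; illustrative only, not S3A's actual retry code, and the isThrottle check and all parameters are hypothetical:

import java.util.Random;
import java.util.concurrent.Callable;

public final class RandomizedBackoff {

  private static final Random RANDOM = new Random();

  /**
   * Invoke {@code call}, retrying up to {@code maxRetries} times when it
   * fails with a throttling error, sleeping a random interval whose upper
   * bound doubles on each attempt (capped at {@code maxDelayMs}).
   */
  static <T> T callWithBackoff(Callable<T> call, int maxRetries,
      long baseDelayMs, long maxDelayMs) throws Exception {
    for (int attempt = 0; ; attempt++) {
      try {
        return call.call();
      } catch (Exception e) {
        if (attempt >= maxRetries || !isThrottle(e)) {
          throw e; // out of retries, or not a throttling failure
        }
        // Upper bound grows as base * 2^attempt, capped to avoid overflow
        // and to respect maxDelayMs; the sleep is uniform in [0, bound).
        long bound = Math.min(maxDelayMs, baseDelayMs << Math.min(attempt, 20));
        Thread.sleep((long) (RANDOM.nextDouble() * bound));
      }
    }
  }

  /**
   * Hypothetical throttle check; real code would inspect the
   * AmazonS3Exception status code (503) / error code ("SlowDown").
   */
  private static boolean isThrottle(Exception e) {
    return e.getMessage() != null && e.getMessage().contains("SlowDown");
  }
}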