[ https://issues.apache.org/jira/browse/HADOOP-18132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Zhuge resolved HADOOP-18132.
---------------------------------
    Resolution: Not A Problem

S3A already retries on S3 errors, including throttling ("SlowDown") responses. For details, see 
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Retry_and_Recovery.
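
The throttle backoff there is also tunable. As a pointer only, a minimal core-site.xml sketch, assuming the fs.s3a.retry.* property names described on that page; the values shown are illustrative assumptions, not authoritative defaults:

    <!-- Illustrative tuning sketch; consult the linked docs for real defaults. -->
    <property>
      <name>fs.s3a.retry.limit</name>
      <value>7</value>            <!-- attempts for non-throttle failures -->
    </property>
    <property>
      <name>fs.s3a.retry.interval</name>
      <value>500ms</value>        <!-- base delay between those attempts -->
    </property>
    <property>
      <name>fs.s3a.retry.throttle.limit</name>
      <value>20</value>           <!-- attempts when S3 reports 503 SlowDown -->
    </property>
    <property>
      <name>fs.s3a.retry.throttle.interval</name>
      <value>100ms</value>        <!-- base throttle delay, grows with jitter -->
    </property>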

> S3 exponential backoff
> ----------------------
>
>                 Key: HADOOP-18132
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18132
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>            Reporter: Holden Karau
>            Priority: Major
>
> The S3 API has request-rate limits that we can exceed when running a large 
> number of writers, readers, or listers. We should add randomized exponential 
> backoff to the S3 client when it encounters:
>  
> com.amazonaws.services.s3.model.AmazonS3Exception: Please reduce your request 
> rate. (Service: Amazon S3; Status Code: 503; Error Code: SlowDown; 

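For illustration only, since the resolution above notes S3A already implements this: a minimal sketch of randomized ("full jitter") exponential backoff around an S3 listing call, using the AWS SDK v1 client that the quoted exception comes from. The class name, maxRetries, and capMillis are hypothetical choices, not S3A's actual policy.

    import java.util.concurrent.ThreadLocalRandom;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.model.AmazonS3Exception;
    import com.amazonaws.services.s3.model.ObjectListing;

    public final class S3BackoffSketch {
        // Retry a bucket listing, backing off with full jitter on 503 SlowDown.
        public static ObjectListing listWithBackoff(AmazonS3 s3, String bucket)
                throws InterruptedException {
            final int maxRetries = 7;      // illustrative bound, not an S3A default
            final long capMillis = 30_000; // ceiling on any single sleep
            for (int attempt = 0; ; attempt++) {
                try {
                    return s3.listObjects(bucket);
                } catch (AmazonS3Exception e) {
                    // Only back off on throttling; rethrow anything else,
                    // or once the retry budget is exhausted.
                    if (e.getStatusCode() != 503 || attempt >= maxRetries) {
                        throw e;
                    }
                    // Exponential ceiling: 100ms * 2^attempt, capped.
                    long ceiling = Math.min(capMillis, 100L << attempt);
                    // Full jitter: sleep a uniform random time in [0, ceiling].
                    Thread.sleep(ThreadLocalRandom.current().nextLong(ceiling + 1));
                }
            }
        }
    }

The randomization matters: without jitter, many throttled clients retry in lockstep and hit the same rate limit again on each attempt.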


--
This message was sent by Atlassian Jira
(v8.20.1#820001)
