[ https://issues.apache.org/jira/browse/HADOOP-13664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526487#comment-15526487 ]

Steve Loughran commented on HADOOP-13664:
-----------------------------------------

There's something else to consider: would we ever want a retry policy for all 
the FS-level operations (GET, HEAD), or should that code fail fast? I think 
we'd need more traces of in-the-field failures to see how easy it would be to 
distinguish transient network errors from unrecoverable problems. Presumably, 
any HTTP error status code would be treated as unrecoverable; I don't know how 
the other failures would present themselves, or how they already get handled in 
the AWS stack...we know there is some retry logic there.
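For illustration only, here is a minimal sketch of what a hard-coded bounded-retry policy with sleep might look like. The names, limits, and the transient/unrecoverable split are all assumptions for the sketch, not Hadoop's actual RetryPolicy API:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Hypothetical sketch: retry a read a fixed number of times, sleeping
// between attempts, and fail fast on errors judged unrecoverable.
public class ReadRetrySketch {

  static final int MAX_RETRIES = 3;     // retries after the first attempt
  static final long SLEEP_MILLIS = 100; // fixed sleep between attempts

  // Assumption for this sketch: a plain IOException is a transient network
  // problem worth retrying; anything else (e.g. an exception carrying an
  // HTTP error status) is treated as unrecoverable.
  static boolean isTransient(Exception e) {
    return e instanceof IOException;
  }

  static <T> T readWithRetries(Callable<T> read) throws Exception {
    Exception last = null;
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
      try {
        return read.call();
      } catch (Exception e) {
        if (!isTransient(e)) {
          throw e;                   // fail fast on unrecoverable errors
        }
        last = e;
        Thread.sleep(SLEEP_MILLIS);  // back off before the next attempt
      }
    }
    throw last;                      // retries exhausted
  }
}
```

Hadoop already ships generic building blocks along these lines in {{org.apache.hadoop.io.retry.RetryPolicies}}, which could be reused rather than rolling a new loop.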

> S3AInputStream to use a retry policy on read failures
> -----------------------------------------------------
>
>                 Key: HADOOP-13664
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13664
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Steve Loughran
>            Priority: Minor
>
> {{S3AInputStream}} has some retry logic to handle failures on a read: log and 
> retry. We should move this over to a (possibly hard-coded) {{RetryPolicy}} 
> with some sleep logic, so that longer-than-transient read failures can be 
> handled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
