[ 
https://issues.apache.org/jira/browse/HADOOP-16188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794339#comment-16794339
 ] 

Steve Loughran commented on HADOOP-16188:
-----------------------------------------

The AWS SDK transfer manager is meant to do the retrying itself, hence the 
once() invocation of the operation: we don't bother retrying ourselves.

We need to question that assumption, but at the same time not double-retry on 
failures the transfer manager has already retried.

I'm starting to wonder if it's time to stop relying on the transfer manager 
and take on some of its work ourselves? Or is that a distraction? 

For now: what about invoking the copy call with a retry policy which only 
retries on a 200 response carrying a server-side error? For everything else we 
assume the transfer manager has already made a best effort.
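The proposed policy could be expressed as a small classifier over the status
code and error code the SDK exception reports. This is a hypothetical sketch
of the idea only; the class and method names below are invented for
illustration and are not the actual S3A Invoker/RetryPolicy classes.

```java
/**
 * Hypothetical sketch of the proposed policy: only treat the S3
 * "HTTP 200 but InternalError in the body" copy failure as retriable;
 * everything else is assumed to have already been retried by the
 * transfer manager.
 */
public class CopyRetryClassifier {

    /**
     * Decide whether a failed copy-part should be retried by us.
     *
     * @param statusCode HTTP status code from the SDK exception
     * @param errorCode  S3 error code from the response body; may be null
     * @return true only for the 200 + InternalError quirk of HADOOP-16188
     */
    public static boolean shouldRetry(int statusCode, String errorCode) {
        // The quirk in HADOOP-16188: S3 returns HTTP 200 with an
        // InternalError body, which the SDK surfaces without retrying.
        return statusCode == 200 && "InternalError".equals(errorCode);
    }

    public static void main(String[] args) {
        // 200 + InternalError: the transfer manager gave up, so we retry.
        System.out.println(shouldRetry(200, "InternalError")); // true
        // A plain 500: the transfer manager already made a best effort.
        System.out.println(shouldRetry(500, "InternalError")); // false
        // A clean 200 with no error body is a success, not a retry case.
        System.out.println(shouldRetry(200, null));            // false
    }
}
```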


To backport this I'm going to cherry-pick the invoker code from the S3A 
committer into 3.0 and branch-2, but *only the invoke/retry classes, none of 
the actual usages*. It just sets things up for a fix for this.





> s3a rename failed during copy, "Unable to copy part" + 200 error code
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-16188
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16188
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Priority: Minor
>
> Error during a rename where AWS S3 seems to have some internal error which 
> is not retried and returns status code 200:
> {code}
> com.amazonaws.SdkClientException: Unable to copy part: We encountered an 
> internal error. Please try again. (Service: Amazon S3; Status Code: 200; 
> Error Code: InternalError;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
