[
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455557#comment-15455557
]
Steve Loughran commented on HADOOP-13560:
-----------------------------------------
This is a (potentially transient) failure while trying to commit the operation.
This could be pretty dramatic when committing a large file: better to have a
retry policy. Going for a hard-coded
{{retryUpToMaximumCountWithProportionalSleep}} policy initially. We may need to
determine what error code, if any, is raised for an unknown/already-completed
upload, and not retry on those, instead treating them as a sign that the
operation previously completed.
Also: do the same for abort().
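A minimal sketch of what that retry wrapper could look like, using Hadoop's
real org.apache.hadoop.io.retry API. The Callable standing in for the SDK's
complete-multipart-upload call, and the retry count/sleep values, are
illustrative assumptions, not the final patch:

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

import com.amazonaws.AmazonClientException;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.io.retry.RetryPolicy.RetryAction;

class CommitRetry {

  // hard-coded policy: up to 5 attempts, sleeping (attempt * 2s) between them
  // (counts and sleep time are assumptions for the sketch)
  private static final RetryPolicy POLICY =
      RetryPolicies.retryUpToMaximumCountWithProportionalSleep(
          5, 2000, TimeUnit.MILLISECONDS);

  static <T> T withRetry(Callable<T> commitOperation) throws Exception {
    int retries = 0;
    while (true) {
      try {
        return commitOperation.call();
      } catch (AmazonClientException e) {
        // TODO: once the error code for an unknown/already-completed upload
        // is identified, map it to success here instead of retrying.
        RetryAction action = POLICY.shouldRetry(e, retries++, 0, true);
        if (action.action != RetryAction.RetryDecision.RETRY) {
          throw e;
        }
        Thread.sleep(action.delayMillis);
      }
    }
  }
}
{code}

The same wrapper would apply to abort(), with the caveat above: an
already-completed upload should be treated as success, not retried.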
> S3A to support huge file writes and operations -with tests
> ----------------------------------------------------------
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.9.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really
> works.
> 2. Verify that the metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very
> large commit operations by committers using rename.
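A hedged sketch of the metadata-preservation check described above, assuming a
test harness that wires up the S3A FileSystem {{fs}}, an AmazonS3 client
{{s3}} bound to the same bucket, and a multipart-copy threshold low enough
that a 6 MB file takes the multipart-copy path; all names and sizes here are
illustrative, not the final test:

{code:java}
import static org.junit.Assert.assertEquals;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.contract.ContractTestUtils;
import org.junit.Test;

public class ITestS3AHugeRenameMetadata {

  // wired up by the test harness (assumption)
  private FileSystem fs;
  private AmazonS3 s3;
  private String bucket;

  @Test
  public void testRenamePreservesMetadata() throws Exception {
    Path src = new Path("/tests3a/huge-src");
    Path dst = new Path("/tests3a/huge-dst");
    // write a file big enough to take the multipart-copy path on rename
    // (size relative to the threshold is an assumption of this sketch)
    ContractTestUtils.createFile(fs, src, true,
        ContractTestUtils.dataset(6 * 1024 * 1024, 'a', 'z'));
    ObjectMetadata before = s3.getObjectMetadata(bucket, "tests3a/huge-src");
    fs.rename(src, dst);
    ObjectMetadata after = s3.getObjectMetadata(bucket, "tests3a/huge-dst");
    // the SDK issue above is about user metadata being dropped on large copies
    assertEquals(before.getUserMetadata(), after.getUserMetadata());
  }
}
{code}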