[
https://issues.apache.org/jira/browse/HADOOP-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16643923#comment-16643923
]
Steve Loughran commented on HADOOP-15834:
-----------------------------------------
Assuming that "exists but inactive" means a capacity reallocation is in
progress, we should catch and log the condition and apply the batch retry
policy. Key point: we can and should wait longer than just the SDK's own
retries; see the sketch below.
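A minimal sketch of that idea, assuming the v1 AWS SDK
(com.amazonaws.services.dynamodbv2) and Hadoop's
org.apache.hadoop.io.retry.RetryPolicies; the TableStateWaiter class, the
awaitTableActive method, and the retry/backoff values are illustrative
placeholders, not actual S3Guard code:
{code:java}
import java.io.IOException;
import java.util.concurrent.TimeUnit;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.TableStatus;

import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class TableStateWaiter {
  private static final Logger LOG =
      LoggerFactory.getLogger(TableStateWaiter.class);

  // Illustrative limits only; real values would come from s3guard config.
  private final RetryPolicy batchRetryPolicy =
      RetryPolicies.exponentialBackoffRetry(9, 250, TimeUnit.MILLISECONDS);

  void awaitTableActive(AmazonDynamoDB ddb, String tableName) throws Exception {
    int retries = 0;
    while (true) {
      String status = ddb.describeTable(tableName).getTable().getTableStatus();
      if (TableStatus.ACTIVE.toString().equals(status)) {
        return;
      }
      // Exists but inactive: treat as capacity reallocation; log and back off.
      LOG.info("Table {} in state {}; waiting (attempt {})",
          tableName, status, retries);
      RetryPolicy.RetryAction action = batchRetryPolicy.shouldRetry(
          new IOException("table " + tableName + " not ACTIVE"),
          retries++, 0, true);
      if (action.action != RetryPolicy.RetryAction.RetryDecision.RETRY) {
        throw new IOException("Table " + tableName
            + " still not ACTIVE after " + retries + " attempts");
      }
      // Sleeping here, in our code, means the total wait is not capped by
      // the SDK's internal retry configuration.
      Thread.sleep(action.delayMillis);
    }
  }
}
{code}
Because the sleep happens in the caller rather than inside the SDK call,
the wait can be as long as our own policy allows, independent of the SDK's
retry settings.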
> Improve throttling on S3Guard DDB batch retries
> -----------------------------------------------
>
> Key: HADOOP-15834
> URL: https://issues.apache.org/jira/browse/HADOOP-15834
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.2.0
> Reporter: Steve Loughran
> Priority: Major
>
> The batch throttling may fail too fast.
> If a batch update contains 25 writes but the default retry count is nine
> attempts, then under severe throttling each attempt may write only a
> single item, so the batch exhausts its retry budget after writing at most
> nine of the 25 items, even though every attempt successfully wrote data.
> In contrast, a single write of one item gets the same number of attempts,
> so 25 individual writes can tolerate far more throttling than one bulk
> write of the same 25 items.
> Proposed: make the retry logic more forgiving of batch writes, for
> example by not counting a batch call in which at least one item was
> written as a failure (see the sketch below).
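A minimal sketch of that proposed progress-aware retry, assuming the v1 AWS
SDK's batchWriteItem/getUnprocessedItems API; ProgressAwareBatchWriter,
writeBatch, and the limit and backoff values are hypothetical names and
numbers, not S3Guard code. The only change from a conventional retry loop is
that attempts which wrote at least one item reset the failure counter:
{code:java}
import java.io.IOException;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;
import com.amazonaws.services.dynamodbv2.model.WriteRequest;

class ProgressAwareBatchWriter {
  // Illustrative limit; mirrors the "nine attempts" default from the report.
  private static final int MAX_NO_PROGRESS_ATTEMPTS = 9;

  void writeBatch(AmazonDynamoDB ddb, Map<String, List<WriteRequest>> pending)
      throws IOException, InterruptedException {
    int noProgressAttempts = 0;
    long backoffMillis = 100;
    while (!pending.isEmpty()) {
      int before = countItems(pending);
      try {
        pending = ddb.batchWriteItem(pending).getUnprocessedItems();
      } catch (ProvisionedThroughputExceededException e) {
        // SDK-level retries exhausted with nothing written: leave "pending"
        // unchanged so this attempt counts as zero progress below.
      }
      if (countItems(pending) < before) {
        // At least one item was written: progress, so reset the failure
        // count instead of consuming the retry budget.
        noProgressAttempts = 0;
        backoffMillis = 100;
      } else if (++noProgressAttempts >= MAX_NO_PROGRESS_ATTEMPTS) {
        throw new IOException("Batch write made no progress after "
            + noProgressAttempts + " consecutive throttled attempts");
      }
      if (!pending.isEmpty()) {
        Thread.sleep(backoffMillis);   // wait beyond the SDK's own retries
        backoffMillis = Math.min(backoffMillis * 2, 10_000);
      }
    }
  }

  private static int countItems(Map<String, List<WriteRequest>> m) {
    return m.values().stream().mapToInt(List::size).sum();
  }
}
{code}
Under this policy, a batch of 25 items that writes one item per throttled
attempt completes after 25 calls instead of failing at nine, while a write
that is fully stalled still fails after nine consecutive zero-progress
attempts.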
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)