[
https://issues.apache.org/jira/browse/HADOOP-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Nauroth updated HADOOP-12067:
-----------------------------------
Resolution: Not A Problem
Status: Resolved (was: Patch Available)
After further investigation, [~ivanmi] and [~onpduo] traced this to a problem
in the Azure Storage Java SDK. It would be incorrect to add throttling logic
at the Hadoop layer on top of what the SDK is supposed to provide already, so
we're going to resolve this without committing any Hadoop code changes.
Duo and Ivan, thank you for your efforts investigating this.
> Add exponential retry when copyblob is throttled by Azure storage
> -----------------------------------------------------------------
>
> Key: HADOOP-12067
> URL: https://issues.apache.org/jira/browse/HADOOP-12067
> Project: Hadoop Common
> Issue Type: Bug
> Components: tools
> Affects Versions: 2.7.0
> Reporter: Duo Xu
> Assignee: Duo Xu
> Attachments: HADOOP-12067.01.patch, HADOOP-12067.02.patch,
> HADOOP-12067.03.patch
>
>
> HADOOP-11693 passes an exponential retry policy to the Azure Storage SDK for
> when Azure Storage throttling happens. However, when I looked at the source
> code of the Azure Storage SDK, storage exceptions such as the throttling
> exception are non-retryable.
> I would like to add the retry in the WASB driver instead of depending on the
> Azure Storage SDK, to make sure a retry happens when Azure Storage
> throttling fires.
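The description above proposes exponential backoff around the copy-blob call. A minimal sketch of that pattern, with illustrative names only (MAX_RETRIES, BASE_DELAY_MS, and retryCopy are assumptions for this sketch, not the actual WASB driver or Azure Storage SDK API):

```java
// Illustrative exponential-backoff retry for a throttled operation.
// Constants and method names are hypothetical, not WASB driver code.
public class ExponentialRetry {
    static final int MAX_RETRIES = 5;
    static final long BASE_DELAY_MS = 500;

    // Delay before retrying attempt n (0-based): baseDelay * 2^n.
    static long delayForAttempt(int attempt) {
        return BASE_DELAY_MS * (1L << attempt);
    }

    // Runs op, retrying with exponential backoff until it succeeds
    // or MAX_RETRIES attempts are exhausted.
    static void retryCopy(Runnable op) throws InterruptedException {
        for (int attempt = 0; ; attempt++) {
            try {
                op.run();
                return;
            } catch (RuntimeException e) { // e.g. a throttling (HTTP 503) error
                if (attempt + 1 >= MAX_RETRIES) throw e;
                Thread.sleep(delayForAttempt(attempt));
            }
        }
    }
}
```

With BASE_DELAY_MS = 500, successive waits are 500, 1000, 2000, 4000 ms before the final attempt.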
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)