[
https://issues.apache.org/jira/browse/HDFS-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tsz Wo (Nicholas), SZE updated HDFS-3504:
-----------------------------------------
Attachment: h3504_20120607.patch
h3504_20120607.patch: adds a new conf property dfs.client.retry.max for the
DFSClient RPC retry policy, as follows:
- If dfs.client.retry.max == 0, use TRY_ONCE_THEN_FAIL.
- If dfs.client.retry.max > 0, then
  1. use exponential backoff for
     * SafeModeException, or
     * IOException other than RemoteException; and
  2. use TRY_ONCE_THEN_FAIL for
     * non-SafeMode RemoteException, or
     * non-IOException.
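The decision rules above can be sketched as follows. This is an illustrative
sketch, not the patch itself: the class RetryPolicyChooser, its choose()
method, and the nested exception classes are stand-ins invented here for
Hadoop's RemoteException and SafeModeException; only the property name
dfs.client.retry.max and the policy names come from the patch description.

```java
import java.io.IOException;

// Hypothetical sketch of how a client could pick a retry policy from the
// dfs.client.retry.max setting, mirroring the rules listed above.
public class RetryPolicyChooser {

    enum Policy { TRY_ONCE_THEN_FAIL, EXPONENTIAL_BACKOFF }

    // Stand-ins for org.apache.hadoop.ipc.RemoteException and the
    // NameNode's SafeModeException (which arrives wrapped as a remote
    // error over RPC, hence the subclassing here).
    static class RemoteException extends IOException {}
    static class SafeModeException extends RemoteException {}

    /** Pick a policy for a failed call, given the dfs.client.retry.max value. */
    static Policy choose(int retryMax, Exception failure) {
        if (retryMax == 0) {
            return Policy.TRY_ONCE_THEN_FAIL;      // retries disabled entirely
        }
        if (failure instanceof SafeModeException) {
            return Policy.EXPONENTIAL_BACKOFF;     // NN in safe mode: back off and retry
        }
        if (failure instanceof RemoteException) {
            return Policy.TRY_ONCE_THEN_FAIL;      // other remote errors: fail fast
        }
        if (failure instanceof IOException) {
            return Policy.EXPONENTIAL_BACKOFF;     // local/transient I/O error: retry
        }
        return Policy.TRY_ONCE_THEN_FAIL;          // non-IOException: fail fast
    }
}
```

In the real patch the policies would be instances of Hadoop's RetryPolicy
interface rather than an enum; the sketch only captures the selection logic.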
Still need to add some tests.
> Configurable retry in DFSClient
> -------------------------------
>
> Key: HDFS-3504
> URL: https://issues.apache.org/jira/browse/HDFS-3504
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 1.0.0, 2.0.0-alpha
> Reporter: Siddharth Seth
> Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h3504_20120607.patch
>
>
> When NN maintenance is performed on a large cluster, jobs end up failing.
> This is particularly bad for long-running jobs. The client retry policy could
> be made configurable so that jobs don't need to be restarted.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira