[
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Lisheng Sun updated HDFS-6524:
------------------------------
Attachment: HDFS-6524.005.patch
> Choosing datanode retries times considering with block replica number
> ----------------------------------------------------------------------
>
> Key: HDFS-6524
> URL: https://issues.apache.org/jira/browse/HDFS-6524
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Affects Versions: 3.0.0-alpha1
> Reporter: Liang Xie
> Assignee: Lisheng Sun
> Priority: Minor
> Labels: BB2015-05-TBR
> Attachments: HDFS-6524.001.patch, HDFS-6524.002.patch,
> HDFS-6524.003.patch, HDFS-6524.004.patch, HDFS-6524.005.patch,
> HDFS-6524.005.patch, HDFS-6524.txt
>
>
> Currently chooseDataNode() retries based on the setting
> dfsClientConf.maxBlockAcquireFailures, which defaults to 3
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better
> to also take the block replication factor into account, e.g. on a
> cluster configured with only two block replicas, or when using a
> Reed-Solomon erasure coding solution with a single replica. This helps
> reduce long-tail read latency.
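> A minimal sketch of the idea (method and parameter names below are
> illustrative assumptions, not the actual patch): cap the retry budget
> by the replica count of the block being read, so a block with fewer
> replicas does not pay for three full rounds of failed reads.
>
>     // Sketch only: derive the per-block retry budget from the replica
>     // count instead of always using the global default of 3.
>     // Method and parameter names here are hypothetical.
>     static int retryBudget(int maxBlockAcquireFailures, int replicaCount) {
>         // A block with 2 replicas (or 1, e.g. under erasure coding) gains
>         // nothing from extra retry rounds once every replica has failed.
>         return Math.min(maxBlockAcquireFailures, Math.max(replicaCount, 1));
>     }
>
> For example, with the default of 3 and a block that has 2 replicas the
> budget becomes 2; for a single-replica block it becomes 1.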
--
This message was sent by Atlassian Jira
(v8.3.2#803003)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]