[
https://issues.apache.org/jira/browse/HDFS-2981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13213103#comment-13213103
]
Todd Lipcon commented on HDFS-2981:
-----------------------------------
I disagree that it should be true by default. Apps like HBase which are
latency-sensitive don't want to wait for a whole block to be re-transferred
when a node in the pipeline fails. Apps that do long-running writes and are
not latency-sensitive (eg log collection) can flip this to true in their own
client configuration, no?
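
The per-client opt-in Todd describes would look something like the following hdfs-site.xml fragment on the writing client (the property names are the real HDFS keys under discussion; the accompanying `.policy` key and its `ALWAYS` value are shown as one plausible pairing, not something this issue prescribes):

```xml
<!-- Client-side hdfs-site.xml for a long-running, non-latency-sensitive
     writer (e.g., a log collector). Enables datanode replacement on
     pipeline failure instead of continuing with a shrunken pipeline. -->
<configuration>
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>true</value>
  </property>
  <!-- Optional: how aggressively to replace a failed datanode.
       ALWAYS replaces on any failure; DEFAULT applies heuristics
       based on replication factor and pipeline size. -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>ALWAYS</value>
  </property>
</configuration>
```

A latency-sensitive client such as HBase would simply leave the `enable` key at `false` (the post-HDFS-2944 default) and avoid the block re-transfer cost.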
> The default value of
> dfs.client.block.write.replace-datanode-on-failure.enable should be true
> ---------------------------------------------------------------------------------------------
>
> Key: HDFS-2981
> URL: https://issues.apache.org/jira/browse/HDFS-2981
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Tsz Wo (Nicholas), SZE
> Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h2981_20120221.patch
>
>
> There was a typo ("ture") in the default value of
> dfs.client.block.write.replace-datanode-on-failure.enable. HDFS-2944 then
> changed it from "ture" to "false". It should be changed to "true".