[
https://issues.apache.org/jira/browse/HDFS-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13915151#comment-13915151
]
Brandon Li commented on HDFS-6016:
----------------------------------
The patch looks good. Some nitpicks:
* also need to update hdfs-default.xml for the property description of dfs.client.block.write.replace-datanode-on-failure.policy
* a couple of typos in the comments of getMinimumNumberOfReplicasAllowed()
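For reference, hdfs-default.xml documents each property with a `<property>` entry; a sketch of what the updated entry might look like (the description wording here is illustrative, not the actual patch text):

```xml
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
  <description>
    Policy for replacing a failed datanode in the write pipeline.
    Valid values are ALWAYS, NEVER, and DEFAULT. This property is
    only used when
    dfs.client.block.write.replace-datanode-on-failure.enable is true.
  </description>
</property>
```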
> Update datanode replacement policy to make writes more robust
> -------------------------------------------------------------
>
> Key: HDFS-6016
> URL: https://issues.apache.org/jira/browse/HDFS-6016
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: datanode, ha, hdfs-client, namenode
> Reporter: Kihwal Lee
> Assignee: Kihwal Lee
> Attachments: HDFS-6016.patch, HDFS-6016.patch
>
>
> As discussed in HDFS-5924, writers that are down to only one node due to
> node failures can suffer if a DN does not restart in time. We do not worry
> about writes that began with a single replica.
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)