[
https://issues.apache.org/jira/browse/HDFS-1606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12993330#comment-12993330
]
Tsz Wo (Nicholas), SZE commented on HDFS-1606:
----------------------------------------------
Below are the proposed new configuration properties.
{code:xml}
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
  <description>
    If there is a datanode/network failure in the write pipeline,
    DFSClient will try to remove the failed datanode from the pipeline
    and then continue writing with the remaining datanodes. As a result,
    the number of datanodes in the pipeline is decreased. This feature
    adds new datanodes to the pipeline.
    This is a site-wide property to enable/disable the feature.
    See also dfs.client.block.write.replace-datanode-on-failure.policy.
  </description>
</property>

<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
  <description>
    This property is used only if the value of
    dfs.client.block.write.replace-datanode-on-failure.enable is true.
    ALWAYS: always add a new datanode when an existing datanode is removed.
    NEVER: never add a new datanode.
    DEFAULT: add a new datanode only if
      (1) the number of datanodes in the pipeline drops from 2 to 1; or
      (2) the block is reopened for append.
  </description>
</property>
{code}
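As a minimal sketch (not part of the attached patch), a client could pick up these properties through the standard Configuration API before opening a FileSystem; the property names are the ones proposed above, while the namenode URI is only a placeholder.
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ReplaceDatanodeOnFailureExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Enable the feature and keep the DEFAULT policy, i.e. add a new
    // datanode only when the pipeline drops from 2 to 1 or when the
    // block is reopened for append.
    conf.setBoolean(
        "dfs.client.block.write.replace-datanode-on-failure.enable", true);
    conf.set(
        "dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");

    // Streams opened through this FileSystem instance would pick up the
    // settings above.  "hdfs://namenode:8020" is a placeholder URI.
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
    System.out.println("Using " + fs.getUri());
    fs.close();
  }
}
{code}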
> Provide a stronger data guarantee in the write pipeline
> -------------------------------------------------------
>
> Key: HDFS-1606
> URL: https://issues.apache.org/jira/browse/HDFS-1606
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: data-node, hdfs client
> Reporter: Tsz Wo (Nicholas), SZE
> Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h1606_20110210.patch
>
>
> In the current design, if there is a datanode/network failure in the write
> pipeline, DFSClient will try to remove the failed datanode from the pipeline
> and then continue writing with the remaining datanodes. As a result, the
> number of datanodes in the pipeline is decreased. Unfortunately, it is
> possible that DFSClient may incorrectly remove a healthy datanode but leave
> the failed datanode in the pipeline because failure detection may be
> inaccurate under erroneous conditions.
> We propose to have a new mechanism for adding new datanodes to the pipeline
> in order to provide a stronger data guarantee.