Roman Shaposhnik created BIGTOP-2576:
----------------------------------------
Summary: For small clusters it is useful to turn replace-datanode-on-failure off
Key: BIGTOP-2576
URL: https://issues.apache.org/jira/browse/BIGTOP-2576
Project: Bigtop
Issue Type: Improvement
Components: deployment
Affects Versions: 1.1.0
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
Fix For: 1.2.0
As per documentation in hdfs-default.xml
{noformat}
If there is a datanode/network failure in the write pipeline, DFSClient will
try to remove the failed datanode from the pipeline and then continue writing
with the remaining datanodes. As a result, the number of datanodes in the
pipeline is decreased. The feature is to add new datanodes to the pipeline.
This is a site-wide property to enable/disable the feature. When the cluster
size is extremely small, e.g. 3 nodes or less, cluster administrators may want
to set the policy to NEVER in the default configuration file or disable this
feature. Otherwise, users may experience an unusually high rate of pipeline
failures since it is impossible to find new datanodes for replacement. See also
dfs.client.block.write.replace-datanode-on-failure.policy
{noformat}
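A minimal hdfs-site.xml sketch of the change this describes, using the two related properties documented in hdfs-default.xml (setting the policy to NEVER, or alternatively disabling the feature entirely via the enable flag; exact values a deployment picks are a site decision):

```xml
<!-- Sketch for a small (<= 3 node) cluster: stop the DFSClient from
     trying to replace a failed datanode in the write pipeline. -->
<configuration>
  <!-- Option 1: keep the feature enabled but set the policy to NEVER. -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
  </property>
  <!-- Option 2: disable the feature outright. -->
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>false</value>
  </property>
</configuration>
```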
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)