Buddy created HDFS-5434:
---------------------------
Summary: Write resiliency for replica count 1
Key: HDFS-5434
URL: https://issues.apache.org/jira/browse/HDFS-5434
Project: Hadoop HDFS
Issue Type: Bug
Components: namenode
Affects Versions: 2.2.0
Reporter: Buddy
Priority: Minor
If a file has a replica count of one, the HDFS client is exposed to write
failures if the data node fails during a write. With a pipeline of size one,
no recovery is possible if the sole data node dies.
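For concreteness, a minimal client-side sketch of the exposed case (the path,
buffer size, and payload are arbitrary examples): the file is created with
replication 1, so the write pipeline contains exactly one data node.
{code:java}
// Minimal sketch of the exposed case: replication 1 means a write pipeline
// with a single data node and no recovery path if that node dies mid-write.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SingleReplicaWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/tmp/one-replica.dat");
    short replication = 1;                        // replica count of one
    long blockSize = fs.getDefaultBlockSize(file);

    // If the sole data node in the pipeline fails while this stream is open,
    // the client has no other pipeline node to fall back to.
    try (FSDataOutputStream out =
             fs.create(file, true, 4096, replication, blockSize)) {
      out.writeBytes("data written through a pipeline of size one");
    }
  }
}
{code}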
A simple fix is to force a minimum pipeline size of 2, while leaving the
replication count as 1. The implementation for this is fairly non-invasive.
Although the replica count is one, the block will be written to two data nodes
instead of one. If one of the data nodes fails during the write, normal
pipeline recovery will ensure that the write succeeds to the other data node.
The existing code in the name node will prune the extra replica when it
receives the block received reports for the finalized block from both data
nodes. This results in the intended replica count of one for the block.
This behavior should be controlled by a configuration option such as
dfs.namenode.minPipelineSize.
This behavior can be implemented in FSNamesystem.getAdditionalBlock by ensuring
that the replication value passed to BlockPlacementPolicy.chooseTarget, which
determines the pipeline size, is at least:
max(replication, ${dfs.namenode.minPipelineSize})
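A minimal sketch of that clamping, assuming the name node reads the proposed
key and applies the max() before calling BlockPlacementPolicy.chooseTarget; the
class, helper method, and default value below are illustrative, not an actual
patch:
{code:java}
// Sketch only: the key/default and helper are illustrative; the real change
// would live on the namenode side, in FSNamesystem.getAdditionalBlock.
import org.apache.hadoop.conf.Configuration;

public class MinPipelineSizeSketch {
  static final String MIN_PIPELINE_SIZE_KEY = "dfs.namenode.minPipelineSize";
  static final int MIN_PIPELINE_SIZE_DEFAULT = 1;  // 1 preserves current behavior

  /**
   * Pipeline size to request from BlockPlacementPolicy.chooseTarget:
   * max(replication, dfs.namenode.minPipelineSize).
   */
  static int effectivePipelineSize(Configuration conf, short replication) {
    int minPipelineSize = conf.getInt(MIN_PIPELINE_SIZE_KEY, MIN_PIPELINE_SIZE_DEFAULT);
    return Math.max(replication, minPipelineSize);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setInt(MIN_PIPELINE_SIZE_KEY, 2);
    System.out.println(effectivePipelineSize(conf, (short) 1)); // 2: replica-1 files get a 2-node pipeline
    System.out.println(effectivePipelineSize(conf, (short) 3)); // 3: higher replication is unaffected
  }
}
{code}
The extra target is only used for pipeline resiliency during the write; as
described above, the name node prunes it once the finalized block is reported,
so the stored replica count remains one.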
--
This message was sent by Atlassian JIRA
(v6.1#6144)