[ 
https://issues.apache.org/jira/browse/HDFS-5434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Buddy updated HDFS-5434:
------------------------

    Attachment: BlockPlacementPolicyMinPipelineSizeWithNodeGroup.java
                BlockPlacementPolicyMinPipelineSize.java

Arpit, that was a good idea about BlockPlacementPolicy. I implemented and 
attached two new policies that honor minPipelineSize: one extends the Default 
policy and one extends the NodeGroup policy.
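
Roughly, both policies just widen the replica count that they hand to the 
parent policy's target selection. Below is a minimal standalone sketch of that 
adjustment, not the attached code; the helper class and method names are 
illustrative only, while dfs.namenode.minPipelineSize is the key proposed in 
this issue.

{code}
// Illustrative sketch only -- not the attached BlockPlacementPolicyMinPipelineSize.
// In the real policies, each chooseTarget() overload would delegate to the
// parent policy with the requested replica count widened to the configured
// minimum pipeline size.
import org.apache.hadoop.conf.Configuration;

public class MinPipelineSizeSupport {
  public static final String MIN_PIPELINE_SIZE_KEY = "dfs.namenode.minPipelineSize";

  private final int minPipelineSize;

  public MinPipelineSizeSupport(Configuration conf) {
    // A default of 1 keeps today's behavior when the key is unset.
    this.minPipelineSize = conf.getInt(MIN_PIPELINE_SIZE_KEY, 1);
  }

  /** Replica count to request from the underlying placement policy. */
  public int effectivePipelineSize(int requestedReplication) {
    return Math.max(requestedReplication, minPipelineSize);
  }
}
{code}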

I think this will work as well as the name node patch.
I personally prefer the name node patch because it seems cleaner to me: 
determining the number of nodes in the pipeline is independent of deciding 
which nodes to pick for the pipeline. If new block placement policies are 
implemented, each of them would need to be extended to add this feature. 
(Would we be able to add these block placement policies to trunk?)
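
For reference, the name node approach boils down to widening the pipeline size 
before target selection, roughly as sketched below. This is not the attached 
HDFS_5434.patch; the variable names are illustrative.

{code}
// Sketch only -- not the attached HDFS_5434.patch.
// In FSNamesystem.getAdditionalBlock(), compute the pipeline size once and
// pass it to BlockPlacementPolicy.chooseTarget() in place of the raw
// replication, leaving the block's stored replication factor unchanged.
final int minPipelineSize = conf.getInt("dfs.namenode.minPipelineSize", 1);
final int pipelineSize = Math.max(replication, minPipelineSize);
// ... chooseTarget(src, pipelineSize, clientNode, ...) ...
{code}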

What do you guys think? Are we coming to any consensus on the best approach to 
solving this problem? We are definitely planning to support customers with a 
replica count of 1 in the 2.4 release and do not want them to lose data, 
especially during a large ingest.


> Write resiliency for replica count 1
> ------------------------------------
>
>                 Key: HDFS-5434
>                 URL: https://issues.apache.org/jira/browse/HDFS-5434
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Buddy
>            Priority: Minor
>         Attachments: BlockPlacementPolicyMinPipelineSize.java, 
> BlockPlacementPolicyMinPipelineSizeWithNodeGroup.java, HDFS_5434.patch
>
>
> If a file has a replica count of one, the HDFS client is exposed to write 
> failures if the data node fails during a write. With a pipeline of size one, 
> no recovery is possible if the sole data node dies.
> A simple fix is to force a minimum pipeline size of 2, while leaving the 
> replication count as 1. The implementation for this is fairly non-invasive.
> Although the replica count is one, the block will be written to two data 
> nodes instead of one. If one of the data nodes fails during the write, normal 
> pipeline recovery will ensure that the write succeeds to the surviving data 
> node.
> The existing code in the name node will prune the extra replica when it 
> receives the block received reports for the finalized block from both data 
> nodes. This results in the intended replica count of one for the block.
> This behavior should be controlled by a configuration option such as 
> {{dfs.namenode.minPipelineSize}}.
> This behavior can be implemented in {{FSNamesystem.getAdditionalBlock()}} by 
> ensuring that the pipeline size passed to 
> {{BlockPlacementPolicy.chooseTarget()}} in the replication parameter is:
> {code}
> max(replication, ${dfs.namenode.minPipelineSize})
> {code}


