[ https://issues.apache.org/jira/browse/HADOOP-3799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
dhruba borthakur updated HADOOP-3799:
-------------------------------------

    Attachment: BlockPlacementPluggable2.txt

I incorporated all of Tom's comments. Thanks, Tom. This patch ensures that the following code paths conform to the configured block placement policy (an illustrative sketch of such a policy interface is appended at the end of this message):

1. Allocation of a new block for a file
2. Deletion of excess replicas of a block
3. Creation of new replicas of a block
4. Movement of blocks triggered by the balancer
5. Verification of block placement by fsck

> Design a pluggable interface to place replicas of blocks in HDFS
> ----------------------------------------------------------------
>
>                 Key: HADOOP-3799
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3799
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: BlockPlacementPluggable.txt, BlockPlacementPluggable2.txt
>
>
> The current HDFS code typically places one replica on the local rack, the second replica on a random remote rack, and the third replica on a random node of that remote rack. This algorithm is baked into the NameNode's code. It would be nice to make the block placement algorithm a pluggable interface, which would allow experimentation with different placement algorithms based on workloads, availability guarantees and failure models.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
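For illustration only, here is a minimal Java sketch of what a pluggable block placement policy could look like, covering the same five code paths: target selection serves allocation of new blocks, re-replication, and balancer moves; excess-replica selection serves deletion; and verification serves fsck. All names in the sketch (BlockPlacementPolicy, chooseTargets, chooseReplicaToDelete, verifyPlacement, DatanodeId, SpreadAcrossRacksPolicy) are assumptions made for the example and are not taken from the attached patch.

// Illustrative sketch only; names are hypothetical and not from the attached patch.
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

/** Minimal stand-in for a datanode reference; the real NameNode types differ. */
final class DatanodeId {
  final String host;
  final String rack;
  DatanodeId(String host, String rack) { this.host = host; this.rack = rack; }
  public String toString() { return rack + "/" + host; }
}

/**
 * A pluggable block placement policy. chooseTargets covers allocation of new
 * blocks, re-replication and balancer moves; chooseReplicaToDelete covers
 * removal of excess replicas; verifyPlacement covers the fsck check.
 */
interface BlockPlacementPolicy {
  List<DatanodeId> chooseTargets(String srcPath, int numReplicas,
                                 DatanodeId writer, Collection<DatanodeId> liveNodes);
  DatanodeId chooseReplicaToDelete(Collection<DatanodeId> currentReplicas);
  boolean verifyPlacement(Collection<DatanodeId> currentReplicas, int requiredRacks);
}

/** A deliberately simple example policy: spread replicas across distinct racks. */
class SpreadAcrossRacksPolicy implements BlockPlacementPolicy {
  public List<DatanodeId> chooseTargets(String srcPath, int numReplicas,
                                        DatanodeId writer, Collection<DatanodeId> liveNodes) {
    List<DatanodeId> chosen = new ArrayList<DatanodeId>();
    List<String> usedRacks = new ArrayList<String>();
    for (DatanodeId node : liveNodes) {
      if (chosen.size() == numReplicas) break;
      if (!usedRacks.contains(node.rack)) {   // at most one replica per rack
        chosen.add(node);
        usedRacks.add(node.rack);
      }
    }
    return chosen;
  }

  public DatanodeId chooseReplicaToDelete(Collection<DatanodeId> currentReplicas) {
    // Prefer to drop a replica whose rack already holds another replica.
    List<String> seenRacks = new ArrayList<String>();
    for (DatanodeId node : currentReplicas) {
      if (seenRacks.contains(node.rack)) return node;
      seenRacks.add(node.rack);
    }
    // All replicas are on distinct racks: any one of them may be dropped.
    return currentReplicas.isEmpty() ? null : currentReplicas.iterator().next();
  }

  public boolean verifyPlacement(Collection<DatanodeId> currentReplicas, int requiredRacks) {
    List<String> racks = new ArrayList<String>();
    for (DatanodeId node : currentReplicas) {
      if (!racks.contains(node.rack)) racks.add(node.rack);
    }
    return racks.size() >= requiredRacks;
  }
}

In a real NameNode integration one would expect the concrete policy class to be selected through a configuration key and instantiated via reflection, so that the existing rack-aware placement remains the default out-of-the-box behavior.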