-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/159/
-----------------------------------------------------------

Review request for hadoop-mapreduce, Dhruba Borthakur and Ramkumar Vadali.


Summary
-------

Raid introduces a new dependency between the blocks of a file: the blocks are 
needed to decode one another, so dependent blocks should not be placed on the 
same machine.
The proposed BlockPlacementPolicy does the following (see the sketch after 
this list):
1. When writing parity blocks, it avoids placing parity blocks and their 
source blocks on the same node.
2. When reducing the replication factor, it deletes first the replicas that 
sit on nodes holding other dependent blocks.
3. It does not change how normal files are written; it only behaves 
differently when processing raid files.
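
Below is a minimal, self-contained Java sketch of the two placement decisions 
described above. It is illustrative only: the Node class, the 
chooseTargets/chooseReplicaToDelete names, and the companion-location 
bookkeeping are hypothetical stand-ins, not the actual BlockPlacementPolicyRaid 
API in this patch.

import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical stand-in for a datanode; not the real DatanodeDescriptor.
class Node {
  final String name;
  Node(String name) { this.name = name; }
  @Override public String toString() { return name; }
}

public class RaidPlacementSketch {

  // 1. Choosing targets for a parity block: skip any candidate node that
  // already holds a source block or another parity block of the same stripe,
  // so dependent blocks never end up on one machine.
  public static List<Node> chooseTargets(int numReplicas,
                                         List<Node> candidates,
                                         Set<Node> companionNodes) {
    List<Node> chosen = new ArrayList<Node>();
    Set<Node> excluded = new HashSet<Node>(companionNodes);
    for (Node n : candidates) {
      if (chosen.size() == numReplicas) {
        break;
      }
      if (excluded.contains(n)) {
        continue;               // already holds a dependent block
      }
      chosen.add(n);
      excluded.add(n);          // never pick the same node twice
    }
    return chosen;
  }

  // 2. Reducing the replication factor: prefer to delete the replica whose
  // node holds the most companion blocks, since that copy adds the least to
  // the stripe's recoverability.
  public static Node chooseReplicaToDelete(Collection<Node> replicas,
                                           List<Node> companionLocations) {
    Node worst = null;
    int worstCount = -1;
    for (Node r : replicas) {
      int count = 0;
      for (Node loc : companionLocations) {
        if (loc.equals(r)) {
          count++;
        }
      }
      if (count > worstCount) {
        worstCount = count;
        worst = r;
      }
    }
    return worst;
  }

  public static void main(String[] args) {
    Node a = new Node("dn-a"), b = new Node("dn-b"), c = new Node("dn-c");
    // Source blocks of the stripe already live on dn-a.
    Set<Node> companions = new HashSet<Node>();
    companions.add(a);
    // Parity block targets avoid dn-a.
    System.out.println(chooseTargets(2, java.util.Arrays.asList(a, b, c), companions));
    // When shrinking replication, the replica on dn-a goes first.
    System.out.println(chooseReplicaToDelete(java.util.Arrays.asList(a, b),
                                             new ArrayList<Node>(companions)));
  }
}

Presumably behavior 3 amounts to checking whether the path is a raid source or 
parity file and otherwise keeping the default placement, but that detail is in 
the patch itself.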


Diffs
-----

  trunk/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/namenode/BlockPlacementPolicyRaid.java PRE-CREATION 
  trunk/src/contrib/raid/src/java/org/apache/hadoop/raid/RaidNode.java 1040840 
  trunk/src/contrib/raid/src/test/org/apache/hadoop/hdfs/server/namenode/TestBlockPlacementPolicyRaid.java PRE-CREATION 

Diff: https://reviews.apache.org/r/159/diff


Testing
-------


Thanks,

Scott
