[ https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cosmin Lehene updated HDFS-630:
-------------------------------

    Affects Version/s:     (was: 0.20.1)
               Status: Patch Available  (was: Open)

Adapted for the 0.21 branch. 

Added excludedNodes back to BlockPlacementPolicy. 
Adapted it to use HashMap<Node, Node> instead of List<Node>, since 
BlockPlacementPolicyDefault was changed to use a HashMap. However, I'm not sure 
it's actually supposed to be a HashMap... 
Luckily, Dhruba didn't remove the code that dealt with excludedNodes in 
BlockPlacementPolicyDefault, so I only had to wire up the methods.
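Roughly, the wiring amounts to an extra chooseTarget overload that threads the 
excluded nodes through to the default policy. A simplified sketch (signatures 
and parameter order are approximate, not copied from the patch):

    // Sketch only: approximate shape of the overload added back to
    // BlockPlacementPolicy; names are illustrative.
    public abstract DatanodeDescriptor[] chooseTarget(
        String srcPath,
        int numOfReplicas,
        DatanodeDescriptor writer,
        HashMap<Node, Node> excludedNodes,  // nodes the client asked the NN to skip
        long blocksize);

    // BlockPlacementPolicyDefault already keeps its excludedNodes handling,
    // so the override just forwards the map into the existing logic.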


I also added a "unit" test - it's practically a functional test that spins up a 
MiniDFSCluster with 3 DataNodes and kills one of them before creating the file. 
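For reference, the shape of that test is roughly the following - an 
illustrative sketch against the MiniDFSCluster API, not the test from the 
attached patch:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    // Illustrative sketch: start 3 datanodes, kill one, then write a file.
    // The write should still succeed because the client excludes the dead node.
    public void testWriteWithDeadDataNode() throws IOException {
      Configuration conf = new Configuration();
      MiniDFSCluster cluster = new MiniDFSCluster(conf, 3, true, null);
      try {
        cluster.waitActive();
        cluster.stopDataNode(0);              // kill one of the three datanodes
        FileSystem fs = cluster.getFileSystem();
        Path p = new Path("/test-excluded-datanode");
        // replication 2 so the two surviving datanodes are enough
        FSDataOutputStream out =
            fs.create(p, true, 4096, (short) 2, fs.getDefaultBlockSize());
        out.write(new byte[4096]);
        out.close();
      } finally {
        cluster.shutdown();
      }
    }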

> In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
> datanodes when locating the next block.
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-630
>                 URL: https://issues.apache.org/jira/browse/HDFS-630
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs client
>    Affects Versions: 0.21.0
>            Reporter: Ruyue Ma
>            Assignee: Ruyue Ma
>            Priority: Minor
>             Fix For: 0.21.0
>
>         Attachments: HDFS-630.patch
>
>
> Created from HDFS-200.
> If, during a write, the DFSClient sees that a replica location for a newly 
> allocated block is not connectable, it re-requests the NameNode for a fresh 
> set of replica locations for the block. It tries this 
> dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
> each retry (see DFSClient.nextBlockOutputStream).
> This works well on a reasonably sized cluster; with only a few datanodes, 
> every retry may pick the same dead datanode again and the logic above bails 
> out.
> Our solution: when requesting block locations from the NameNode, the client 
> also passes the datanodes to exclude. The list of excluded (dead) datanodes 
> applies to a single block allocation only.
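In other words, the client-side retry would look roughly like the following. 
This is a hand-wavy sketch of the nextBlockOutputStream flow; the excluded-nodes 
parameter on addBlock() and the tryConnectPipeline() helper are assumptions used 
for illustration, not the committed API:

    // Sketch only (not the committed code): retry loop in the spirit of
    // DFSClient.nextBlockOutputStream, with the proposed excluded-nodes
    // parameter on the NameNode's addBlock() call.
    private LocatedBlock locateBlockExcludingDeadNodes(
        ClientProtocol namenode, String src, String clientName,
        int maxRetries) throws IOException {
      List<DatanodeInfo> excluded = new ArrayList<DatanodeInfo>();
      for (int retry = 0; retry < maxRetries; retry++) {
        LocatedBlock lb = namenode.addBlock(
            src, clientName, excluded.toArray(new DatanodeInfo[0]));  // hypothetical overload
        DatanodeInfo bad = tryConnectPipeline(lb.getLocations());     // hypothetical helper
        if (bad == null) {
          return lb;                     // pipeline established, done
        }
        // Remember the unreachable node and ask the NameNode again, excluding it.
        // The exclusion only applies to this block allocation.
        excluded.add(bad);
      }
      throw new IOException("Unable to create new block");
    }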

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
