[ https://issues.apache.org/jira/browse/HADOOP-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12624801#action_12624801 ]

Hudson commented on HADOOP-3685:
--------------------------------

Integrated in Hadoop-trunk #581 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/581/])

> Unbalanced replication target 
> ------------------------------
>
>                 Key: HADOOP-3685
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3685
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Koji Noguchi
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.17.2
>
>         Attachments: rereplicationPolicy.patch, rereplicationPolicy1.patch
>
>
> In HADOOP-3633, the namenode was assigning some datanodes to receive hundreds
> of blocks in a short period, which caused those datanodes to run out of
> memory (threads). Most of these blocks came from remote racks.
> Looking at the code, 
> {noformat}
>     chooseLocalRack(results.get(1), excludedNodes, blocksize,
>                     maxNodesPerRack, results);
> {noformat}
> was sometimes not choosing the local rack of the writer (source). As a
> result, when a datanode went down, other datanodes on the same rack were
> receiving a large number of blocks from remote racks, as the sketch after
> this quote illustrates.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.