[ https://issues.apache.org/jira/browse/HDFS-5958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13904908#comment-13904908 ]

Alexey Kovyrin commented on HDFS-5958:
--------------------------------------

Why not fix the default ones? The current behavior is clearly a bug: the 
balancer lies to the user's face by promising to move data around, only to 
*silently* fail to do so and then make another promise it could not keep.

> One very large node in a cluster prevents balancer from balancing data
> ----------------------------------------------------------------------
>
>                 Key: HDFS-5958
>                 URL: https://issues.apache.org/jira/browse/HDFS-5958
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: balancer
>    Affects Versions: 2.2.0
>         Environment: Hadoop cluster with 4 nodes: 3 with 500GB drives and one 
> with a 4TB drive.
>            Reporter: Alexey Kovyrin
>
> In a cluster with a set of small nodes and one much larger node, the balancer 
> always selects the large node as the target even though it already has a copy 
> of each block in the cluster.
> This causes the balancer to enter an infinite loop and stop balancing the other 
> nodes, because each balancing iteration selects the same target and then fails 
> to find a single block to move.
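
To make the failure mode concrete, below is a minimal, self-contained Java
sketch of the loop described above. Everything in it (BalancerLoopSketch,
Node, iterate, the 128MB block size) is hypothetical and only models the
reported behavior; it is not the actual Balancer code. The most
under-utilized node is always chosen as the target, and because HDFS never
places two replicas of a block on the same datanode, a target that already
holds a replica of every block leaves nothing movable, so every iteration
makes zero progress and re-selects the same target.

import java.util.*;

// Hypothetical sketch of the balancing loop described in this issue.
// Names, structure, and the 128MB block size are illustrative only;
// this is not Hadoop's actual Balancer code.
public class BalancerLoopSketch {

    static final long BLOCK = 128L << 20; // assumed block size: 128MB

    static class Node {
        final String name;
        final long capacity;
        long used;
        Node(String name, long capacity, long used) {
            this.name = name; this.capacity = capacity; this.used = used;
        }
        double utilization() { return (double) used / capacity; }
    }

    // block id -> names of the datanodes holding a replica of that block
    static final Map<Long, Set<String>> replicas = new HashMap<>();

    // One iteration: pick the most under-utilized node as the target,
    // then look for any block whose replica set does not already include
    // it. Returns false when no block can be moved.
    static boolean iterate(List<Node> nodes) {
        Node target = Collections.min(nodes,
                Comparator.comparingDouble(Node::utilization));
        for (Node source : nodes) {
            if (source == target) continue;
            for (Set<String> holders : replicas.values()) {
                // A datanode holds at most one replica of a block, so a
                // target that already has the block can never receive it.
                if (holders.contains(source.name)
                        && !holders.contains(target.name)) {
                    holders.remove(source.name);
                    holders.add(target.name);
                    source.used -= BLOCK;
                    target.used += BLOCK;
                    return true; // progress made
                }
            }
        }
        return false; // nothing movable; next iteration repeats the choice
    }

    public static void main(String[] args) {
        List<Node> nodes = Arrays.asList(
                new Node("small1", 500L << 30, 400L << 30), // 80% full
                new Node("small2", 500L << 30, 400L << 30), // 80% full
                new Node("small3", 500L << 30, 100L << 30), // 20% full
                new Node("big", 4096L << 30, 500L << 30));  // ~12% full
        // Every block already has a replica on "big".
        for (long id = 0; id < 3; id++)
            replicas.put(id, new HashSet<>(Arrays.asList("small1", "big")));
        for (int i = 0; i < 3; i++)
            System.out.println("iteration " + i + ": moved = " + iterate(nodes));
    }
}

Every iteration prints moved = false: the big node is always re-selected as
the target yet never receives a block, so small1 and small3 stay imbalanced
indefinitely.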


