[ 
https://issues.apache.org/jira/browse/HDFS-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13990433#comment-13990433
 ] 

Binglin Chang commented on HDFS-6159:
-------------------------------------

The fix in the patch has an issue:
bq. I propose to increase datanode capacity up to 6000B and data block size to 
100B.
{code}
  static final int DEFAULT_BLOCK_SIZE = 100;
{code}
This variable is not used anywhere, so changing it does not change the block size. As a result, the capacity is raised to 6000 while the block size actually remains 10 bytes, which means even more blocks need to be moved. That increases the total balancer running time and makes a timeout more likely.
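For the constant to take effect, it would presumably need to be wired into the test's Configuration before the mini-cluster is started, along these lines (a sketch only; the config-key fields are from the Hadoop 2.x {{DFSConfigKeys}} API, and {{initConf}} is an illustrative helper name, not necessarily the method in the patch):
{code}
  static final int DEFAULT_BLOCK_SIZE = 100;

  static void initConf(Configuration conf) {
    // Apply the intended block size to the configuration the
    // MiniDFSCluster will actually use; without this, the test's
    // default 10-byte block size stays in effect.
    conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, DEFAULT_BLOCK_SIZE);
    conf.setInt(DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, DEFAULT_BLOCK_SIZE);
  }
{code}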


> TestBalancerWithNodeGroup.testBalancerWithNodeGroup fails if there is block 
> missing after balancer success
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-6159
>                 URL: https://issues.apache.org/jira/browse/HDFS-6159
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 2.3.0
>            Reporter: Chen He
>            Assignee: Chen He
>             Fix For: 3.0.0, 2.5.0
>
>         Attachments: HDFS-6159-v2.patch, HDFS-6159-v2.patch, HDFS-6159.patch, 
> logs.txt
>
>
> TestBalancerWithNodeGroup.testBalancerWithNodeGroup will report a false 
> negative failure if one or more data blocks are lost after the balancer 
> successfully finishes. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)