Ruslan Dautkhanov created HDFS-12601:
----------------------------------------
Summary: Implement new hdfs balancer's threshold units
Key: HDFS-12601
URL: https://issues.apache.org/jira/browse/HDFS-12601
Project: Hadoop HDFS
Issue Type: Improvement
Components: balancer & mover
Affects Versions: 3.0.0-alpha3, 2.7.4, 2.6.5
Reporter: Ruslan Dautkhanov
The balancer's threshold unit is inappropriate for new clusters that have a lot
of capacity but a small used%.
For example, in one of our new clusters the HDFS capacity is *2.2 PB* and only
*160 TB* is used (across all DNs). With 40 nodes in the cluster, a 1% threshold
for the `hdfs balancer -threshold` parameter equals *0.55 TB* per node.
Some DNs currently hold as little as *3.5 TB* and others as much as *4.6 TB*,
so the actual imbalance is more like *24%*.
The `hdfs balancer -threshold *1*` command says there is nothing to balance
(and I can't pass a value smaller than 1). The balancer thinks the imbalance is
less than 1% (relative to full capacity), when relative to used space it's
actually 24%.
We see that the nodes holding more data actually get more processing tasks
(because of data locality).
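The numbers above can be checked with a quick back-of-the-envelope calculation; this is an illustrative sketch of the arithmetic in this report, not balancer code, and the per-node figures assume evenly sized DNs:

```python
# Illustrative arithmetic for the cluster described above (all values in TB).
# Assumption: all 40 DNs have equal capacity.
cluster_capacity = 2200.0   # ~2.2 PB
num_datanodes = 40
total_used = 160.0          # 160 TB used across all DNs

capacity_per_node = cluster_capacity / num_datanodes      # 55 TB per DN
one_pct_of_capacity = 0.01 * capacity_per_node            # 0.55 TB per DN

min_used, max_used = 3.5, 4.6                             # observed DN usage

# Capacity-relative view (current balancer behavior): each node's used%
# sits within roughly 1% of the cluster-average used%, so -threshold 1
# sees (almost) nothing to move.
avg_used_pct = total_used / cluster_capacity              # ~7.3%

# Used-space view proposed below: 1 - min_used/max_used.
used_space_imbalance = 1 - min_used / max_used            # ~24%
```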
It would be great to introduce a suffix for the balancer's -threshold
parameter:
* 10c ('c' for `c`apacity) would mean 10% of the DN's capacity (current
behavior; 'c' is the default if no suffix is specified, so this change is
backward compatible);
* 10u ('u' for `u`sed-space variance across all DNs) would be measured as
%min_used / %max_used. For the example above, the cluster would get rebalanced
correctly, since the current imbalance by that measure is 24%.
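One way the suffix could be parsed is sketched below; the function name and return shape are hypothetical illustrations of the proposal, not existing balancer code:

```python
# Hypothetical sketch of -threshold suffix parsing for the proposal above.
# parse_threshold is an invented helper, not part of the HDFS balancer.
def parse_threshold(arg: str) -> tuple[float, str]:
    """Return (value, mode): mode 'c' = % of DN capacity (current behavior,
    the default for backward compatibility), 'u' = used-space variance %."""
    arg = arg.strip().lower()
    if arg and arg[-1] in ("c", "u"):
        return float(arg[:-1]), arg[-1]
    return float(arg), "c"  # no suffix: keep today's capacity semantics
```

With this parsing, `-threshold 10` and `-threshold 10c` behave identically, while `-threshold 10u` selects the new used-space measure.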