[ https://issues.apache.org/jira/browse/HBASE-25739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Clara Xiong updated HBASE-25739:
--------------------------------
    Description: TableSkewCostFunction uses the sum, across all tables, of the 
maximum per-server deviation in region count as its measure of unevenness. This 
breaks down in a very common operational scenario. Say we have 100 regions on 
50 nodes, two on each, and we add 50 new nodes, which start with 0 each. The 
maximum deviation from the mean is 1, compared to 99 in the worst case of all 
100 regions on a single server. The normalized cost is 1/99 ≈ 0.010, below the 
default threshold of 0.05, so the balancer won't move anything. The proposal is 
to use the aggregated deviation of the region count per region server to detect 
this scenario, generating a cost of 3.1/31 = 0.1 in this case.  (was: 
TableSkewCostFunction uses the sum, across all tables, of the maximum 
per-server deviation in region count as its measure of unevenness. This breaks 
down in a very common operational scenario. Say we have 100 regions on 50 
nodes, two on each, and we add 50 new nodes, which start with 0 each. The 
maximum deviation from the mean is 1, compared to 99 in the worst case of all 
100 regions on a single server. The normalized cost is 1/99 ≈ 0.010, below the 
default threshold of 0.05, so the balancer won't move anything. The proposal is 
to use the standard deviation of the region count per region server to detect 
this scenario, generating a cost of 3.1/31 = 0.1 in this case.)

> TableSkewCostFunction needs to use aggregated deviation
> -------------------------------------------------------
>
>                 Key: HBASE-25739
>                 URL: https://issues.apache.org/jira/browse/HBASE-25739
>             Project: HBase
>          Issue Type: Sub-task
>          Components: Balancer, master
>            Reporter: Clara Xiong
>            Priority: Major
>
> TableSkewCostFunction uses the sum, across all tables, of the maximum 
> per-server deviation in region count as its measure of unevenness. This 
> breaks down in a very common operational scenario. Say we have 100 regions 
> on 50 nodes, two on each, and we add 50 new nodes, which start with 0 each. 
> The maximum deviation from the mean is 1, compared to 99 in the worst case 
> of all 100 regions on a single server. The normalized cost is 1/99 ≈ 0.010, 
> below the default threshold of 0.05, so the balancer won't move anything. 
> The proposal is to use the aggregated deviation of the region count per 
> region server to detect this scenario, generating a cost of 3.1/31 = 0.1 in 
> this case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)