Alexey Serbin has posted comments on this change. ( http://gerrit.cloudera.org:8080/23909 )

Change subject: KUDU-3728 Add range-aware rebalancing to auto_rebalancer
......................................................................


Patch Set 1: Code-Review+1

(7 comments)

http://gerrit.cloudera.org:8080/#/c/23909/1/src/kudu/master/auto_rebalancer-test.cc
File src/kudu/master/auto_rebalancer-test.cc:

http://gerrit.cloudera.org:8080/#/c/23909/1/src/kudu/master/auto_rebalancer-test.cc@492
PS1, Line 492: TEST_F(AutoRebalancerTest, RangeAwareBuildClusterInfoGroupsByRange) {
It would be great to have a high-level description of what this test scenario 
verifies.  Meanwhile, adding extra comments in the code below could make the 
code more readable, helping readers understand exactly what invariants are 
being verified and why.


http://gerrit.cloudera.org:8080/#/c/23909/1/src/kudu/master/auto_rebalancer-test.cc@534
PS1, Line 534: const string
nit: this could be a const reference, 'const string&' or 'const auto&'
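For example (just illustrative; the actual variable name and initializer in 
the test differ):

  const string& table_name = table_names[i];
  // or, equivalently:
  const auto& table_name = table_names[i];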


http://gerrit.cloudera.org:8080/#/c/23909/1/src/kudu/master/auto_rebalancer-test.cc@592
PS1, Line 592:
In addition to verifying internal invariants as in the newly added test 
scenario, does it make sense to verify that the auto-rebalancer properly 
re-distributes the replicas if they were originally placed in a 
not-so-range-balanced manner?

With KUDU-3476 implemented, the placement of table replicas by the system 
catalog is no longer as range-skewed as it was prior to adding the KUDU-3476 
enhancement.  To achieve a meaningful imbalance in that regard for a newly 
added range partition, it's necessary to artificially disable KUDU-3476, and 
then add a new tablet server -- a completely empty one, without any tablet 
replicas placed yet, while the other tablet servers already host many tablet 
replicas.  With the new empty tablet server in the test cluster, add a new 
range partition to an already existing table, or create a new table with a 
high number of hash buckets per range.  Without the KUDU-3476 improvement, too 
many of the tablet replicas backing the newly added range would be placed on 
the newly added tablet server.  When the auto-rebalancer runs in such an 
environment, it should take care of the imbalance, spreading out the newly 
created tablet replicas among all the available tablet servers in the test 
cluster.

Does it make sense to add a scenario similar to the one described above?
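
If it helps, a rough outline of such a scenario might look like the sketch 
below.  This is only a sketch: the flag used to disable the KUDU-3476 
placement logic, the helper functions, and the kTableName constant are 
assumptions, not the actual identifiers available in this test fixture.

  TEST_F(AutoRebalancerTest, SpreadReplicasOfNewRangeFromEmptyTserver) {
    // Disable the KUDU-3476 range-aware placement in the catalog manager so
    // the replicas of a newly added range end up skewed towards the empty
    // tablet server (the flag name here is an assumption).
    FLAGS_enable_range_replica_placement = false;

    // Create a table with a single range partition and several hash buckets
    // per range (hypothetical helper).
    NO_FATALS(CreateRangePartitionedTable(kTableName));

    // Add one more tablet server: a completely empty one.
    ASSERT_OK(cluster_->AddTabletServer());

    // Add a new range partition: without KUDU-3476, most of its replicas
    // should land on the newly added, empty tablet server
    // (hypothetical helper).
    NO_FATALS(AddRangePartition(kTableName));

    // Let the auto-rebalancer do its work and verify that the replicas of
    // the new range eventually get spread across all the available tablet
    // servers (hypothetical helper).
    ASSERT_EVENTUALLY([&] {
      NO_FATALS(CheckRangeReplicasSpreadAcrossAllTservers(kTableName));
    });
  }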


http://gerrit.cloudera.org:8080/#/c/23909/1/src/kudu/master/auto_rebalancer.cc
File src/kudu/master/auto_rebalancer.cc:

http://gerrit.cloudera.org:8080/#/c/23909/1/src/kudu/master/auto_rebalancer.cc@20
PS1, Line 20: include <cstddef>
nit: move cstddef into the rest of the header group below
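I.e., grouped with the other C++ standard library headers, e.g. (the 
neighboring headers shown here are just placeholders for whatever the file 
already includes):

  #include <cstddef>
  #include <cstdint>
  #include <string>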


http://gerrit.cloudera.org:8080/#/c/23909/1/src/kudu/master/auto_rebalancer.cc@147
PS1, Line 147: false
I guess this should be 'true'.  In a separate change, the 'kudu cluster 
rebalance' CLI tool already has range-aware rebalancing enabled by default: 
https://gerrit.cloudera.org/#/c/23966/


http://gerrit.cloudera.org:8080/#/c/23909/1/src/kudu/master/auto_rebalancer.cc@154
PS1, Line 154:
             : TAG_FLAG(auto_rebalancing_enable_range_rebalancing, advanced);
             : TAG_FLAG(auto_rebalancing_enable_range_rebalancing, runtime);
nit: move these to be adjacent to the auto_rebalancing_enable_range_rebalancing 
flag definition
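
I.e., something along these lines, with the default set to 'true' as per the 
comment above (the description string here is a placeholder; keep whatever 
the patch already has):

  DEFINE_bool(auto_rebalancing_enable_range_rebalancing, true,
              "Whether to use range-aware rebalancing ...");
  TAG_FLAG(auto_rebalancing_enable_range_rebalancing, advanced);
  TAG_FLAG(auto_rebalancing_enable_range_rebalancing, runtime);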


http://gerrit.cloudera.org:8080/#/c/23909/1/src/kudu/master/auto_rebalancer.cc@614
PS1, Line 614: static_cast<uint16_t>(static_cast<uint8_t>
There is no need for two static casts.
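
I.e., a single cast should be enough here, assuming the inner expression is 
already of an 8-bit unsigned type ('value' below is a placeholder for the 
actual expression; adjust if the extra truncation is intended):

  // instead of static_cast<uint16_t>(static_cast<uint8_t>(value)),
  // this should do:
  static_cast<uint16_t>(value)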



--
To view, visit http://gerrit.cloudera.org:8080/23909
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: Idf716bef8c21395da98dd7c9e33001190d9b5587
Gerrit-Change-Number: 23909
Gerrit-PatchSet: 1
Gerrit-Owner: Gabriella Lotz <[email protected]>
Gerrit-Reviewer: Alexey Serbin <[email protected]>
Gerrit-Reviewer: Kudu Jenkins (120)
Gerrit-Comment-Date: Thu, 12 Feb 2026 22:46:29 +0000
Gerrit-HasComments: Yes
