rmdmattingly commented on code in PR #6597:
URL: https://github.com/apache/hbase/pull/6597#discussion_r1912572293
##########
hbase-balancer/src/test/java/org/apache/hadoop/hbase/master/balancer/StochasticBalancerTestBase.java:
##########
@@ -70,38 +69,10 @@ protected void testWithClusterWithIteration(int numNodes, int numRegions, int nu
assertFullyBalancedForReplicas);
}
- protected void testWithCluster(Map<ServerName, List<RegionInfo>> serverMap,
-   RackManager rackManager, boolean assertFullyBalanced, boolean assertFullyBalancedForReplicas) {
- List<ServerAndLoad> list = convertToList(serverMap);
- LOG.info("Mock Cluster : " + printMock(list) + " " + printStats(list));
-
- loadBalancer.setRackManager(rackManager);
- // Run the balancer.
- Map<TableName, Map<ServerName, List<RegionInfo>>> LoadOfAllTable =
- (Map) mockClusterServersWithTables(serverMap);
- List<RegionPlan> plans = loadBalancer.balanceCluster(LoadOfAllTable);
- assertNotNull("Initial cluster balance should produce plans.", plans);
-
- // Check to see that this actually got to a stable place.
- if (assertFullyBalanced || assertFullyBalancedForReplicas) {
- // Apply the plan to the mock cluster.
- List<ServerAndLoad> balancedCluster = reconcile(list, plans, serverMap);
-
- // Print out the cluster loads to make debugging easier.
- LOG.info("Mock after Balance : " + printMock(balancedCluster));
-
- if (assertFullyBalanced) {
- assertClusterAsBalanced(balancedCluster);
- LoadOfAllTable = (Map) mockClusterServersWithTables(serverMap);
-       List<RegionPlan> secondPlans = loadBalancer.balanceCluster(LoadOfAllTable);
-       assertNull("Given a requirement to be fully balanced, second attempt at plans should "
- + "produce none.", secondPlans);
- }
-
- if (assertFullyBalancedForReplicas) {
- assertRegionReplicaPlacement(serverMap, rackManager);
- }
- }
+ protected void increaseMaxRunTimeOrFail() {
+ long current = getCurrentMaxRunTimeMs();
+ assertTrue(current < MAX_MAX_RUN_TIME_MS);
+    setMaxRunTime(Math.min(MAX_MAX_RUN_TIME_MS, current * 2));
}
Review Comment:
My goal here was just to make this test framework ambitious, but forgiving: if your cluster is too complicated to be evaluated meaningfully within the initial runtime, then we should bump the runtime up before failing.
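To illustrate the capped-doubling idea described in the comment above, here is a minimal standalone Java sketch. The class name, the 30-second ceiling, and the getter are illustrative assumptions, not the actual HBase test-framework API:

```java
// Illustrative sketch of a capped-doubling runtime budget for a test
// framework. Names and the 30s cap are assumptions, not HBase's actual code.
class MaxRuntimeBackoff {
  static final long MAX_MAX_RUN_TIME_MS = 30_000L; // assumed hard ceiling

  private long maxRunTimeMs;

  MaxRuntimeBackoff(long initialMs) {
    this.maxRunTimeMs = initialMs;
  }

  // Doubles the allowed runtime, capped at MAX_MAX_RUN_TIME_MS; throws once
  // the ceiling has already been reached, i.e. the test should finally fail.
  void increaseMaxRunTimeOrFail() {
    if (maxRunTimeMs >= MAX_MAX_RUN_TIME_MS) {
      throw new AssertionError("already at max runtime: " + maxRunTimeMs);
    }
    maxRunTimeMs = Math.min(MAX_MAX_RUN_TIME_MS, maxRunTimeMs * 2);
  }

  long get() {
    return maxRunTimeMs;
  }

  public static void main(String[] args) {
    MaxRuntimeBackoff budget = new MaxRuntimeBackoff(5_000L);
    budget.increaseMaxRunTimeOrFail();
    System.out.println(budget.get()); // 10000
    budget.increaseMaxRunTimeOrFail();
    budget.increaseMaxRunTimeOrFail();
    System.out.println(budget.get()); // 30000, capped by the ceiling
  }
}
```

A test that keeps timing out would call `increaseMaxRunTimeOrFail()` between attempts, getting progressively more runtime until the ceiling is hit, at which point it fails for real.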
##########
hbase-balancer/src/test/java/org/apache/hadoop/hbase/master/balancer/BalancerTestBase.java:
##########
@@ -285,6 +285,9 @@ protected List<ServerAndLoad> convertToList(final Map<ServerName, List<RegionInf
}
protected String printMock(List<ServerAndLoad> balancedCluster) {
+ if (balancedCluster == null) {
+ return "null";
+ }
Review Comment:
A null list here likely means a test failure, but you're likely to find a clearer failure message somewhere other than here.
##########
hbase-balancer/src/test/java/org/apache/hadoop/hbase/master/balancer/StochasticBalancerTestBase.java:
##########
@@ -70,38 +69,10 @@ protected void testWithClusterWithIteration(int numNodes, int numRegions, int nu
assertFullyBalancedForReplicas);
}
- protected void testWithCluster(Map<ServerName, List<RegionInfo>> serverMap,
Review Comment:
I don't think there are any tests where it actually makes sense to run without iteration.
##########
hbase-balancer/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancerHeterogeneousCost.java:
##########
@@ -306,6 +245,10 @@ static class StochasticLoadTestBalancer extends StochasticLoadBalancer {
private FairRandomCandidateGenerator fairRandomCandidateGenerator =
new FairRandomCandidateGenerator();
+ StochasticLoadTestBalancer() {
+ super(new DummyMetricsStochasticBalancer());
+ }
Review Comment:
The balancer test suite actually couldn't be run in its entirety without this addition, because otherwise there are JMX conflicts. So this is just an unrelated bug fix that I forgot to separate out in my planning.
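For context, the usual way such JMX conflicts are avoided in tests is to inject a no-op metrics implementation through the constructor, which is the pattern this change follows. A hedged sketch of that pattern, using invented interface and class names rather than HBase's actual types:

```java
// Sketch of the dependency-injection pattern the fix uses: tests pass a no-op
// metrics object so no MBean registration happens. Interface and class names
// here are invented for illustration; only the pattern matches the PR.
interface BalancerMetrics {
  void recordBalancerRun(long durationMs);
}

// A production implementation would register a JMX MBean; registering the
// same object name twice in one JVM raises InstanceAlreadyExistsException,
// which is the kind of conflict a full test-suite run can hit.
class NoOpBalancerMetrics implements BalancerMetrics {
  @Override
  public void recordBalancerRun(long durationMs) {
    // intentionally empty: nothing is registered, so many balancer
    // instances can coexist in a single test JVM
  }
}

class SketchBalancer {
  private final BalancerMetrics metrics;

  SketchBalancer(BalancerMetrics metrics) {
    this.metrics = metrics;
  }

  void balance() {
    metrics.recordBalancerRun(0L); // safe regardless of instance count
  }

  public static void main(String[] args) {
    // Two balancers in one JVM: fine, because the metrics are no-ops.
    new SketchBalancer(new NoOpBalancerMetrics()).balance();
    new SketchBalancer(new NoOpBalancerMetrics()).balance();
    System.out.println("ok");
  }
}
```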
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]