qqu0127 commented on code in PR #2189:
URL: https://github.com/apache/helix/pull/2189#discussion_r1028440466
##########
helix-core/src/main/java/org/apache/helix/controller/rebalancer/waged/WagedRebalancer.java:
##########
@@ -501,21 +512,64 @@ private void calculateAndUpdateBaseline(ClusterModel clusterModel, RebalanceAlgo
     _baselineCalcLatency.endMeasuringLatency();
     LOG.info("Global baseline calculation completed and has been persisted into metadata store.");
-    if (isBaselineChanged && shouldSchedulePartialRebalance) {
+    if (isBaselineChanged && shouldTriggerMainPipeline) {
       LOG.info("Schedule a new rebalance after the new baseline calculation has finished.");
-      RebalanceUtil.scheduleOnDemandPipeline(clusterName, 0L, false);
+      RebalanceUtil.scheduleOnDemandPipeline(clusterData.getClusterName(), 0L, false);
     }
   }

-  private Map<String, ResourceAssignment> partialRebalance(
+  private void partialRebalance(
       ResourceControllerDataProvider clusterData, Map<String, Resource> resourceMap,
       Set<String> activeNodes, final CurrentStateOutput currentStateOutput, RebalanceAlgorithm algorithm)
       throws HelixRebalanceException {
+    // If partial rebalance is async and the previous result is not completed yet,
+    // do not start another partial rebalance.
+    if (_asyncPartialRebalanceEnabled && _asyncPartialRebalanceResult != null
+        && !_asyncPartialRebalanceResult.isDone()) {
+      return;
Review Comment:
Review Comment:
Yes, I'm trying to understand what it means if we enter this block.
Essentially, if the sync pipeline is triggered too frequently, we'll see more
attempts to start partial rebalance while there is one still in progress. Is
that right?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]