xyuanlu commented on code in PR #2106:
URL: https://github.com/apache/helix/pull/2106#discussion_r881070767
##########
helix-core/src/main/java/org/apache/helix/controller/dataproviders/BaseControllerDataProvider.java:
##########
@@ -241,13 +244,87 @@ private void refreshClusterConfig(final HelixDataAccessor accessor,
     if (_propertyDataChangedMap.get(HelixConstants.ChangeType.CLUSTER_CONFIG).getAndSet(false)) {
       _clusterConfig = accessor.getProperty(accessor.keyBuilder().clusterConfig());
       refreshedType.add(HelixConstants.ChangeType.CLUSTER_CONFIG);
+      // TODO: This is a temp function to clean up incompatible batched disabled instances format.
+      // Remove in later version.
+      if (checkBatchedDisabledInstanceFormat(_clusterConfig) && updateBatchDisableFormat(accessor)) {
+        // read from zk one more time
+        LogUtil.logInfo(logger, getClusterEventId(), String
+            .format("Clean ClusterConfig change for cluster %s, pipeline %s", _clusterName,
+                getPipelineName()));
+        _clusterConfig = accessor.getProperty(accessor.keyBuilder().clusterConfig());
Review Comment:
TFTR.
WAGED will read the disabled time in BestPossibleComputeState. If we keep the
uncleaned format, it will cause issues for the pipeline run.
I think the question is: is it acceptable to have several failed pipeline runs
until the change to the cluster config is picked up?
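To illustrate the concern, here is a minimal, self-contained sketch of the "detect legacy format, rewrite, then re-read" pattern the diff adds to refreshClusterConfig. All names here (the in-memory store, isLegacyFormat, migrate, refresh) are hypothetical stand-ins for the real ZooKeeper-backed ClusterConfig and the helpers in the PR; it only shows that the cache is refreshed a second time exactly when the stored copy was actually migrated.

```java
import java.util.concurrent.atomic.AtomicReference;

public class BatchDisableCleanupSketch {

  // Stand-in for the ClusterConfig znode stored in ZooKeeper (hypothetical).
  static final AtomicReference<String> store = new AtomicReference<>("LEGACY_FORMAT");

  // Analogous to checkBatchedDisabledInstanceFormat: detect the incompatible format.
  static boolean isLegacyFormat(String config) {
    return "LEGACY_FORMAT".equals(config);
  }

  // Analogous to updateBatchDisableFormat: rewrite the stored copy, true on success.
  static boolean migrate() {
    return store.compareAndSet("LEGACY_FORMAT", "CLEAN_FORMAT");
  }

  // Analogous to refreshClusterConfig: cache the config, cleaning it first if needed.
  static String refresh() {
    String config = store.get();
    if (isLegacyFormat(config) && migrate()) {
      // Read from the store one more time so the cache holds the cleaned copy.
      config = store.get();
    }
    return config;
  }

  public static void main(String[] args) {
    // First refresh migrates the stored copy; later refreshes see the clean format.
    System.out.println(refresh());
    System.out.println(refresh());
  }
}
```

The re-read after a successful migration is what keeps the pipeline from computing against the stale, uncleaned format in the same run; without it, the cache would only be corrected when the next CLUSTER_CONFIG change event fires.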
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]