X-czh commented on code in PR #581:
URL: https://github.com/apache/flink-kubernetes-operator/pull/581#discussion_r1183622806
##########
flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/JobVertexScaler.java:
##########
@@ -93,15 +94,27 @@ public int computeScaleTargetParallelism(
         LOG.debug("Target processing capacity for {} is {}", vertex, targetCapacity);
         double scaleFactor = targetCapacity / averageTrueProcessingRate;
         double minScaleFactor = 1 - conf.get(MAX_SCALE_DOWN_FACTOR);
+        double maxScaleFactor = 1 + conf.get(MAX_SCALE_UP_FACTOR);
         if (scaleFactor < minScaleFactor) {
             LOG.debug(
                     "Computed scale factor of {} for {} is capped by maximum scale down factor to {}",
                     scaleFactor,
                     vertex,
                     minScaleFactor);
             scaleFactor = minScaleFactor;
+        } else if (scaleFactor > maxScaleFactor) {
+            LOG.debug(
+                    "Computed scale factor of {} for {} is capped by maximum scale up factor to {}",
+                    scaleFactor,
+                    vertex,
+                    maxScaleFactor);
+            scaleFactor = maxScaleFactor;
         }
+        // Cap target capacity according to the capped scale factor
+        double cappedTargetCapacity = averageTrueProcessingRate * scaleFactor;
+        LOG.debug("Capped target processing capacity for {} is {}", vertex, cappedTargetCapacity);
Review Comment:
If we cap the scale-up factor, the actual parallelism increase will be smaller, so the
expected processing rate will also be proportionally smaller and needs to be capped as
well. And thanks for the advice, I'll add a UT for that.
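
To illustrate the point, here is a minimal, self-contained sketch of the capping
behaviour described above. The class, method, and parameter names are made up for the
example; the real operator reads the scale-down/scale-up factors from the Flink
configuration and works on the full per-vertex metrics.

// Hypothetical standalone example of capping the scale factor on both sides and
// deriving the capped expected processing capacity from it.
public class ScaleCapExample {

    static double capScaleFactor(
            double targetCapacity,
            double averageTrueProcessingRate,
            double maxScaleDownFactor,
            double maxScaleUpFactor) {
        double scaleFactor = targetCapacity / averageTrueProcessingRate;
        double minScaleFactor = 1 - maxScaleDownFactor;
        double maxScaleFactor = 1 + maxScaleUpFactor;
        // Clamp the scale factor into [minScaleFactor, maxScaleFactor].
        return Math.max(minScaleFactor, Math.min(maxScaleFactor, scaleFactor));
    }

    public static void main(String[] args) {
        double averageTrueProcessingRate = 1_000;
        double targetCapacity = 5_000; // would imply a 5x scale-up if uncapped
        double cappedScaleFactor =
                capScaleFactor(targetCapacity, averageTrueProcessingRate, 0.6, 1.0);
        // With a max scale-up factor of 1.0 the scale factor is capped at 2.0, so the
        // expected (capped) processing capacity becomes 2,000 rather than 5,000.
        double cappedTargetCapacity = averageTrueProcessingRate * cappedScaleFactor;
        System.out.println("Capped scale factor: " + cappedScaleFactor);
        System.out.println("Capped target capacity: " + cappedTargetCapacity);
    }
}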