gyfora commented on code in PR #581:
URL: https://github.com/apache/flink-kubernetes-operator/pull/581#discussion_r1183604538


##########
flink-kubernetes-operator-autoscaler/src/main/java/org/apache/flink/kubernetes/operator/autoscaler/JobVertexScaler.java:
##########
@@ -93,15 +94,27 @@ public int computeScaleTargetParallelism(
         LOG.debug("Target processing capacity for {} is {}", vertex, 
targetCapacity);
         double scaleFactor = targetCapacity / averageTrueProcessingRate;
         double minScaleFactor = 1 - conf.get(MAX_SCALE_DOWN_FACTOR);
+        double maxScaleFactor = 1 + conf.get(MAX_SCALE_UP_FACTOR);
         if (scaleFactor < minScaleFactor) {
             LOG.debug(
                     "Computed scale factor of {} for {} is capped by maximum 
scale down factor to {}",
                     scaleFactor,
                     vertex,
                     minScaleFactor);
             scaleFactor = minScaleFactor;
+        } else if (scaleFactor > maxScaleFactor) {
+            LOG.debug(
+                    "Computed scale factor of {} for {} is capped by maximum 
scale up factor to {}",
+                    scaleFactor,
+                    vertex,
+                    maxScaleFactor);
+            scaleFactor = maxScaleFactor;
         }
 
+        // Cap target capacity according to the capped scale factor
+        double cappedTargetCapacity = averageTrueProcessingRate * scaleFactor;
+        LOG.debug("Capped target processing capacity for {} is {}", vertex, 
cappedTargetCapacity);

Review Comment:
   Also, it would be good to have a unit test for the changed behaviour.
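
   Since the request is for a test of the new capping behaviour, here is a minimal, self-contained sketch of the assertions such a test could make. The capScaleFactor helper and the concrete factor values (0.6 down, 1.0 up) are hypothetical stand-ins chosen for illustration; they are not the autoscaler's actual API or defaults.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ScaleFactorCappingSketchTest {

    // Hypothetical helper mirroring the capping logic from the diff above;
    // a stand-in for JobVertexScaler#computeScaleTargetParallelism, not the real API.
    private static double capScaleFactor(
            double scaleFactor, double maxScaleDownFactor, double maxScaleUpFactor) {
        double minScaleFactor = 1 - maxScaleDownFactor;
        double maxScaleFactor = 1 + maxScaleUpFactor;
        if (scaleFactor < minScaleFactor) {
            return minScaleFactor;
        } else if (scaleFactor > maxScaleFactor) {
            return maxScaleFactor;
        }
        return scaleFactor;
    }

    @Test
    void scaleFactorIsCappedByMaxScaleUpFactor() {
        // With an assumed MAX_SCALE_UP_FACTOR of 1.0, a computed factor of 5.0 is capped to 2.0.
        assertEquals(2.0, capScaleFactor(5.0, 0.6, 1.0), 1e-9);
        // A factor inside the allowed range passes through unchanged.
        assertEquals(1.5, capScaleFactor(1.5, 0.6, 1.0), 1e-9);
    }

    @Test
    void scaleFactorIsCappedByMaxScaleDownFactor() {
        // With an assumed MAX_SCALE_DOWN_FACTOR of 0.6, a computed factor of 0.1 is capped to 0.4.
        assertEquals(0.4, capScaleFactor(0.1, 0.6, 1.0), 1e-9);
    }

    @Test
    void cappedTargetCapacityFollowsCappedScaleFactor() {
        // The diff recomputes the target capacity from the capped factor:
        // cappedTargetCapacity = averageTrueProcessingRate * scaleFactor.
        double averageTrueProcessingRate = 100.0;
        assertEquals(200.0, averageTrueProcessingRate * capScaleFactor(5.0, 0.6, 1.0), 1e-9);
    }
}

   An actual test would exercise JobVertexScaler#computeScaleTargetParallelism with MAX_SCALE_UP_FACTOR and MAX_SCALE_DOWN_FACTOR set in the configuration, rather than a copy of the capping logic.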


