zentol commented on code in PR #22169:
URL: https://github.com/apache/flink/pull/22169#discussion_r1146047353


##########
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/adaptive/Executing.java:
##########
@@ -123,9 +123,18 @@ private void handleDeploymentFailure(ExecutionVertex executionVertex, JobExcepti
     }
 
     @Override
-    public void notifyNewResourcesAvailable() {
-        if (context.canScaleUp(getExecutionGraph())) {
-            getLogger().info("New resources are available. Restarting job to scale up.");
+    public void onNewResourcesAvailable() {
+        maybeRescale();
+    }
+
+    @Override
+    public void onNewResourceRequirements() {
+        maybeRescale();
+    }

Review Comment:
   > In case when some vertices were up-scaled and some downscaled, the cumulative parallelism won't change
   
   This now raises an interesting question w.r.t. `minParallelismIncrease`.
   
   If the increase is smaller than the defined minimum, should we rescale? In https://github.com/apache/flink/pull/22169#discussion_r1144769916 we said that a scale down should happen regardless of the minimum.
   So what takes precedence now? :thinking:
   
   Maybe we can stick to "scale down whenever possible" and revisit this separately. We need to think through the configuration to handle this, and I don't want to block the PR. It should ultimately end up being a small change contained to the `RescalingController`.
   I do think it's likely that someone will run into this, but even in the current state you could work around it by separating scale up/down operations.
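
   As a purely illustrative sketch (not the actual `RescalingController` interface; the class name, method signature, and `Map`-based parallelism representation below are hypothetical stand-ins), "scale down whenever possible" taking precedence over `minParallelismIncrease` could look roughly like this:

   ```java
   // Hypothetical sketch: one way to express "any scale down wins, otherwise
   // require the configured minimum cumulative parallelism increase".
   // Not the actual Flink API.
   import java.util.Map;

   final class ScaleDownFirstRescalingSketch {

       private final int minParallelismIncrease;

       ScaleDownFirstRescalingSketch(int minParallelismIncrease) {
           this.minParallelismIncrease = minParallelismIncrease;
       }

       /** Decide whether the proposed per-vertex parallelisms warrant a restart. */
       boolean shouldRescale(Map<String, Integer> current, Map<String, Integer> proposed) {
           boolean anyDecrease =
                   proposed.entrySet().stream()
                           .anyMatch(e -> e.getValue() < current.getOrDefault(e.getKey(), 0));
           if (anyDecrease) {
               // Scale down whenever possible, even if the cumulative change
               // stays below the configured minimum increase.
               return true;
           }
           int cumulativeIncrease =
                   proposed.entrySet().stream()
                           .mapToInt(e -> e.getValue() - current.getOrDefault(e.getKey(), 0))
                           .sum();
           return cumulativeIncrease >= minParallelismIncrease;
       }
   }
   ```

   With this ordering, a mixed up/down change such as the one quoted above would always trigger a restart, regardless of the configured minimum increase.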


