[ 
https://issues.apache.org/jira/browse/FLINK-35285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17849352#comment-17849352
 ] 

Trystan commented on FLINK-35285:
---------------------------------

[~gyfora] is there maybe another setting that can help tune this? At least on 
1.7.0, I often find that a max scale-down factor of 0.5 (which seems 
essentially mandatory given the current computations) leads to an overshoot, 
so the vertex then scales back up. For example: 40 -> 20 -> 40 -> 24.

I'd prefer to be conservative on the scale down and set it to 0.2 or 0.3. In 
the case of maxParallelism=120, 0.3 would work for _this_ scale down, but 0.2 
would not - we would effectively have a minParallelism of 40 and never go 
below it. Yet in the case of current=120, max=120, maxScaleDown=0.2, it works 
just fine - it scales to 96. The "good" scale-down factors seem highly 
dependent on both the maxParallelism and the currentParallelism.
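
To make that concrete, here's a minimal standalone sketch of the divisor hunt 
(just an illustration, not the operator's actual code; it assumes 
currentParallelism=40 as in the example above and a lower bound of roughly 
currentParallelism * (1 - maxScaleDown)):
{code:java}
// Minimal standalone sketch of the key group divisor hunt (simplified).
public class DivisorHuntSketch {

    // Hunt upwards from the scale-down lower bound for a parallelism that evenly
    // divides maxParallelism. Nothing stops it from landing on currentParallelism.
    static int hunt(int lowerBound, int maxParallelism) {
        for (int p = lowerBound; p <= maxParallelism / 2; p++) {
            if (maxParallelism % p == 0) {
                return p;
            }
        }
        return lowerBound;
    }

    public static void main(String[] args) {
        // currentParallelism = 40, maxParallelism = 120
        System.out.println(hunt(28, 120)); // maxScaleDown=0.3 -> bound 28 -> returns 30, a real scale down
        System.out.println(hunt(32, 120)); // maxScaleDown=0.2 -> bound 32 -> returns 40, stuck at currentParallelism
    }
}
{code}
With 0.3 the bound is 28 and the hunt lands on 30; with 0.2 the bound is 32 and 
the first divisor of 120 it finds is 40 itself, so the vertex never scales down.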

It seems that the problem lies in this loop: 
https://github.com/apache/flink-kubernetes-operator/blob/fe3d24e4500d6fcaed55250ccc816546886fd1cf/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/JobVertexScaler.java#L299-L303

If we add 
{code:java}
&& p < currentParallelism {code}
to the loop, we get the expected behavior on scale down. Of course, other 
key-group-optimized scale ups then break. Perhaps there need to be separate 
loops for scale up and scale down, along the lines of the rough sketch below?
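
Just to illustrate the idea (a rough, untested sketch reusing the variable 
names from the linked loop, not a proposed patch):
{code:java}
// Rough sketch: only accept divisors that move in the intended direction.
if (newParallelism < currentParallelism) {
    // Scale down: stay strictly below currentParallelism so some progress is guaranteed.
    for (int p = newParallelism; p <= maxParallelism / 2 && p <= upperBound && p < currentParallelism; p++) {
        if (maxParallelism % p == 0) {
            return p;
        }
    }
} else {
    // Scale up: keep the existing behavior of hunting up to maxParallelism / 2.
    for (int p = newParallelism; p <= maxParallelism / 2 && p <= upperBound; p++) {
        if (maxParallelism % p == 0) {
            return p;
        }
    }
}
{code}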

Is there something obvious that I'm missing, something I can tune better?

> Autoscaler key group optimization can interfere with scale-down.max-factor
> --------------------------------------------------------------------------
>
>                 Key: FLINK-35285
>                 URL: https://issues.apache.org/jira/browse/FLINK-35285
>             Project: Flink
>          Issue Type: Bug
>          Components: Kubernetes Operator
>            Reporter: Trystan
>            Priority: Minor
>
> When setting a less aggressive scale-down limit, the key group optimization 
> can prevent a vertex from scaling down at all. It hunts from the target 
> upwards to maxParallelism/2, and will always find currentParallelism again.
>  
> A simple test trying to scale down from a parallelism of 60 with a 
> scale-down.max-factor of 0.2:
> {code:java}
> assertEquals(48, JobVertexScaler.scale(60, inputShipStrategies, 360, .8, 8, 360)); {code}
>  
> It seems reasonable to make a good attempt to spread data evenly across 
> subtasks, but not at the expense of a total deadlock. The problem is that 
> during scale down the loop doesn't actually ensure that newParallelism will 
> be < currentParallelism. The only workaround is to set a scale-down factor 
> large enough that it finds the next lowest divisor of the maxParallelism.
>  
> Clunky, but it would ensure at least some progress can be made. Another 
> existing test now fails with this change, but it illustrates the point:
> {code:java}
> for (int p = newParallelism; p <= maxParallelism / 2 && p <= upperBound; p++) {
>     if ((scaleFactor < 1 && p < currentParallelism) || (scaleFactor > 1 && p > currentParallelism)) {
>         if (maxParallelism % p == 0) {
>             return p;
>         }
>     }
> } {code}
>  
> Perhaps this is by design and not a bug, but total failure to scale down in 
> order to keep optimized key groups does not seem ideal.
>  
> Key group optimization block:
> [https://github.com/apache/flink-kubernetes-operator/blob/fe3d24e4500d6fcaed55250ccc816546886fd1cf/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/JobVertexScaler.java#L296C1-L303C10]


