[ https://issues.apache.org/jira/browse/FLINK-34563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825380#comment-17825380 ]

Gyula Fora commented on FLINK-34563:
------------------------------------

Copying over my comment from GitHub for completeness:



I have some concerns about this change:
 # It doesn't work with custom slot sharing configurations, which are very 
common (see the sketch below).
 # It provides almost no benefit with large TaskManager sizes / a low number 
of task slots per TaskManager.
 # It goes against a basic design principle of the autoscaler, namely that we 
do not scale vertices beyond their target capacity. This ties into 
[@mxm|https://github.com/mxm]'s question about why the logic wouldn't apply to 
all vertices.

Taking that one step further, why don't we scale all vertices to the same 
parallelism at that point? That would naturally cause more resource usage and 
less throughput. By the same logic, I don't think we should scale even the 
largest vertices any further.
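
To make the first point concrete, here is a minimal sketch (a hypothetical job, 
not taken from the PR) of why custom slot sharing groups break the assumption 
that the number of required slots equals the highest vertex parallelism:

{code:java}
// Hypothetical job, illustration only: with separate slot sharing groups the
// job needs the SUM of the per-group parallelism maxima, so sizing the
// TaskManagers around the largest vertex alone does not match actual slot usage.
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotSharingExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromSequence(0, 1_000_000)
                .setParallelism(1).slotSharingGroup("cpu-heavy")
                .map(x -> x * 2).returns(Types.LONG)
                .setParallelism(8).slotSharingGroup("cpu-heavy")
                .filter(x -> x % 3 == 0)
                .setParallelism(4).slotSharingGroup("io-heavy")
                .print()
                .setParallelism(2).slotSharingGroup("io-heavy");

        // Required slots: max(1, 8) for "cpu-heavy" + max(4, 2) for "io-heavy" = 12,
        // even though the highest single-vertex parallelism is only 8.
        env.execute("slot-sharing-example");
    }
}
{code}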

> Autoscaling decision improvement
> --------------------------------
>
>                 Key: FLINK-34563
>                 URL: https://issues.apache.org/jira/browse/FLINK-34563
>             Project: Flink
>          Issue Type: Improvement
>          Components: Kubernetes Operator
>    Affects Versions: kubernetes-operator-1.7.0
>            Reporter: Yang LI
>            Priority: Minor
>              Labels: pull-request-available
>
> Hi, I'd like to propose a minor improvement based on my autoscaling 
> experiments. The idea is to identify the vertex with the highest 
> parallelism and match it to the maximum parallelism supported by our 
> task managers' slots.
> The primary goal of this enhancement is to prevent any task slots from 
> remaining unused after the Flink autoscaler performs a rescaling operation. 
> I've already tested this modification in a custom build of the operator, 
> excluding the memory tuning feature. However, I believe it could be 
> beneficial, especially in scenarios where the memory tuning feature is not 
> enabled.
> I have also prepared this small PR :)
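
Reading the quoted description, the core of the proposal is an alignment step: 
round the proposed parallelism of the largest vertex up to a multiple of the 
slots per TaskManager (taskmanager.numberOfTaskSlots), so the TaskManagers 
that get allocated anyway have no idle slots. A minimal sketch of that 
arithmetic (illustration only; the class and method names are hypothetical and 
this is not the PR's actual code; it also assumes the default slot sharing 
group, where the required slot count equals the highest vertex parallelism):

{code:java}
// Hypothetical sketch of the "fill the task slots" rounding described above.
public class SlotFillingSketch {

    /** Round the proposed parallelism up so it fills whole TaskManagers. */
    static int alignToTaskSlots(int proposedMaxParallelism, int slotsPerTaskManager) {
        int taskManagers =
                (int) Math.ceil((double) proposedMaxParallelism / slotsPerTaskManager);
        return taskManagers * slotsPerTaskManager;
    }

    public static void main(String[] args) {
        // e.g. the autoscaler proposes parallelism 10 for the largest vertex and
        // each TaskManager has 4 slots: 3 TaskManagers are started anyway, so the
        // vertex is bumped to 12 to avoid leaving 2 slots idle.
        System.out.println(alignToTaskSlots(10, 4)); // prints 12
    }
}
{code}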



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
