[ https://issues.apache.org/jira/browse/FLINK-24150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793103#comment-17793103 ]

chaoran.su commented on FLINK-24150:
------------------------------------

[~asardaes] this is not the same: if we need the Flink cluster's pods to be 
Guaranteed QoS level pods, we cannot set a limit-factor, since that would make 
the pod's QoS level Burstable. For the Flink community, there is currently no 
way to configure resources beyond the Flink configuration and still keep the 
pod at the Guaranteed QoS level.
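To make the QoS point concrete, here is a minimal sketch (not Flink or Kubernetes code; the class and method names are illustrative) of how Kubernetes classifies a pod, simplified to a single CPU resource: Guaranteed requires limits equal to requests for every container, so any limit factor greater than 1 demotes the pod to Burstable.

```java
// Simplified sketch of Kubernetes QoS classification for one CPU resource.
// Assumption: a single container and only the CPU resource are considered.
public class QosSketch {
    enum QosClass { GUARANTEED, BURSTABLE, BEST_EFFORT }

    static QosClass qosFor(double cpuRequest, double cpuLimit) {
        if (cpuRequest == 0 && cpuLimit == 0) {
            // No requests and no limits set at all.
            return QosClass.BEST_EFFORT;
        }
        // Guaranteed only when the limit exactly equals the request.
        return (cpuRequest == cpuLimit) ? QosClass.GUARANTEED : QosClass.BURSTABLE;
    }

    public static void main(String[] args) {
        System.out.println(qosFor(1.0, 1.0)); // factor 1 keeps request == limit
        System.out.println(qosFor(1.0, 1.5)); // a limit factor of 1.5 breaks equality
    }
}
```

With a limit factor of exactly 1 the request and limit coincide, which is why the factor cannot be raised without losing the Guaranteed class.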

> Support to configure cpu resource request and limit in pod template
> -------------------------------------------------------------------
>
>                 Key: FLINK-24150
>                 URL: https://issues.apache.org/jira/browse/FLINK-24150
>             Project: Flink
>          Issue Type: New Feature
>          Components: Deployment / Kubernetes
>            Reporter: Yang Wang
>            Priority: Major
>
> Why does Flink overwrite the memory resources defined in the pod template?
> The major reason is that Flink needs to ensure consistency between the Flink 
> configuration
> ({{taskmanager.memory.process.size}}, {{kubernetes.taskmanager.cpu}}) and the 
> pod template's resource settings. Since users can specify either the total 
> process memory or detailed memory[2], Flink calculates the pod resources 
> internally.
>  
> For the CPU case the template’s requests/limits should have priority if they 
> are specified. The factor could still be used if the template doesn’t specify 
> anything. The logic could be something like this:
>  # To choose the CPU request:
>  ## Read the pod template first.
>  ## If the template doesn't specify anything, read from {{kubernetes.taskmanager.cpu}}.
>  ## If the configuration is not specified either, fall back to the default.
>  # To choose the CPU limit:
>  ## Read from the template first.
>  ## If the template doesn't specify anything, apply the factor to the request 
> chosen above, where the default factor is 1.
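The resolution order proposed above can be sketched as follows. This is an illustrative outline only, not Flink's actual implementation; the class, method names, and the default value are assumptions made for the sketch (a `null` argument stands for "not specified").

```java
// Sketch of the proposed CPU request/limit resolution order.
// Assumptions: names are hypothetical; DEFAULT_CPU stands in for
// whatever default the configuration option falls back to.
public class CpuResolutionSketch {
    static final double DEFAULT_CPU = 1.0;

    static double resolveRequest(Double templateRequest, Double configuredCpu) {
        if (templateRequest != null) return templateRequest; // 1. pod template wins
        if (configuredCpu != null) return configuredCpu;     // 2. kubernetes.taskmanager.cpu
        return DEFAULT_CPU;                                  // 3. fall back to default
    }

    static double resolveLimit(Double templateLimit, double request, double limitFactor) {
        if (templateLimit != null) return templateLimit;     // 1. pod template wins
        return request * limitFactor;                        // 2. factor (default 1)
    }

    public static void main(String[] args) {
        double request = resolveRequest(null, 2.0); // template empty, config set
        double limit = resolveLimit(null, request, 1.0);
        System.out.println(request + " / " + limit);
    }
}
```

With the default factor of 1, a pod whose template specifies nothing ends up with request == limit, which keeps it eligible for the Guaranteed QoS class.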



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
