To unsubscribe from user@flink.apache.org, send an email (any content) to user-unsubscr...@flink.apache.org; see [1][2] for more details on managing your mailing list subscriptions.

Best,
Leonard
[1] https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
[2] https://flink.apache.org/community.html#mailing-lists

> On Jun 15, 2023, at 10:40 AM, yanglele via user <user@flink.apache.org> wrote:
> 
> Unsubscribe
> 
> ----- Original Message -----
> From: Robin Cassan via user <user@flink.apache.org>
> Sent: 2023-06-14 23:13:09
> To: Gyula Fóra <gyula.f...@gmail.com>
> Cc: user <user@flink.apache.org>
> Subject: Re: Kubernetes operator: config for taskmanager.memory.process.size ignored
> 
> Thanks again, maybe the jvm overhead param will act as the margin I want, 
> I'll try that :)
> Robin
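> 
> For reference, these are the JVM-overhead options involved, with their documented defaults (an illustrative sketch; any non-default value here is hypothetical). Note that at large process sizes the default max cap, not the fraction, ends up determining the margin:
> 
> ```
> taskmanager.memory.jvm-overhead.fraction: 0.1  # default
> taskmanager.memory.jvm-overhead.min: 192mb     # default
> taskmanager.memory.jvm-overhead.max: 1gb       # default; raise this cap, or increasing the fraction has no further effect
> ```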
> 
> 
> On Wed, Jun 14, 2023 at 3:28 PM, Gyula Fóra <gyula.f...@gmail.com> wrote:
> Again, this has absolutely nothing to do with the Kubernetes Operator, but 
> simply how Flink Kubernetes Memory configs work:
> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/memory/mem_tuning/#configure-memory-for-containers
> 
>  
> You can probably play around with: taskmanager.memory.jvm-overhead.fraction
> 
>  
> You can set a larger memory size in the TM spec and increase the jvm overhead 
> fraction.
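> 
> A minimal FlinkDeployment sketch of this approach (illustrative values; the field names follow the operator CRD, and the fraction value is hypothetical):
> 
> ```
> spec:
>   flinkConfiguration:
>     taskmanager.memory.jvm-overhead.fraction: "0.2"
>   taskManager:
>     resource:
>       cpu: 6
>       memory: "59Gb"
> ```
> 
> Here the pod (and process) memory stays at 59Gb, and the enlarged JVM overhead serves as the unused margin inside it.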
> 
>  
> Gyula
> 
>  
> On Wed, Jun 14, 2023 at 2:46 PM Robin Cassan <robin.cas...@contentsquare.com> wrote:
>  
> Thanks Gyula for your answer! I'm wondering about your claim:
>      > In Flink kubernetes the process is the pod so pod memory is always 
> equal to process memory
> Why should the flink TM process use the whole container (and so, the whole 
> pod) memory?
> 
>    
> Before migrating to the k8s operator, we ran Flink on Kubernetes (without the operator) and left a bit of margin between the process memory and the pod memory, which helped stability. It looks like this cannot be done with the k8s operator though, and I wonder why this granularity was removed from the settings.
> 
>    
> Robin
> 
>    
> On Wed, Jun 14, 2023 at 12:20 PM, Gyula Fóra <gyula.f...@gmail.com> wrote:
>    
> Basically what happens is that whatever you set in spec.taskManager.resource.memory will be set in the config as process memory.
> In Flink on Kubernetes the process is the pod, so pod memory is always equal to process memory.
> 
>        
> So basically the spec is a config shorthand; there is no reason to override it, as you won't get different behaviour at the end of the day.
> 
>        
> Gyula
> 
>        
> On Wed, Jun 14, 2023 at 11:55 AM Robin Cassan via user <user@flink.apache.org> wrote:
>        
> Hello all!
> 
>          
> I am using the Flink Kubernetes operator and I would like to set the value for `taskmanager.memory.process.size`. I set the desired value in the FlinkDeployment resource spec (here, 55gb); however, it looks like the value effectively passed to the taskmanager is the same as the pod memory setting (which is set to 59gb).
> 
>          
> For example, this flinkdeployment configuration:
>          
> ```
> Spec:
>   Flink Configuration:
>     taskmanager.memory.process.size:  55gb
>   Task Manager:
>     Resource:
>       Cpu:     6
>       Memory:  59Gb
> ```
> will create a pod with 59Gb total memory (as expected) but will also give 
> 59Gb to the memory.process.size instead of 55Gb, as seen in this TM log: 
> `Loading configuration property: taskmanager.memory.process.size, 59Gb`
>          
> 
> Maybe this part of the flink k8s operator code is responsible:
> https://github.com/apache/flink-kubernetes-operator/blob/d43e1ca9050e83b492b2e16b0220afdba4ffa646/flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/FlinkConfigBuilder.java#L393
>           
> 
>          
> If so, what is the rationale for forcing the Flink process memory to be the same as the pod memory? Is there a way to bypass that, for example by setting the desired process memory configuration differently?
> 
>          
> Thanks!
> 
> 
> 