Hi Lu, 

I found this talk from the last Flink Forward in Berlin very helpful for 
understanding JVM RAM and cutoff memory [1]. Maybe it helps you understand that 
stuff better. 
In my experience on YARN, the author was right. I was able to reproduce it: 
assigning about 12 GB of taskmanager memory, setting the cutoff to 0.15 and 
leaving the network fraction at 0.1, I ended up with roughly 8-9 GB of RAM 
for the actual taskmanager JVM heap. 
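For what it's worth, that arithmetic can be sketched as follows. This is a simplified model of the legacy containerized calculation (network fraction taken from the memory remaining after the cutoff); the function name is illustrative, and the real ContaineredTaskManagerParameters code additionally applies a minimum cutoff and rounding that are omitted here:

```python
# Simplified sketch of the containerized memory split (assumption:
# network memory is a fraction of what remains after the cutoff,
# as in Flink 1.9's legacy mode; minimum-cutoff and rounding omitted).
def split_container_memory(container_mb, cutoff_ratio, network_fraction):
    cutoff_mb = container_mb * cutoff_ratio            # reserved off-heap headroom
    network_mb = (container_mb - cutoff_mb) * network_fraction
    heap_mb = container_mb - cutoff_mb - network_mb    # roughly what -Xmx gets
    return cutoff_mb, network_mb, heap_mb

cutoff, network, heap = split_container_memory(12 * 1024, 0.15, 0.1)
print(round(heap))  # ~9400 MB, i.e. roughly 9 GB out of the 12 GB container
```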

Best regards 
Theo 

[1] https://www.youtube.com/watch?v=aq1Whga-RJ4 


From: "Lu Niu" <qqib...@gmail.com> 
To: "user" <user@flink.apache.org> 
Sent: Tuesday, 10 December 2019 22:58:01 
Subject: Help to Understand cutoff memory 

Hi, flink users 

I have a question regarding memory allocation. According to the docs, 
containerized.heap-cutoff-ratio means: 

``` 
Percentage of heap space to remove from containers (YARN / Mesos), to 
compensate for other JVM memory usage 
``` 
However, I find that the cutoff memory is actually treated as part of the 
direct memory: 
https://github.com/apache/flink/blob/release-1.9.1/flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/ContaineredTaskManagerParametersTest.java#L67

The code above shows that the MaxDirectMemorySize of the JVM process is the 
sum of the network buffer memory and the cutoff. There is then no guarantee of 
a fixed headroom in the container memory. In our case we use RocksDB, and we 
found that DirectMemory is often close to its maximum, leaving little headroom 
in the container for RocksDB. Could someone help with this? Thanks 
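To make the concern concrete, here is a small sketch using the same simplified model of the containerized split (illustrative numbers and names, not Flink API calls): if the JVM's direct memory limit is cutoff + network buffers, then direct allocations can consume the entire headroom the cutoff was meant to reserve for native memory such as RocksDB.

```python
# Illustrative worst case: heap fully used and direct memory at its
# configured maximum of (cutoff + network buffers). Under that model,
# nothing inside the container limit is guaranteed for RocksDB's
# native allocations.
container_mb = 12 * 1024
cutoff_mb = container_mb * 0.15                   # ~1843 MB
network_mb = (container_mb - cutoff_mb) * 0.1     # ~1044 MB
heap_mb = container_mb - cutoff_mb - network_mb   # ~9400 MB

max_direct_mb = cutoff_mb + network_mb            # -XX:MaxDirectMemorySize under this model

native_headroom_mb = container_mb - heap_mb - max_direct_mb
print(native_headroom_mb)  # ~0 -> no guaranteed headroom for RocksDB
```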

Best 
Lu 
