Hi, Flink users,

I have a question regarding memory allocation. According to the docs,
`containerized.heap-cutoff-ratio` means:

```
Percentage of heap space to remove from containers (YARN / Mesos), to
compensate for other JVM memory usage
```
However, I found that the cutoff memory is actually treated as part of
direct memory:
https://github.com/apache/flink/blob/release-1.9.1/flink-runtime/src/test/java/org/apache/flink/runtime/clusterframework/ContaineredTaskManagerParametersTest.java#L67

The test above shows that the JVM process's MaxDirectMemorySize is set to
the sum of the network buffer memory and the cutoff. That means there is
*no guarantee of a fixed headroom* in the container. In our case we use
RocksDB, and we have repeatedly observed direct memory usage close to the
maximum, leaving little headroom in the container for RocksDB's native
allocations. Could someone help with this? Thanks.

Best
Lu
