Hi Roshan

From our experience, RocksDB memory allocation actually cannot be controlled 
well from Flink's side.
The option containerized.heap-cutoff-ratio is mainly used to calculate the JVM 
heap size, and the remaining part is treated as off-heap. Ideally, RocksDB's 
memory would live entirely in that off-heap portion. However, Flink just starts 
RocksDB and leaves memory allocation to RocksDB itself. If YARN is configured 
to check total memory usage, and the total usage exceeds the limit because 
RocksDB's memory has grown, the container will be killed.
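The cutoff arithmetic can be sketched roughly as below. This is a simplified illustration, not Flink's actual implementation; the 600 MB minimum mirrors the default of containerized.heap-cutoff-min, and Flink's real logic has more moving parts.

```java
// Rough sketch of the legacy containerized heap cutoff computation.
public class HeapCutoffSketch {

    // Flink takes the larger of the ratio-based cutoff and the configured minimum.
    static long calculateCutoffMb(long containerMb, double cutoffRatio, long cutoffMinMb) {
        return Math.max(cutoffMinMb, (long) (containerMb * cutoffRatio));
    }

    public static void main(String[] args) {
        long containerMb = 10 * 1024;   // 10 GB container, as in the question below
        double ratio = 0.3;             // containerized.heap-cutoff-ratio
        long cutoffMinMb = 600;         // containerized.heap-cutoff-min (default)

        long cutoffMb = calculateCutoffMb(containerMb, ratio, cutoffMinMb);
        long heapMb = containerMb - cutoffMb;
        System.out.println("cutoff=" + cutoffMb + "MB heap=" + heapMb + "MB");
        // prints: cutoff=3072MB heap=7168MB
    }
}
```

So with a 10 GB container and ratio 0.3, roughly 3 GB is cut off and 7 GB goes to the JVM heap; RocksDB's native allocations are not accounted for anywhere in this calculation.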

To control RocksDB's memory, I recommend configuring an acceptable write 
buffer and block cache size, and setting 'cacheIndexAndFilterBlocks', 
'optimizeFiltersForHits' and 'pinL0FilterAndIndexBlocksInCache' to true (the 
first is for memory control and the latter two are for performance when index 
and filter blocks are cached; refer to [1] for more information). Last but not 
least, avoid using many states within one operator: each state becomes a 
separate RocksDB column family, and each column family consumes its own write 
buffer(s).
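In Flink 1.7/1.8 these options can be applied through a custom OptionsFactory passed to the RocksDB state backend. The sketch below is illustrative only: the class name and all sizes are assumptions to tune for your own workload, not recommended values.

```java
import org.apache.flink.contrib.streaming.state.OptionsFactory;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

// Hypothetical factory bounding RocksDB memory per column family.
public class BoundedMemoryOptionsFactory implements OptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        return currentOptions;  // no DB-level changes in this sketch
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
            .setBlockCacheSize(64 * 1024 * 1024)        // example cap on the block cache
            .setCacheIndexAndFilterBlocks(true)          // count index/filter blocks against the cache
            .setPinL0FilterAndIndexBlocksInCache(true);  // keep L0 index/filter blocks pinned
        return currentOptions
            .setWriteBufferSize(32 * 1024 * 1024)        // example cap per memtable
            .setMaxWriteBufferNumber(2)                  // limit memtables per column family
            .setOptimizeFiltersForHits(true)             // skip bloom filters on the bottommost level
            .setTableFormatConfig(tableConfig);
    }
}
```

It would then be registered on the backend, e.g. rocksDbBackend.setOptions(new BoundedMemoryOptionsFactory()). Remember that the write buffer and block cache limits apply per column family, so multiply by the number of states per operator when budgeting.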

[1] 
https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB#indexes-and-filter-blocks



Best
Yun Tang

________________________________
From: Roshan Naik <roshan_n...@yahoo.com.INVALID>
Sent: Saturday, February 23, 2019 10:09
To: dev@flink.apache.org
Subject: Where does RocksDB's mem allocation occur

For YARN deployments, let's say the container size = 10 GB and
 containerized.heap-cutoff-ratio = 0.3 (= 3 GB).
That means 7 GB is available for Flink's various subsystems, which include the 
JVM heap, all the DirectByteBuffer allocations (Netty + network buffers + ...), 
and JVM metadata.
I am wondering whether RocksDB memory allocations (which are C++ native 
allocations) are drawn from the 3 GB "cutoff" space, or whether they come from 
whatever is left of the remaining 7 GB (i.e. left after reserving for the 
above-mentioned pieces).
-roshan
