[ https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958725#comment-16958725 ]
shengjk1 commented on FLINK-7289:
---------------------------------
[~yunta] I don't know how to get the RocksDB LOG file.
This is my code:
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.contrib.streaming.state.ConfigurableOptionsFactory;
import org.apache.flink.contrib.streaming.state.OptionsFactory;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;
import org.rocksdb.InfoLogLevel;
import org.rocksdb.LRUCache;
import org.rocksdb.Logger;
import org.rocksdb.WriteBufferManager;
import org.slf4j.LoggerFactory;

class BackendOptions implements ConfigurableOptionsFactory {

    private static final org.slf4j.Logger logger =
            LoggerFactory.getLogger(BackendOptions.class);

    // Cap total write buffer memory at 1 GiB (1 << 30), charged against a 256 KiB LRU cache.
    private static final WriteBufferManager writeBufferManager =
            new WriteBufferManager(1 << 30, new LRUCache(1 << 18));

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        return currentOptions
                .setInfoLogLevel(InfoLogLevel.INFO_LEVEL)
                // Forward RocksDB's native info log lines to stdout and to SLF4J.
                .setLogger(new Logger(currentOptions) {
                    @Override
                    protected void log(InfoLogLevel infoLogLevel, String s) {
                        System.out.println("rocksdb==============" + s);
                        logger.debug("rocksdb =========={}", s);
                    }
                })
                .setMaxBackgroundJobs(4)
                .setUseFsync(false)
                .setMaxBackgroundFlushes(3)
                .setWriteBufferManager(writeBufferManager);
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
        return currentOptions
                .setMinWriteBufferNumberToMerge(2)
                .setMaxWriteBufferNumber(5)
                .setOptimizeFiltersForHits(true)
                .setMaxWriteBufferNumberToMaintain(3)
                .setTableFormatConfig(
                        new BlockBasedTableConfig()
                                .setCacheIndexAndFilterBlocks(true)
                                .setCacheIndexAndFilterBlocksWithHighPriority(true)
                                .setBlockCacheSize(256 * 1024 * 1024)
                                // increases read amplification but decreases memory usage and space amplification
                                .setBlockSize(4 * 32 * 1024));
    }

    @Override
    public OptionsFactory configure(Configuration configuration) {
        return this;
    }
}
{code}
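If the goal is just to get hold of RocksDB's native LOG file (rather than forwarding every line through a custom {{org.rocksdb.Logger}} as above), one alternative might be to point the info log at a fixed directory via {{DBOptions#setDbLogDir}}; by default the LOG file ends up in each RocksDB instance's database directory. A minimal sketch (the directory path and rotation settings are placeholders, not from my job):
{code:java}
// Sketch only: write RocksDB's LOG / LOG.old.* files to a known directory
// instead of intercepting them with a custom org.rocksdb.Logger.
@Override
public DBOptions createDBOptions(DBOptions currentOptions) {
    return currentOptions
            .setInfoLogLevel(InfoLogLevel.INFO_LEVEL)
            // Placeholder path; pick a directory that exists on every TaskManager.
            .setDbLogDir("/tmp/flink-rocksdb-logs")
            .setKeepLogFileNum(5)
            .setMaxLogFileSize(64 * 1024 * 1024)
            .setMaxBackgroundJobs(4)
            .setUseFsync(false)
            .setWriteBufferManager(writeBufferManager);
}
{code}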
This is my CLI:
{code}
-p 3 -ytm 2048M -ytm early -m yarn-cluster -yD "state.backend.rocksdb.ttl.compaction.filter.enabled=true"
{code}
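For completeness, this factory still has to be registered with the RocksDB state backend; a minimal sketch, assuming Flink 1.9's {{RocksDBStateBackend#setOptions}} (the class name and checkpoint URI below are placeholders). The {{state.backend.rocksdb.options-factory}} configuration option should also accept the fully qualified class name of the factory.
{code:java}
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class Job {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder checkpoint URI; incremental checkpoints enabled.
        RocksDBStateBackend backend =
                new RocksDBStateBackend("hdfs:///flink/checkpoints", true);

        // Hand the ConfigurableOptionsFactory from the code above to the backend.
        backend.setOptions(new BackendOptions());
        env.setStateBackend(backend);

        // ... build the job graph, then env.execute(...)
    }
}
{code}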
> Memory allocation of RocksDB can be problematic in container environments
> -------------------------------------------------------------------------
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
> Issue Type: Improvement
> Components: Runtime / State Backends
> Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
> Reporter: Stefan Richter
> Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB-based state backend allocates native memory. The amount of
> memory allocated by RocksDB is not under the control of Flink or the JVM and
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can
> exceed the memory budget of the container, and the process will get killed.
> Currently, there is no other option than trusting RocksDB to be well behaved
> and to follow its memory configurations. However, limiting RocksDB's memory
> usage is not as easy as setting a single limit parameter. The memory limit is
> determined by an interplay of several configuration parameters, which is
> almost impossible to get right for users. Even worse, multiple RocksDB
> instances can run inside the same process, which also makes reasoning about
> the configuration dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.
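To make the "interplay of several configuration parameters" concrete: a minimal sketch of one way to bound RocksDB's native memory by sharing a single block cache and write buffer manager across all column families. The sizes are illustrative assumptions, and it assumes a RocksJava version that exposes {{BlockBasedTableConfig#setBlockCache}} and {{WriteBufferManager}}.
{code:java}
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;
import org.rocksdb.LRUCache;
import org.rocksdb.WriteBufferManager;

public class BoundedMemoryOptions {
    // One shared LRU cache and write buffer manager make block cache,
    // index/filter blocks, and write buffers all draw from a single budget.
    private static final Cache SHARED_CACHE = new LRUCache(256 * 1024 * 1024);
    private static final WriteBufferManager WRITE_BUFFER_MANAGER =
            new WriteBufferManager(128 * 1024 * 1024, SHARED_CACHE);

    public static DBOptions boundDbOptions(DBOptions options) {
        // Write buffer memory is charged against the shared cache.
        return options.setWriteBufferManager(WRITE_BUFFER_MANAGER);
    }

    public static ColumnFamilyOptions boundColumnOptions(ColumnFamilyOptions options) {
        return options.setTableFormatConfig(
                new BlockBasedTableConfig()
                        // Reuse the shared cache for data blocks...
                        .setBlockCache(SHARED_CACHE)
                        // ...and make index/filter blocks count against it too.
                        .setCacheIndexAndFilterBlocks(true)
                        .setCacheIndexAndFilterBlocksWithHighPriority(true)
                        .setPinL0FilterAndIndexBlocksInCache(true));
    }
}
{code}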