[
https://issues.apache.org/jira/browse/HDDS-7182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17623762#comment-17623762
]
Attila Doroszlai commented on HDDS-7182:
----------------------------------------
Reopened for cherry-picking to 1.3.0.
> Add property to control RocksDB max open files
> ----------------------------------------------
>
> Key: HDDS-7182
> URL: https://issues.apache.org/jira/browse/HDDS-7182
> Project: Apache Ozone
> Issue Type: Sub-task
> Reporter: Sammi Chen
> Assignee: Sammi Chen
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.4.0
>
>
> *max_open_files*, as described in the RocksDB Tuning Guide
> ([https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide]):
> RocksDB keeps all file descriptors in a table cache. If number of file
> descriptors exceeds max_open_files, some files are evicted from table cache
> and their file descriptors closed. This means that every read must go through
> the table cache to lookup the file needed. Set max_open_files to -1 to always
> keep all files open, which avoids expensive table cache calls.
> The default value is -1, which means no limit. But at the OS level, the
> number of open file descriptors per process is limited. If there are too
> many SST files in RocksDB, failures like "Too many open files" can occur
> in some extreme cases.
> Change the default value from -1 to 1024.
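> To illustrate, a configuration sketch for such a property (the property
> name hadoop.hdds.db.rocksdb.max.open.files below is an assumption for
> illustration only, not necessarily the exact name this patch adds):
>
>   <property>
>     <name>hadoop.hdds.db.rocksdb.max.open.files</name>
>     <value>1024</value>
>     <description>Maximum number of files each RocksDB instance keeps
>       open; -1 means no limit.</description>
>   </property>
>
> Internally such a value would be applied through the RocksDB Java API,
> e.g. options.setMaxOpenFiles(1024) on org.rocksdb.DBOptions.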
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]