[
https://issues.apache.org/jira/browse/HDDS-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17136014#comment-17136014
]
Jitendra Nath Pandey commented on HDDS-3630:
--------------------------------------------
bq. The capacity of a container is 5GB. If the container is full, the off-heap memory
of RocksDB is about 15MB.
The off-heap memory requirement will depend on the number of entries in the
rocksdb, i.e. the number of blocks. Suppose we have 10MB blocks: a full 5GB
container will have only about 500 entries in rocksdb. How much off-heap memory
is needed for 500 entries? Of course it will vary as the average block size
goes up or down.
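For concreteness, a minimal sketch of that arithmetic. The 5GB capacity and the
10MB block size are from the comments above; the other block sizes are purely
illustrative, and with binary units 5GB / 10MB comes out to 512 rather than a
round 500:

{code:java}
public class ContainerEntryCount {
  // Container capacity from the issue description: 5 GB.
  private static final long CONTAINER_CAPACITY_BYTES = 5L * 1024 * 1024 * 1024;

  public static void main(String[] args) {
    // Entries in the per-container rocksdb = capacity / average block size,
    // so the off-heap footprint scales with how small the blocks are.
    long[] avgBlockSizesMb = {1, 10, 64, 256};
    for (long mb : avgBlockSizesMb) {
      long entries = CONTAINER_CAPACITY_BYTES / (mb * 1024 * 1024);
      System.out.printf("avg block %3d MB -> %5d entries per full container%n",
          mb, entries);
    }
  }
}
{code}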
> Merge rocksdb in datanode
> -------------------------
>
> Key: HDDS-3630
> URL: https://issues.apache.org/jira/browse/HDDS-3630
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Reporter: runzhiwang
> Assignee: runzhiwang
> Priority: Major
> Attachments: Merge RocksDB in Datanode-v1.pdf, Merge RocksDB in
> Datanode-v2.pdf
>
>
> Currently there is one rocksdb per container, and one container has 5GB
> capacity. 10TB of data therefore needs more than 2000 rocksdb instances on
> one datanode. It is difficult to bound the memory of 2000 rocksdb instances,
> so perhaps we should keep a limited number of rocksdb instances per disk (a
> rough sketch of this idea follows the description below).
> The design of the improvement is in the following link, but it is still a draft.
> TODO:
> 1. Compatibility with the current logic, i.e. one rocksdb per container.
> 2. Measure the memory usage before and after the improvement.
> 3. Effect on read and write efficiency.
> https://docs.google.com/document/d/18Ybg-NjyU602c-MYXaJHP6yrg-dVMZKGyoK5C_pp1mM/edit#
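As a rough illustration of the "limited rocksdb instances per disk" idea, here
is a hedged sketch of a shared per-volume store. The class name and key layout
are assumptions made for illustration only, not the actual design from the
linked document:

{code:java}
import java.nio.ByteBuffer;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Sketch: one shared RocksDB per disk volume instead of one per container.
// Keys are prefixed with the container id so each container's block metadata
// stays contiguous in key space (allowing per-container range scans/deletes).
public class PerDiskBlockStore implements AutoCloseable {
  private final RocksDB db;

  public PerDiskBlockStore(String volumePath) throws RocksDBException {
    RocksDB.loadLibrary();
    Options opts = new Options().setCreateIfMissing(true);
    this.db = RocksDB.open(opts, volumePath + "/container.db");
  }

  // Fixed-width big-endian encoding so lexicographic key order matches
  // numeric (containerId, localBlockId) order.
  private static byte[] key(long containerId, long localBlockId) {
    return ByteBuffer.allocate(16).putLong(containerId).putLong(localBlockId)
        .array();
  }

  public void putBlock(long containerId, long localBlockId, byte[] value)
      throws RocksDBException {
    db.put(key(containerId, localBlockId), value);
  }

  public byte[] getBlock(long containerId, long localBlockId)
      throws RocksDBException {
    return db.get(key(containerId, localBlockId));
  }

  @Override
  public void close() {
    db.close();
  }
}
{code}

With this shape, a datanode with N disks opens N rocksdb instances regardless
of the container count, which is what makes the memory bound tractable; the
TODO items above (compatibility, memory measurement, read/write efficiency)
would determine whether the approach is viable.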