[ https://issues.apache.org/jira/browse/HDDS-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

runzhiwang updated HDDS-3630:
-----------------------------
    Description: Currently there is one RocksDB instance per container, and each 
container has a capacity of 5 GB, so 10 TB of data needs more than 2000 RocksDB 
instances on a single datanode. It is difficult to limit the memory usage of 
2000 RocksDB instances, so maybe we should use only one RocksDB for all 
containers.  (was: Currently there is one RocksDB instance per container, and 
each container has a capacity of 5 GB, so 1 TB of data needs more than 200 
RocksDB instances on a single datanode. It is difficult to limit the memory 
usage of 200 RocksDB instances, so maybe we should use only one RocksDB for all 
containers.)
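
A minimal sketch of what one shared RocksDB per datanode could look like, assuming the RocksJava API with one column family per container and a single shared block cache to bound memory across all containers. The class name, DB path, container names, and key/value contents below are hypothetical illustrations, not Ozone code.

{code:java}
import org.rocksdb.*;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: one RocksDB instance per datanode, one column family
// per container, all column families sharing a single bounded block cache.
public class SharedContainerDB {
  static { RocksDB.loadLibrary(); }

  public static void main(String[] args) throws RocksDBException {
    // One cache shared by every container's column family, so total
    // block-cache memory stays bounded regardless of container count.
    Cache sharedCache = new LRUCache(256L * 1024 * 1024);
    ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
        .setTableFormatConfig(new BlockBasedTableConfig().setBlockCache(sharedCache));

    // Existing column families must be listed when opening the DB;
    // here we assume a fresh DB with only the default column family.
    List<ColumnFamilyDescriptor> descriptors = Arrays.asList(
        new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY, cfOpts));
    List<ColumnFamilyHandle> handles = new ArrayList<>();

    DBOptions dbOpts = new DBOptions()
        .setCreateIfMissing(true)
        .setCreateMissingColumnFamilies(true);

    try (RocksDB db = RocksDB.open(dbOpts, "/tmp/datanode-container-db",
                                   descriptors, handles)) {
      // A new container gets its own column family instead of its own DB.
      ColumnFamilyHandle container1 = db.createColumnFamily(
          new ColumnFamilyDescriptor("container-1".getBytes(), cfOpts));

      db.put(container1, "blockId-1".getBytes(), "chunkInfo".getBytes());
      byte[] value = db.get(container1, "blockId-1".getBytes());
      System.out.println(new String(value));

      // Deleting a container becomes dropping its column family.
      db.dropColumnFamily(container1);
      container1.close();
    }
  }
}
{code}

With this layout the datanode tunes a single set of memtable and block-cache limits instead of trying to cap memory for thousands of independent DB instances.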

> Merge rocksdb into one in datanode
> ----------------------------------
>
>                 Key: HDDS-3630
>                 URL: https://issues.apache.org/jira/browse/HDDS-3630
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: runzhiwang
>            Assignee: runzhiwang
>            Priority: Major
>
> Currently there is one RocksDB instance per container, and each container has 
> a capacity of 5 GB, so 10 TB of data needs more than 2000 RocksDB instances on 
> a single datanode. It is difficult to limit the memory usage of 2000 RocksDB 
> instances, so maybe we should use only one RocksDB for all containers.



