[jira] [Updated] (HDDS-3630) Merge rocksdb into one in datanode

2020-05-22 Thread runzhiwang (Jira)


 [ https://issues.apache.org/jira/browse/HDDS-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

runzhiwang updated HDDS-3630:
-
Description: Currently, there is one RocksDB instance per container. Each 
container has a capacity of 5 GB, so 10 TB of data needs more than 2,000 
RocksDB instances on one datanode. It is difficult to limit the memory usage 
of 2,000 RocksDB instances, so maybe we should use one RocksDB instance per 
disk.  (was: Currently, there is one RocksDB instance per container. Each 
container has a capacity of 5 GB, so 10 TB of data needs more than 2,000 
RocksDB instances on one datanode. It is difficult to limit the memory usage 
of 2,000 RocksDB instances, so maybe we should only use one RocksDB instance 
for all containers.)

> Merge rocksdb into one in datanode
> --
>
> Key: HDDS-3630
> URL: https://issues.apache.org/jira/browse/HDDS-3630
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>
> Currently, there is one RocksDB instance per container. Each container has a 
> capacity of 5 GB, so 10 TB of data needs more than 2,000 RocksDB instances on 
> one datanode. It is difficult to limit the memory usage of 2,000 RocksDB 
> instances, so maybe we should use one RocksDB instance per disk.
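
Below is a minimal sketch (not the actual Ozone datanode code) of the per-disk idea using the RocksJava API: every DB opened against the same shared LRUCache and WriteBufferManager draws from one bounded memory budget, so the number of open instances and the memory ceiling are set by the number of disks rather than by the amount of data. The paths and sizes are hypothetical.

import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBufferManager;

public class PerDiskRocksDbSketch {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();

    // One shared block cache and write-buffer budget for every DB on the datanode.
    Cache sharedCache = new LRUCache(1L << 30);                  // 1 GB block cache (hypothetical size)
    WriteBufferManager wbm =
        new WriteBufferManager(256L << 20, sharedCache);         // 256 MB memtable budget (hypothetical size)

    // One DB per data volume instead of one per container.
    String[] diskDbPaths = {"/data1/container.db", "/data2/container.db"};  // hypothetical paths
    for (String path : diskDbPaths) {
      try (Options opts = new Options()
               .setCreateIfMissing(true)
               .setWriteBufferManager(wbm)
               .setTableFormatConfig(new BlockBasedTableConfig().setBlockCache(sharedCache));
           RocksDB db = RocksDB.open(opts, path)) {
        // ... key/value metadata for all containers stored on this disk ...
      }
    }
  }
}

Merging further into a single DB per datanode (the earlier proposal in the update below) is the same sketch with one path.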






[jira] [Updated] (HDDS-3630) Merge rocksdb into one in datanode

2020-05-20 Thread runzhiwang (Jira)


 [ https://issues.apache.org/jira/browse/HDDS-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

runzhiwang updated HDDS-3630:
-
Description: Currently, there is one RocksDB instance per container. Each 
container has a capacity of 5 GB, so 10 TB of data needs more than 2,000 
RocksDB instances on one datanode. It is difficult to limit the memory usage 
of 2,000 RocksDB instances, so maybe we should only use one RocksDB instance 
for all containers.  (was: Currently, there is one RocksDB instance per 
container. Each container has a capacity of 5 GB, so 1 TB of data needs more 
than 200 RocksDB instances on one datanode. It is difficult to limit the 
memory usage of 200 RocksDB instances, so maybe we should only use one RocksDB 
instance for all containers.)

> Merge rocksdb into one in datanode
> --
>
> Key: HDDS-3630
> URL: https://issues.apache.org/jira/browse/HDDS-3630
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>
> Currently, there is one RocksDB instance per container. Each container has a 
> capacity of 5 GB, so 10 TB of data needs more than 2,000 RocksDB instances on 
> one datanode. It is difficult to limit the memory usage of 2,000 RocksDB 
> instances, so maybe we should only use one RocksDB instance for all containers.
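
For reference, the arithmetic behind the figures above, taking the stated 5 GB container size: 10 TB / 5 GB = 2,048 containers, i.e. more than 2,000 RocksDB instances open on one datanode, each with its own memtables, block cache and background work unless that memory is explicitly shared.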


