[
https://issues.apache.org/jira/browse/HDDS-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953232#comment-16953232
]
Siddharth Wagle edited comment on HDDS-2283 at 10/16/19 10:22 PM:
------------------------------------------------------------------
[~aengineer] Yes, the follow-up Jira will not be taken up blindly; it first needs to be
determined whether 10s of RocksDB instances sharing a disk or a single RocksDB with 10
tables is the better or worse option.
I took this up as low-hanging fruit, and I agree with the comment about not focusing on
micro-benchmarks. This was just a curiosity-driven, exploratory effort on my part that
took all of 20 minutes including the fix, so I went ahead with the patch.
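For context on the trade-off mentioned above, here is a rough, self-contained sketch (not part of the
patch or the Ozone code base) of how the two alternatives could be compared with the RocksDB Java API:
opening one RocksDB per container versus creating one column family per container inside a shared
per-disk DB. The class name, paths, and container count are invented for illustration only.
{code:java}
import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class DbPerContainerVsColumnFamily {

  public static void main(String[] args) throws RocksDBException, IOException {
    RocksDB.loadLibrary();
    final int containers = 10;

    try (Options options = new Options().setCreateIfMissing(true)) {

      // Alternative A: a fresh RocksDB instance for every container
      // (roughly what the datanode does today, ~300ms per creation).
      long start = System.nanoTime();
      for (int i = 0; i < containers; i++) {
        String path = Files.createTempDirectory("per-container-db-" + i).toString();
        try (RocksDB db = RocksDB.open(options, path)) {
          // Each open() pays the full cost of creating a new RocksDB.
        }
      }
      System.out.printf("one DB per container : %d ms%n",
          (System.nanoTime() - start) / 1_000_000);

      // Alternative B: one shared RocksDB per disk, one column family per container.
      start = System.nanoTime();
      String sharedPath = Files.createTempDirectory("per-disk-db").toString();
      try (RocksDB shared = RocksDB.open(options, sharedPath)) {
        for (int i = 0; i < containers; i++) {
          ColumnFamilyHandle cf = shared.createColumnFamily(
              new ColumnFamilyDescriptor(
                  ("container-" + i).getBytes(StandardCharsets.UTF_8)));
          cf.close();
        }
      }
      System.out.printf("one CF per container : %d ms%n",
          (System.nanoTime() - start) / 1_000_000);
    }
  }
}
{code}
Note that this only measures DB and column-family creation cost on a single path; it says nothing
about how tens of RocksDB instances behave under concurrent load on the same disk, which is what
the follow-up Jira would actually need to evaluate.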
> Container creation on datanodes takes around 300ms due to RocksDB creation
> -------------------------------------------------------------------------
>
> Key: HDDS-2283
> URL: https://issues.apache.org/jira/browse/HDDS-2283
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: Ozone Datanode
> Reporter: Mukul Kumar Singh
> Assignee: Siddharth Wagle
> Priority: Major
> Labels: pull-request-available
> Attachments: HDDS-2283.00.patch
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Container creation on datanodes takes around 300ms due to RocksDB creation.
> RocksDB creation takes a considerable amount of time, and this needs to be optimized.
> Creating one RocksDB per disk should be enough, and each container can be a table
> inside that RocksDB (a sketch of this layout follows below).
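To make the proposed layout concrete, below is a minimal sketch, not taken from the patch or the
Ozone code base, which assumes that "table" here means a RocksDB column family; the DB path, column
family names, and keys are invented for illustration.
{code:java}
import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PerDiskContainerStoreSketch {

  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();

    // One shared RocksDB for the whole disk; every container is a column family.
    List<ColumnFamilyDescriptor> descriptors = Arrays.asList(
        new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY, new ColumnFamilyOptions()),
        new ColumnFamilyDescriptor("container-1".getBytes(StandardCharsets.UTF_8),
            new ColumnFamilyOptions()),
        new ColumnFamilyDescriptor("container-2".getBytes(StandardCharsets.UTF_8),
            new ColumnFamilyOptions()));

    List<ColumnFamilyHandle> handles = new ArrayList<>();
    DBOptions dbOptions = new DBOptions()
        .setCreateIfMissing(true)
        .setCreateMissingColumnFamilies(true);
    RocksDB db = RocksDB.open(dbOptions, "/data/disk1/container-meta", descriptors, handles);
    try {
      // Reads and writes for a container go to its column family, so creating
      // a new container only needs createColumnFamily(), not a whole new RocksDB.
      ColumnFamilyHandle container1 = handles.get(1);
      db.put(container1, "blockKey".getBytes(StandardCharsets.UTF_8),
          "blockData".getBytes(StandardCharsets.UTF_8));
      byte[] value = db.get(container1, "blockKey".getBytes(StandardCharsets.UTF_8));
      System.out.println(new String(value, StandardCharsets.UTF_8));
    } finally {
      // Close column family handles before the DB itself.
      for (ColumnFamilyHandle handle : handles) {
        handle.close();
      }
      db.close();
      dbOptions.close();
    }
  }
}
{code}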