[
https://issues.apache.org/jira/browse/HDDS-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16954367#comment-16954367
]
Rajesh Balamohan commented on HDDS-2324:
----------------------------------------
Attached lock profiler output for the reader benchmark, and for the reader
benchmark combined with a couple of writer threads. In the former, locking is
not even captured in the output as there is no contention. In the latter case,
significant contention can be seen on the read/write locks.
> Enhance locking mechanism in OzoneManager
> -----------------------------------------
>
> Key: HDDS-2324
> URL: https://issues.apache.org/jira/browse/HDDS-2324
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: Ozone Manager
> Reporter: Rajesh Balamohan
> Priority: Major
> Attachments: om_lock_100_percent_read_benchmark.svg,
> om_lock_reader_and_writer_workload.svg
>
>
> OM has a reentrant RW lock. With 100% read or 100% write benchmarks, it
> performs reasonably well. There is already a ticket to optimize the write
> codepath (as it incurs reading from the DB for key checks).
> However, when a small write workload (e.g. 3-5 threads) is added to the
> running read benchmark, throughput suffers significantly, because the reader
> threads get blocked often. I have observed around 10x lower throughput (the
> 100% read benchmark was running at 12,000 TPS; with a couple of writer
> threads added, it drops to 1,200-1,800 TPS).
> 1. Instead of a single write lock, one option could be to scale out the
> write lock based on the number of cores available on the system and acquire
> the relevant lock by hashing the key.
> 2. Another option is to explore whether we can make use of the StampedLock
> introduced in JDK 8, which scales well when multiple readers and writers are
> present. But it is not a reentrant lock, so we need to explore whether it
> can be an option or not.
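Option 1 above (scaling out the write lock by hashing the key) is commonly
called lock striping. A minimal sketch, assuming a hypothetical
`StripedKeyLock` class and a fixed stripe count (neither is part of the
current OM code); Guava's `Striped` offers a production version of the same
idea:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of option 1: a fixed pool of read/write locks,
// selected by hashing the key, so writers on different keys no longer
// block readers or writers of unrelated keys.
public class StripedKeyLock {
    private final ReentrantReadWriteLock[] stripes;

    public StripedKeyLock(int numStripes) {
        stripes = new ReentrantReadWriteLock[numStripes];
        for (int i = 0; i < numStripes; i++) {
            stripes[i] = new ReentrantReadWriteLock();
        }
    }

    private ReentrantReadWriteLock stripeFor(String key) {
        // Spread the high bits into the low bits before taking the modulus,
        // so similar keys do not all land on the same stripe.
        int h = key.hashCode();
        h ^= (h >>> 16);
        return stripes[Math.floorMod(h, stripes.length)];
    }

    public void readLock(String key)    { stripeFor(key).readLock().lock(); }
    public void readUnlock(String key)  { stripeFor(key).readLock().unlock(); }
    public void writeLock(String key)   { stripeFor(key).writeLock().lock(); }
    public void writeUnlock(String key) { stripeFor(key).writeLock().unlock(); }
}
```

With, say, a stripe count tied to the core count, a writer only blocks the
readers whose keys hash to the same stripe; the trade-off is that operations
spanning multiple keys must acquire stripes in a consistent order to avoid
deadlock.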
>
>
>
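For option 2, the main attraction of `java.util.concurrent.locks.StampedLock`
is its optimistic read mode: a reader validates its stamp after reading and
only falls back to a blocking read lock if a writer intervened, so uncontended
reads stay cheap. A minimal sketch of the pattern (the `OptimisticCounter`
class and its field are illustrative, not OM code):

```java
import java.util.concurrent.locks.StampedLock;

// Hypothetical sketch of option 2: StampedLock's optimistic read mode.
// Note that StampedLock is NOT reentrant: re-acquiring the write lock
// on the same thread deadlocks, which is the open question for OM.
public class OptimisticCounter {
    private final StampedLock lock = new StampedLock();
    private long value;

    public void increment() {
        long stamp = lock.writeLock();   // exclusive, non-reentrant
        try {
            value++;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public long get() {
        long stamp = lock.tryOptimisticRead();  // no blocking, just a stamp
        long v = value;
        if (!lock.validate(stamp)) {
            // A writer intervened during the optimistic read;
            // retry under a pessimistic read lock.
            stamp = lock.readLock();
            try {
                v = value;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return v;
    }
}
```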
--
This message was sent by Atlassian Jira
(v8.3.4#803005)