[
https://issues.apache.org/jira/browse/HDDS-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Duong updated HDDS-8765:
------------------------
Description:
h2. Problem
Today, OM manages resource (volume, bucket) locks through LockManager, a
component that wraps a lock table (a ConcurrentHashMap) storing all active
locks. Locks are dynamically allocated and destroyed in the lock table based on
runtime needs, so for every allocated lock a usage count must be kept up to
date to decide when the lock is no longer referenced.
The current performance of LockManager is limited by the cost of maintaining
individual lock liveness, i.e., counting how many threads concurrently use a
lock and removing it from the lock table when it is no longer used.
This cost mainly comes from the need to *synchronize* all concurrent access to
every lock (technically, to a ConcurrentHashMap bin):
# When obtaining the lock: create a lock object if it does not yet exist in
the table and increase the lock's usage count.
# When releasing the lock: decrease the usage count and remove the lock when
the count reaches 0.
!Screenshot 2023-06-05 at 4.31.14 PM.png|width=764,height=243!
This synchronization happens internally inside ConcurrentHashMap's two methods:
_compute_ and _computeIfPresent_.
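For illustration, here is a minimal sketch of this reference-counting pattern;
the class and method names are hypothetical, not the actual LockManager code:
{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of a reference-counted lock table, not the Ozone code.
public class RefCountedLockTable {

  private static final class Entry {
    final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    int refCount; // mutated only inside compute/computeIfPresent, under the bin lock
  }

  private final ConcurrentHashMap<String, Entry> table = new ConcurrentHashMap<>();

  /** Obtain: create the lock entry if absent and bump its usage count. */
  public ReentrantReadWriteLock acquire(String key) {
    return table.compute(key, (k, entry) -> {
      if (entry == null) {
        entry = new Entry(); // first user allocates the lock
      }
      entry.refCount++;
      return entry;
    }).lock;
  }

  /** Release: drop the usage count; remove the entry once nobody references it. */
  public void release(String key) {
    table.computeIfPresent(key, (k, entry) -> {
      entry.refCount--;
      return entry.refCount == 0 ? null : entry; // returning null removes the mapping
    });
  }
}
{code}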
This synchronization creates a bottleneck when multiple threads try to obtain
and release the same lock, even for read locks.
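A hypothetical usage of the sketch above makes the contention concrete: even
though the read lock itself admits any number of concurrent readers, every
acquire/release pair still funnels through _compute_/_computeIfPresent_ on the
same map bin.
{code:java}
// Hypothetical reader hot path: many threads reading keys under one bucket.
// The ReentrantReadWriteLock admits concurrent readers, but every
// acquire()/release() pair still serializes on the same ConcurrentHashMap bin.
RefCountedLockTable lockTable = new RefCountedLockTable();

ReentrantReadWriteLock bucketLock = lockTable.acquire("/vol1/bucket1");
bucketLock.readLock().lock();
try {
  // ... read key metadata ...
} finally {
  bucketLock.readLock().unlock();
  lockTable.release("/vol1/bucket1");
}
{code}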
h2. Experiment
I ran an experiment of pure OM key reads in the same buckets with 100 reader
threads.
h2. Proposed solution
> OM lock performance improvement
> -------------------------------
>
> Key: HDDS-8765
> URL: https://issues.apache.org/jira/browse/HDDS-8765
> Project: Apache Ozone
> Issue Type: Improvement
> Reporter: Duong
> Priority: Major
> Attachments: Screenshot 2023-06-05 at 4.31.14 PM.png
>
>