[
https://issues.apache.org/jira/browse/HBASE-26018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Andrew Kyle Purtell updated HBASE-26018:
----------------------------------------
    Fix Version/s: (was: 2.4.5)
                   (was: 2.3.6)
                   2.4.6
> Perf improvement in L1 cache
> ----------------------------
>
> Key: HBASE-26018
> URL: https://issues.apache.org/jira/browse/HBASE-26018
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 3.0.0-alpha-1, 2.3.5, 2.4.4
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.6
>
> Attachments: computeIfPresent.png
>
>
> After HBASE-25698 is in, all L1 caching strategies call buffer.retain() while
> retrieving cached blocks, so that refCount is maintained atomically
> (via CHM#computeIfPresent). Retaining the refCount this way is turning out to
> be somewhat expensive. With the computeIfPresent API, CHM takes a
> coarse-grained segment lock, so even though our computation is trivial (we
> just call the block's retain API), it blocks other update operations for keys
> that hash to the locked bucket. computeIfPresent also keeps showing up on
> flame graphs (one is attached). Particularly when we see aggressive cache
> hits for meta blocks (with the majority of blocks in cache), we would want to
> get away from this coarse-grained locking.
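The locking cost described above can be sketched as follows. This is a minimal illustration with hypothetical names (Block, getBlock), not the actual HBase code: the retain happens inside CHM#computeIfPresent, whose remapping function runs while the map holds the lock on the key's bin, so unrelated updates hashing to the same bin must wait.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class RetainUnderLock {
    // Hypothetical stand-in for a cached block with a reference count.
    static final class Block {
        final AtomicInteger refCount = new AtomicInteger(1);
        Block retain() { refCount.incrementAndGet(); return this; }
    }

    static final ConcurrentHashMap<String, Block> cache = new ConcurrentHashMap<>();

    static Block getBlock(String key) {
        // retain() runs inside the bin lock: lookup + retain is atomic,
        // but concurrent puts/removes for keys in the same bin are blocked
        // for the duration of the remapping function.
        return cache.computeIfPresent(key, (k, b) -> b.retain());
    }
}
```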
> One suggestion that came up while reviewing PR#3215 is to treat the cache
> read API as an optimistic read: perform a lockless get, attempt to retain the
> block, and if retain fails because the refCount has already dropped, catch
> the respective exception and treat the read as a cache miss. This should
> allow us to move to a lockless get API.
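The suggested optimistic-read alternative might look like the sketch below. Again these are illustrative names, not the code from PR#3215, and a simple IllegalStateException stands in for whatever refCount exception the real retain path throws: a plain CHM#get takes no bin lock, and if the block was concurrently evicted and its refCount already hit zero, the failed retain is simply reported as a miss.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticRead {
    // Hypothetical block: retain() fails once the refCount has reached zero.
    static final class Block {
        final AtomicInteger refCount = new AtomicInteger(1);
        Block retain() {
            for (;;) {
                int rc = refCount.get();
                if (rc == 0) throw new IllegalStateException("already released");
                if (refCount.compareAndSet(rc, rc + 1)) return this;
            }
        }
        void release() { refCount.decrementAndGet(); }
    }

    static final ConcurrentHashMap<String, Block> cache = new ConcurrentHashMap<>();

    static Block getBlock(String key) {
        Block b = cache.get(key);   // lockless read, no bin lock held
        if (b == null) return null; // genuine miss
        try {
            return b.retain();      // optimistic: may race with eviction
        } catch (IllegalStateException raced) {
            return null;            // lost the race -> treat as a cache miss
        }
    }
}
```

The trade-off is that a reader racing with eviction pays a cache miss instead of blocking, which is acceptable for a read path that already tolerates misses.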
--
This message was sent by Atlassian Jira
(v8.3.4#803005)