[ https://issues.apache.org/jira/browse/HBASE-26018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17385726#comment-17385726 ]

Andrew Kyle Purtell commented on HBASE-26018:
---------------------------------------------

The benefit is clear, but per the discussion on the PR and review of the code, 
we can leak from the BC if we get this wrong. Let’s get some experience and 
measurement with the change under different workloads, and if it checks out 
it would be good to commit it back to the releasing branches then. This could 
also motivate a 2.5 release if we keep the change there and up. 

> Perf improvement in L1 cache
> ----------------------------
>
>                 Key: HBASE-26018
>                 URL: https://issues.apache.org/jira/browse/HBASE-26018
>             Project: HBase
>          Issue Type: Improvement
>    Affects Versions: 3.0.0-alpha-1, 2.3.5, 2.4.4
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>             Fix For: 2.5.0, 3.0.0-alpha-2
>
>         Attachments: computeIfPresent.png
>
>
> After HBASE-25698 is in, all L1 caching strategies perform buffer.retain() in 
> order to maintain the refCount atomically while retrieving cached blocks 
> (CHM#computeIfPresent). Retaining the refCount this way is turning out to be 
> fairly expensive. With the computeIfPresent API, CHM takes a coarse-grained 
> segment lock, so even though our computation is trivial (we just call the 
> block retain API), it blocks other update operations for keys in the locked 
> bucket. computeIfPresent also keeps showing up on flame graphs (one is 
> attached). Especially when we see aggressive cache hits for meta blocks 
> (with the majority of blocks in cache), we want to get away from this 
> coarse-grained locking.
> One suggestion that came up while reviewing PR#3215 is to treat the cache 
> read API as an optimistic read: handle refCount problems from block retain 
> by catching the corresponding exception and treating it as a cache miss. 
> This should let us move to a lockless get API, as sketched below.
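> A rough sketch of that optimistic read, reusing the stand-in types from the 
> sketch above (the exact exception type here is an assumption made for 
> illustration):
> {code:java}
> import java.util.concurrent.ConcurrentHashMap;
> import io.netty.util.IllegalReferenceCountException;
>
> class OptimisticReadSketch {
>   private final ConcurrentHashMap<String, RefCountedBlock> map =
>       new ConcurrentHashMap<>();
>
>   RefCountedBlock getBlock(String key) {
>     RefCountedBlock block = map.get(key);   // plain get(): no bin lock taken
>     if (block == null) {
>       return null;                          // genuine cache miss
>     }
>     try {
>       block.retain();                       // may race with a concurrent release
>     } catch (IllegalReferenceCountException e) {
>       return null;                          // block freed under us: treat as a miss
>     }
>     return block;
>   }
> }
> {code}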



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
