[ 
https://issues.apache.org/jira/browse/HBASE-19357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275045#comment-16275045
 ] 

Chia-Ping Tsai commented on HBASE-19357:
----------------------------------------

The {{ColumnFamilyDescriptor}} is a new class in 2.0, so the deprecation cycle may 
be unnecessary. Ditto for {{ColumnFamilyDescriptorBuilder}}.
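
For context, a minimal sketch of the 2.0 builder API being referred to; the column 
family name and settings below are purely illustrative:

{code:java}
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CfdExample {
  public static void main(String[] args) {
    // Build an immutable ColumnFamilyDescriptor via the 2.0 builder API.
    ColumnFamilyDescriptor cfd = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("cf")) // column family name is illustrative
        .setBlockCacheEnabled(true)
        .setMaxVersions(1)
        .build();
    System.out.println(cfd);
  }
}
{code}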

Our docs have a reference to {{hbase.bucketcache.combinedcache.enabled}}. Could we 
add a note to remind users about this change?
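
For the docs note, a hedged sketch of the settings involved (values are 
illustrative; per this change, {{hbase.bucketcache.combinedcache.enabled}} would 
no longer have any effect):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BucketCacheConfigExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Typical bucket cache setup (values are illustrative).
    conf.set("hbase.bucketcache.ioengine", "offheap");
    conf.setInt("hbase.bucketcache.size", 4096); // MB
    // Pre-2.0, setting this to false made the LRU cache L1 and the bucket cache
    // a pure L2 victim cache; this issue removes that mode, so the key would be
    // ignored going forward.
    conf.setBoolean("hbase.bucketcache.combinedcache.enabled", false);
    System.out.println(conf.get("hbase.bucketcache.ioengine"));
  }
}
{code}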



> Bucket cache no longer L2 for LRU cache
> ---------------------------------------
>
>                 Key: HBASE-19357
>                 URL: https://issues.apache.org/jira/browse/HBASE-19357
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>             Fix For: 2.0.0-beta-1
>
>         Attachments: HBASE-19357.patch, HBASE-19357.patch, 
> HBASE-19357_V2.patch
>
>
> When Bucket cache is used, by default we don't configure it as an L2 cache 
> alone. The default setting is combined mode ON, where data blocks go to the 
> Bucket cache and index/bloom blocks go to the LRU cache. But there is a way to 
> turn this off and make the LRU cache L1 and the Bucket cache a victim handler 
> for L1. It will then be just L2.
> After the off-heap read path optimization, the Bucket cache is no longer slower 
> than L1. We have test results on data sizes from 12 GB. The Alibaba use case 
> was also with 12 GB, and they observed a ~30% QPS improvement over the LRU cache.
> This issue is to remove the option for combined mode = false. So when the Bucket 
> cache is in use, data blocks will go only to it and the LRU cache will get only 
> index/meta/bloom blocks. The Bucket cache will no longer be configured as a 
> victim handler for the LRU cache.
> Note: when an external cache is in use, only there does the L1/L2 arrangement 
> apply. The LRU cache will be L1 and the external cache acts as its L2. That 
> makes full sense.
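
To illustrate the combined-mode behaviour described above, a simplified sketch 
(not the actual CombinedBlockCache implementation): data blocks are routed to the 
bucket cache, while index/meta/bloom blocks stay in the on-heap LRU cache.

{code:java}
import org.apache.hadoop.hbase.io.hfile.BlockCache;
import org.apache.hadoop.hbase.io.hfile.BlockCacheKey;
import org.apache.hadoop.hbase.io.hfile.BlockType.BlockCategory;
import org.apache.hadoop.hbase.io.hfile.Cacheable;

// Simplified sketch of combined-mode block routing.
public class CombinedModeSketch {
  private final BlockCache lruCache;    // on-heap cache for index/meta/bloom
  private final BlockCache bucketCache; // off-heap/file-backed cache for data

  public CombinedModeSketch(BlockCache lruCache, BlockCache bucketCache) {
    this.lruCache = lruCache;
    this.bucketCache = bucketCache;
  }

  public void cacheBlock(BlockCacheKey key, Cacheable block) {
    if (block.getBlockType().getCategory() == BlockCategory.DATA) {
      bucketCache.cacheBlock(key, block); // data blocks -> bucket cache
    } else {
      lruCache.cacheBlock(key, block);    // index/meta/bloom -> LRU cache
    }
  }
}
{code}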



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
