[ 
https://issues.apache.org/jira/browse/HBASE-27002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuesen Liang updated HBASE-27002:
---------------------------------
    Description: 
Bucket cache is no longer used as a victim handler for the LRU cache, as of HBASE-19357:
{noformat}
When BC is used, data blocks will be strictly on BC only, whereas index/bloom 
blocks are on LRU L1 cache.
{noformat}
In this situation, the LRU cache's memory is totally on heap.

If the indexes and bloom filters on a region server are large, a correspondingly 
large LRU cache will introduce more GC cost.

So we should add a *configuration for users* to choose a victim handler.

A small LRU cache backed by a big off-heap bucket cache as its victim can reduce GC cost.
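The victim-handler idea can be sketched as follows. This is a minimal, illustrative model only, not HBase's actual LruBlockCache/BucketCache classes: blocks evicted from the small L1 LRU cache are handed to the victim cache instead of being dropped, and L1 misses fall back to the victim.

```python
from collections import OrderedDict

class VictimCache:
    """Stands in for an off-heap bucket cache (hypothetical, illustrative only)."""
    def __init__(self):
        self.blocks = {}
    def cache_block(self, key, block):
        self.blocks[key] = block
    def get_block(self, key):
        return self.blocks.get(key)

class LruBlockCache:
    """Tiny LRU cache that forwards evicted blocks to an optional victim handler."""
    def __init__(self, capacity, victim_handler=None):
        self.capacity = capacity
        self.victim_handler = victim_handler
        self.blocks = OrderedDict()
    def cache_block(self, key, block):
        self.blocks[key] = block
        self.blocks.move_to_end(key)
        while len(self.blocks) > self.capacity:
            # Evict the least-recently-used block; hand it to the victim
            # cache rather than discarding it.
            evicted_key, evicted = self.blocks.popitem(last=False)
            if self.victim_handler is not None:
                self.victim_handler.cache_block(evicted_key, evicted)
    def get_block(self, key):
        if key in self.blocks:
            self.blocks.move_to_end(key)
            return self.blocks[key]
        # On an L1 miss, fall back to the victim cache.
        if self.victim_handler is not None:
            return self.victim_handler.get_block(key)
        return None

victim = VictimCache()
l1 = LruBlockCache(capacity=2, victim_handler=victim)
l1.cache_block("idx-1", b"index block 1")
l1.cache_block("idx-2", b"index block 2")
l1.cache_block("idx-3", b"index block 3")   # evicts idx-1 into the victim cache
assert l1.get_block("idx-1") == b"index block 1"  # served from the victim cache
```

The point of the proposed configuration is exactly this wiring: whether eviction from L1 feeds the bucket cache at all.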

 

For example: a region server with 5 GB of indexes, 80 GB of bloom filters, and 256 GB of DRAM 
could be configured as follows:
{code:xml}
export HBASE_REGIONSERVER_OPTS="-Xms40g -Xmx40g -XX:MaxDirectMemorySize=180g"
{code}

{code:xml}
<property>
    <name>hbase.blockcache.victim.handler</name>
    <value>true</value>
</property>  

<property>
    <name>hfile.block.cache.size</name>
    <value>0.3</value>
</property> 

<property>
    <name>hbase.bucketcache.ioengine</name>
    <value>offheap</value>
</property>
<property>
    <name>hbase.bucketcache.size</name>
    <value>160000</value>
</property>
{code}
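As a back-of-the-envelope check of the example numbers (a sketch, assuming a plain-number {{hbase.bucketcache.size}} is interpreted as MiB):

```python
heap_gb = 40                 # -Xms40g / -Xmx40g
lru_fraction = 0.3           # hfile.block.cache.size
bucketcache_mib = 160000     # hbase.bucketcache.size (plain number => MiB, assumed)
max_direct_gb = 180          # -XX:MaxDirectMemorySize=180g

lru_cache_gb = heap_gb * lru_fraction      # on-heap LRU (L1) cache: 12.0 GB
bucketcache_gb = bucketcache_mib / 1024    # off-heap bucket cache: 156.25 GB

# The bucket cache must fit inside MaxDirectMemorySize, with headroom
# left for other direct-memory users (RPC buffers etc.).
assert bucketcache_gb < max_direct_gb
print(f"on-heap LRU cache:     {lru_cache_gb:.2f} GB")
print(f"off-heap bucket cache: {bucketcache_gb:.2f} GB")
```

With only ~12 GB of L1, most of the ~85 GB of index/bloom blocks would spill into the ~156 GB off-heap victim cache, which is the intent: a small on-heap footprint for GC, with capacity kept off-heap.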

  was:
Bucket cache is no longer used as a victim handler for the LRU cache, as of HBASE-19357:
{noformat}
When BC is used, data blocks will be strictly on BC only, whereas index/bloom 
blocks are on LRU L1 cache.
{noformat}
In this situation, the LRU cache's memory is totally on heap.

If the indexes and bloom filters on a region server are large, a correspondingly 
large LRU cache will introduce more GC cost.

So we should add a *configuration for users* to choose a victim handler.

A small LRU cache backed by a big off-heap bucket cache as its victim can reduce GC cost.

 

For example: a region server with 5 GB of indexes, 80 GB of bloom filters, and 256 GB of DRAM 
could be configured as follows:
{code:xml}
export HBASE_REGIONSERVER_OPTS="-Xms40g -Xmx40g -XX:MaxDirectMemorySize=180g"
{code}

{code:xml}
<property>
    <name>hbase.bucketcache.as.lrucache.victim</name>
    <value>true</value>
</property>  

<property>
    <name>hfile.block.cache.size</name>
    <value>0.3</value>
</property> 

<property>
    <name>hbase.bucketcache.ioengine</name>
    <value>offheap</value>
</property>
<property>
    <name>hbase.bucketcache.size</name>
    <value>160000</value>
</property>
{code}


> Config BucketCache as victim of LRUCache
> ----------------------------------------
>
>                 Key: HBASE-27002
>                 URL: https://issues.apache.org/jira/browse/HBASE-27002
>             Project: HBase
>          Issue Type: Improvement
>          Components: BlockCache, BucketCache
>            Reporter: Xuesen Liang
>            Assignee: Xuesen Liang
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
