[ https://issues.apache.org/jira/browse/HBASE-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13805482#comment-13805482 ]

Jean-Daniel Cryans commented on HBASE-9840:
-------------------------------------------

bq. singleFactor = 0.25, multiFactor = 0.5, memoryFactor = 0.25 are all 
hard-coded. They must be configurable.

Meh, +0, see the following.
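
If we did make them configurable, a minimal sketch could look like the following, 
with hypothetical property names (the defaults match the current hard-coded values):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class LruFactorConfig {
  // Hypothetical keys, just for illustration; the actual names would need agreeing on.
  static final String SINGLE_KEY = "hbase.lru.blockcache.single.percentage";
  static final String MULTI_KEY  = "hbase.lru.blockcache.multi.percentage";
  static final String MEMORY_KEY = "hbase.lru.blockcache.memory.percentage";

  static float[] readFactors(Configuration conf) {
    float single = conf.getFloat(SINGLE_KEY, 0.25f);
    float multi  = conf.getFloat(MULTI_KEY, 0.50f);
    float memory = conf.getFloat(MEMORY_KEY, 0.25f);
    // The factors are fractions of the whole cache, so they must sum to 1.
    if (Math.abs(single + multi + memory - 1.0f) > 0.001f) {
      throw new IllegalArgumentException("single + multi + memory factors must sum to 1.0");
    }
    return new float[] { single, multi, memory };
  }
}
{code}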

bq. Default values are probably not optimal at all. If it is an attempt to mimic an 
LRU2Q cache, then the optimal split for the first insert is closer to 0.75 (I think I 
read it on Facebook engineering)

From its javadoc:

{quote}
Each priority will retain close to its maximum size, however, if any priority 
is not using its entire chunk the others are able to grow beyond their chunk 
size.
{quote}

So if you aren't using the in-memory part, you effectively dedicate 75% of your 
BC to the multi priority.
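
To make that concrete, here is a toy sketch (not the real LruBlockCache eviction code) 
of eviction that always frees from whichever priority is furthest over its chunk; with 
the in-memory bucket empty, multi ends up holding well beyond its 50% chunk without 
ever being the eviction target:

{code:java}
import java.util.PriorityQueue;

public class EvictionSketch {
  static class Bucket {
    final String name;
    long used;         // bytes this priority currently holds
    final long chunk;  // bytes this priority is "entitled" to
    Bucket(String name, long used, long chunk) {
      this.name = name; this.used = used; this.chunk = chunk;
    }
    long overflow() { return used - chunk; }
  }

  public static void main(String[] args) {
    long cacheSize = 1000;
    Bucket single = new Bucket("single", 250, (long) (cacheSize * 0.25));
    Bucket multi  = new Bucket("multi",  800, (long) (cacheSize * 0.50));
    Bucket memory = new Bucket("memory",   0, (long) (cacheSize * 0.25)); // unused

    long toFree = (single.used + multi.used + memory.used) - cacheSize;
    // Always evict from the bucket that is the furthest over its chunk.
    PriorityQueue<Bucket> byOverflow =
        new PriorityQueue<>((a, b) -> Long.compare(b.overflow(), a.overflow()));
    byOverflow.add(single); byOverflow.add(multi); byOverflow.add(memory);

    while (toFree > 0) {
      Bucket worst = byOverflow.poll();
      long freed = Math.min(toFree, Math.max(worst.overflow(), 1));
      worst.used -= freed;
      toFree -= freed;
      byOverflow.add(worst);
    }
    // Prints single=250 multi=750 memory=0: multi keeps 75% of the cache
    // because the unused in-memory chunk was effectively absorbed.
    System.out.printf("single=%d multi=%d memory=%d%n", single.used, multi.used, memory.used);
  }
}
{code}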

bq. Eviction does not follow LRU2Q at all

2Q suffers from different problems, mainly the tuning of Kin and Kout.

The "best" page replacement policy is one that takes into account age and 
frequency of access, but we can't use ARC.
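
For context, the 2Q structure in a nutshell (a generic toy sketch, nothing in HBase): 
Kin bounds the FIFO of first-time accesses and Kout bounds the ghost queue of recently 
evicted keys, and picking those two sizes for a given workload is the tuning problem 
I mean:

{code:java}
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.function.Supplier;

public class TwoQSketch<K, V> {
  private final int kIn, kOut, mainCap;
  private final LinkedHashMap<K, V> a1in = new LinkedHashMap<>();                // FIFO of first-time accesses
  private final LinkedHashSet<K> a1out = new LinkedHashSet<>();                  // ghost keys evicted from A1in
  private final LinkedHashMap<K, V> am = new LinkedHashMap<>(16, 0.75f, true);   // main LRU (access-ordered)

  public TwoQSketch(int kIn, int kOut, int mainCap) {
    this.kIn = kIn; this.kOut = kOut; this.mainCap = mainCap;
  }

  /** Reference a key, loading it on a miss; roughly the 2Q "on reference" rule. */
  public V access(K key, Supplier<V> loader) {
    if (am.containsKey(key)) {
      return am.get(key);                 // hit in Am: get() bumps it to MRU
    }
    if (a1in.containsKey(key)) {
      return a1in.get(key);               // hit in A1in: stays put, no promotion yet
    }
    V value = loader.get();
    if (a1out.remove(key)) {
      am.put(key, value);                 // recently evicted from A1in: promote to Am
    } else {
      a1in.put(key, value);               // cold miss: enters the FIFO
    }
    evict();
    return value;
  }

  private void evict() {
    while (a1in.size() > kIn) {           // FIFO eviction; the key is remembered in A1out
      K oldest = a1in.keySet().iterator().next();
      a1in.remove(oldest);
      a1out.add(oldest);
    }
    while (a1out.size() > kOut) {
      a1out.remove(a1out.iterator().next());
    }
    while (am.size() > mainCap) {         // plain LRU eviction from Am
      am.remove(am.keySet().iterator().next());
    }
  }
}
{code}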

> Large scans and BlockCache evictions problems
> ---------------------------------------------
>
>                 Key: HBASE-9840
>                 URL: https://issues.apache.org/jira/browse/HBASE-9840
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>
> I just ran into a scenario that baffled me at first, but after some reflection 
> makes sense. I ran a very large scan that filled up most of the block cache 
> with my scan's data. I ran that scan a few times.
> Then I ran a much smaller scan, and this scan will never get all its blocks 
> cached if it does not fit entirely into the remaining BlockCache, regardless 
> of how often I run it!
> The reason is that the blocks of the first large scan were all promoted. 
> Since the 2nd scan did not fully fit into the cache, all its blocks are 
> round-robin evicted as I rerun the scan. Thus those blocks will never get 
> accessed more than once before they get evicted again.
> Since promoted blocks are not demoted, the large scan's blocks will never be 
> evicted unless we have another small enough scan/get that can promote its 
> blocks.
> Not sure what the proper solution is, but it seems only an LRU cache that can 
> expire blocks over time would solve this.
> Granted, this is a pretty special case.
> Edit: My usual spelling digressions.
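
(For reference, the promotion behaviour described above boils down to roughly the 
following; a toy sketch of the idea, not the actual LruBlockCache code.)

{code:java}
enum Priority { SINGLE, MULTI, MEMORY }

class CachedBlockSketch {
  final String key;
  Priority priority = Priority.SINGLE;  // first insert: single-access priority
  long lastAccessTime;

  CachedBlockSketch(String key) { this.key = key; }

  void access(long now) {
    this.lastAccessTime = now;
    if (priority == Priority.SINGLE) {
      priority = Priority.MULTI;        // a second access promotes the block...
    }
    // ...and nothing ever demotes MULTI back to SINGLE, which is why the
    // large scan's blocks keep their promoted status until something else
    // manages to push them out.
  }
}
{code}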



--
This message was sent by Atlassian JIRA
(v6.1#6144)
