[
https://issues.apache.org/jira/browse/HBASE-19153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16235688#comment-16235688
]
Anoop Sam John commented on HBASE-19153:
----------------------------------------
What is the issue? When the block's size > maxBlockSize, we want to return
without caching to the LRU cache. The extra check is only there to limit the
WARN logs: we do not log on every occurrence, only on every 50th (i.e. 2% of
the total).
if (stats.failInsert() % 50 != 0) {
  return;
}
With the proposed change above, execution falls through after the log, so the
oversized block would actually be cached in the 2% of cases where we warn.
That is not what we want.
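The intended behavior can be sketched as a small standalone class: an oversized block is always rejected, and the failure counter only throttles how often the rejection is logged. This is a minimal sketch, not the HBase implementation; the `maxBlockSize` value and the `warnings`/`cached` counters (stand-ins for `LOG.warn` and the real cache insert) are illustrative.

```java
// Sketch of the intended logic: reject oversized blocks unconditionally,
// and use the failure counter only to rate-limit the WARN log (2%).
public class CacheSketch {
    private long failInsertCount = 0;
    private final long maxBlockSize = 16L * 1024 * 1024; // illustrative limit
    public long warnings = 0; // stand-in for LOG.warn(...) calls
    public long cached = 0;   // stand-in for actual cache inserts

    /** Returns true if the block was cached. */
    public boolean cacheBlock(long heapSize) {
        if (heapSize > maxBlockSize) {
            // Log only every 50th failed insert, but always reject.
            if (failInsertCount++ % 50 == 0) {
                warnings++;
            }
            return false;
        }
        cached++;
        return true;
    }
}
```

With this shape, 100 oversized inserts produce 100 rejections but only 2 warnings, matching the "we log 2%" comment, and no oversized block ever reaches the cache.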
> LruBlockCache cache too big blocks logic error
> ----------------------------------------------
>
> Key: HBASE-19153
> URL: https://issues.apache.org/jira/browse/HBASE-19153
> Project: HBase
> Issue Type: Bug
> Components: BlockCache
> Affects Versions: 2.0.0-alpha-3
> Reporter: Zhang Quanjin
>
> In the latest version of LruBlockCache, I found that the code logic for
> caching too-big blocks is inconsistent with its comment.
> If we follow the comment, the code should look like this:
> if (buf.heapSize() > maxBlockSize) {
>   // If there are a lot of blocks that are too
>   // big this can make the logs way too noisy.
>   // So we log 2%
>   if (stats.failInsert() % 50 != 0) {
>     return;
>   }
>   LOG.warn("Trying to cache too large a block "
>       + cacheKey.getHfileName() + " @ "
>       + cacheKey.getOffset()
>       + " is " + buf.heapSize()
>       + " which is larger than " + maxBlockSize);
> }
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)