[
https://issues.apache.org/jira/browse/HBASE-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037824#comment-14037824
]
stack commented on HBASE-11331:
-------------------------------
How feasible would it be to keep count of how many times a block has been
decompressed and, if over a configurable threshold, to shove the decompressed
block back into the block cache in place of the compressed one? We already
count whether a block has been accessed more than once; could we leverage
that fact?
bq. This is related to but less invasive than HBASE-8894.
Would a better characterization be that this is a core piece of HBASE-8894,
only done more in line with how the hbase master branch works now? (HBASE-8894
interjects special-case handling of its L2 cache when reading blocks from
HDFS... This approach makes do without that special interjection.)
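The promotion idea above could be sketched roughly as follows. This is a
minimal illustration, not the actual HBase BlockCache API: all class and
method names here are hypothetical, the threshold is hard-coded where the
real proposal would make it configurable, and a real implementation would
also have to account for heap sizing, eviction, and encrypted blocks.

```java
import java.io.ByteArrayOutputStream;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

/** Hypothetical sketch: cache compressed blocks, decompress on read,
 *  and promote a block to its decompressed form once it proves hot. */
public class LazyBlockCache {
    // In the real proposal this threshold would be configurable.
    private static final int PROMOTE_THRESHOLD = 2;

    private static final class Entry {
        volatile byte[] data;
        volatile boolean compressed;
        final AtomicInteger hits = new AtomicInteger();
        Entry(byte[] data, boolean compressed) {
            this.data = data;
            this.compressed = compressed;
        }
    }

    private final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();

    /** Blocks enter the cache in their on-disk (compressed) form. */
    public void put(String key, byte[] compressedBlock) {
        cache.put(key, new Entry(compressedBlock, true));
    }

    /** Returns the decompressed block, lazily inflating it; once the hit
     *  count crosses the threshold, the decompressed bytes replace the
     *  compressed ones in the cache so later reads skip decompression. */
    public byte[] get(String key) throws DataFormatException {
        Entry e = cache.get(key);
        if (e == null) return null;
        if (!e.compressed) return e.data;   // already promoted
        byte[] plain = inflate(e.data);
        if (e.hits.incrementAndGet() >= PROMOTE_THRESHOLD) {
            e.data = plain;                  // hot block: keep it decompressed
            e.compressed = false;
        }
        return plain;
    }

    /** Helper used to simulate the on-disk compressed form. */
    public static byte[] deflate(byte[] in) {
        Deflater d = new Deflater();
        d.setInput(in);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!d.finished()) out.write(buf, 0, d.deflate(buf));
        d.end();
        return out.toByteArray();
    }

    static byte[] inflate(byte[] in) throws DataFormatException {
        Inflater i = new Inflater();
        i.setInput(in);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!i.finished()) out.write(buf, 0, i.inflate(buf));
        i.end();
        return out.toByteArray();
    }
}
```

This mirrors the existing single-access/multi-access bookkeeping the
comment alludes to: the hit counter is the same kind of signal the LRU
cache already tracks, so promotion could piggyback on it.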
> [blockcache] lazy block decompression
> -------------------------------------
>
> Key: HBASE-11331
> URL: https://issues.apache.org/jira/browse/HBASE-11331
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: Nick Dimiduk
> Assignee: Nick Dimiduk
> Attachments: HBASE-11331.00.patch
>
>
> Maintaining data in its compressed form in the block cache will greatly
> increase our effective blockcache size and should show a meaningful improvement
> in cache hit rates in well designed applications. The idea here is to lazily
> decompress/decrypt blocks when they're consumed, rather than as soon as
> they're pulled off of disk.
> This is related to but less invasive than HBASE-8894.
--
This message was sent by Atlassian JIRA
(v6.2#6252)