[
https://issues.apache.org/jira/browse/PHOENIX-1304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172674#comment-15172674
]
Lars Hofhansl commented on PHOENIX-1304:
----------------------------------------
I did some perf testing and reported on my blog:
http://hadoop-hbase.blogspot.com/2016/02/hbase-compression-vs-blockencoding_17.html
The main outcome is that for scans the block cache almost never helps. We can
apply a very simple rule: if the query will likely touch more than N bytes,
disable HBase block caching for it. N can be small, such as 1 GB; even just
1 MB is fine. The block cache mostly helps point gets and very small scans,
where the seek time (disk + HBase) is significant compared to the throughput.
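
As a very rough sketch of that rule (not the actual Phoenix implementation:
the byte estimate and the threshold name are hypothetical placeholders, while
Scan#setCacheBlocks is the real HBase call for keeping scanned blocks out of
the block cache):

{code:java}
import org.apache.hadoop.hbase.client.Scan;

public class ScanCachePolicy {
    // Hypothetical threshold: disable block caching for scans expected to
    // read more than this many bytes (1 MB here; anything up to ~1 GB works).
    private static final long NO_CACHE_THRESHOLD_BYTES = 1024L * 1024L;

    /**
     * estimatedBytes would come from Phoenix stats/guideposts in a real
     * implementation; here it is simply passed in by the caller.
     */
    public static Scan applyCachePolicy(Scan scan, long estimatedBytes) {
        if (estimatedBytes > NO_CACHE_THRESHOLD_BYTES) {
            // Blocks read by this scan will not be added to the block cache,
            // so large scans cannot evict the working set used by point gets.
            scan.setCacheBlocks(false);
        }
        return scan;
    }
}
{code}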
> Auto-detect if we should pass the NO_CACHE hint
> -----------------------------------------------
>
> Key: PHOENIX-1304
> URL: https://issues.apache.org/jira/browse/PHOENIX-1304
> Project: Phoenix
> Issue Type: Improvement
> Reporter: Lars Hofhansl
> Assignee: Samarth Jain
> Priority: Minor
> Attachments: wip.patch
>
>
> Most databases by default avoid filling the block cache during full scans.
> Typically, either stats are consulted to decide whether a full scan should
> fill the block cache, or a subset of the block cache is dedicated to full
> scans and used like a ring buffer.
> We already have the "NO_CACHE" hint, but we can do better.
> In Phoenix we could detect scans that use neither any part of the row key
> nor any index, and then optionally:
> # avoid using the block cache
> # throw a "slow query" exception (this is especially useful for large data
> sets, where we'd rather fail than go into nirvana for an hour)
> (both configurable - either globally or per table, connection, or query; a
> rough sketch follows below)
> Skip scans represent an interesting middle ground: if we skip many blocks
> between rows, we'd definitely benefit from the block cache; if not, we have
> a case similar to a full scan.
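
The detect-and-react idea described above could look roughly like the sketch
below. The configuration flags and the isFullScan classification are
hypothetical; only Scan#setCacheBlocks and SQLException are real APIs.

{code:java}
import java.sql.SQLException;
import org.apache.hadoop.hbase.client.Scan;

public class FullScanPolicy {
    // Hypothetical configuration knobs; real ones would live in the Phoenix
    // config and be settable globally, per table, per connection, or per query.
    private final boolean disableBlockCacheOnFullScan;
    private final boolean rejectFullScans;

    public FullScanPolicy(boolean disableBlockCacheOnFullScan, boolean rejectFullScans) {
        this.disableBlockCacheOnFullScan = disableBlockCacheOnFullScan;
        this.rejectFullScans = rejectFullScans;
    }

    /**
     * isFullScan is assumed to be decided by the query planner: the query
     * uses neither any leading part of the row key nor any index.
     */
    public void apply(Scan scan, boolean isFullScan) throws SQLException {
        if (!isFullScan) {
            return; // point gets and keyed scans keep using the block cache
        }
        if (rejectFullScans) {
            // "Slow query" case: fail fast rather than churn for an hour.
            throw new SQLException("Full table scan rejected by policy");
        }
        if (disableBlockCacheOnFullScan) {
            // Same effect as the NO_CACHE hint: bypass the block cache.
            scan.setCacheBlocks(false);
        }
    }
}
{code}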
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)