[
https://issues.apache.org/jira/browse/HBASE-4018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13054172#comment-13054172
]
Jonathan Gray commented on HBASE-4018:
--------------------------------------
bq. in many cases the CPU overhead dwarfs (or should) the extra RAM consumption
from uncompressing into heap space.
This is not necessarily the case. Many applications see a 4-5X compression ratio,
which means being able to increase effective cache capacity by that much. Some
applications can also be CPU bound, others might be IO bound, or they might
actually be IO bound because they are RAM bound (can't fit the working set in
memory). It's hard to generalize here, I think.
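
As a rough back-of-envelope illustration of the capacity argument (the numbers below are assumptions, not measurements from this ticket):

{code:java}
// Back-of-envelope only: assumed numbers, not from this ticket.
public class CacheCapacityEstimate {
  public static void main(String[] args) {
    double blockCacheGb = 4.0;       // assumed size of the block cache
    double compressionRatio = 4.5;   // the "4-5X" ratio mentioned above
    double logicalGbCached = blockCacheGb * compressionRatio;
    System.out.printf("Caching compressed blocks: ~%.0f GB of logical data "
        + "fits in a %.0f GB cache%n", logicalGbCached, blockCacheGb);
    // The trade-off: every hit now pays a decompression, so whether this wins
    // depends on whether the workload is CPU bound or IO/RAM bound.
  }
}
{code}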
bq. Perhaps it's easily offset with a less intensive comp algorithm.
That's one of the major motivations for an HBase-specific "prefix" compression
algorithm.
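
To make the "prefix" idea concrete, here is a minimal sketch of front-encoding sorted keys; it only illustrates the concept and is not HBase's actual encoder:

{code:java}
import java.util.Arrays;
import java.util.List;

// Sketch of front (prefix) encoding for sorted keys: each key is stored as the
// length of the prefix it shares with the previous key plus the new suffix.
public class PrefixEncodeSketch {
  static void encode(List<byte[]> sortedKeys) {
    byte[] prev = new byte[0];
    for (byte[] key : sortedKeys) {
      int common = 0;
      int max = Math.min(prev.length, key.length);
      while (common < max && prev[common] == key[common]) {
        common++;
      }
      byte[] suffix = Arrays.copyOfRange(key, common, key.length);
      // A real encoder would write varint(common), varint(suffix.length), suffix.
      System.out.println("shared=" + common + " suffix=" + new String(suffix));
      prev = key;
    }
  }

  public static void main(String[] args) {
    encode(Arrays.asList("row0001/cf:a".getBytes(),
                         "row0001/cf:b".getBytes(),
                         "row0002/cf:a".getBytes()));
  }
}
{code}

Since adjacent HFile keys are sorted and share long common prefixes, the per-key CPU cost of this kind of encoding is far lower than a general-purpose codec.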
> Attach memcached as secondary block cache to regionserver
> ---------------------------------------------------------
>
> Key: HBASE-4018
> URL: https://issues.apache.org/jira/browse/HBASE-4018
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: Li Pi
> Assignee: Li Pi
>
> Currently, block caches are limited by heap size, which is limited by garbage
> collection times in Java.
> We can get around this by using memcached w/JNI as a secondary block cache.
> This should be faster than the Linux file system's caching, and allow us to
> very quickly gain access to a high-quality slab-allocated cache.
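
A minimal sketch of the lookup order such a secondary cache implies; the SecondaryCache interface and the LRU sizing below are illustrative placeholders, not HBase's BlockCache API or a real memcached/JNI client:

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a two-tier lookup: a small on-heap LRU in front of a larger
// off-heap/memcached tier. Names below are illustrative placeholders.
public class TwoTierBlockCacheSketch {
  /** Placeholder for whatever client talks to memcached (e.g. over JNI). */
  interface SecondaryCache {
    byte[] get(String blockKey);
    void put(String blockKey, byte[] block);
  }

  private static final int ON_HEAP_BLOCKS = 1024;  // assumed on-heap capacity

  private final Map<String, byte[]> onHeap =
      new LinkedHashMap<String, byte[]>(ON_HEAP_BLOCKS, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
          return size() > ON_HEAP_BLOCKS;           // access-order LRU eviction
        }
      };
  private final SecondaryCache secondary;

  TwoTierBlockCacheSketch(SecondaryCache secondary) {
    this.secondary = secondary;
  }

  byte[] getBlock(String blockKey) {
    byte[] block = onHeap.get(blockKey);            // 1. cheap on-heap hit
    if (block == null) {
      block = secondary.get(blockKey);              // 2. secondary (memcached) hit
      if (block != null) {
        onHeap.put(blockKey, block);                // promote back on-heap
      }
    }
    return block;                                   // 3. null -> caller reads the HFile
  }
}
{code}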
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira