[ https://issues.apache.org/jira/browse/HBASE-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497544#comment-14497544 ]
Lars Hofhansl commented on HBASE-13472:
---------------------------------------
That'll be interesting. We kinda have to measure the likelihood of some block
ending up in the cache. If there's enough heap, blocks will always end up in
the block cache (IN_MEMORY or not).
> Polish IN_MEMORY table behavior
> -------------------------------
>
> Key: HBASE-13472
> URL: https://issues.apache.org/jira/browse/HBASE-13472
> Project: HBase
> Issue Type: Task
> Reporter: Andrew Purtell
>
> For a long time we've been able to support a mode of operation that keeps as
> much table data as possible in memory, so HBase can be used as an 'in-memory'
> DB with fully durable WAL and write-behind persistence of table data. However:
> - There are several relevant schema options (IN_MEMORY, CACHE_ON_WRITE,
> PREFETCH_BLOCKS_ON_OPEN, block encoding), so setup isn't simple. We should
> have a shortcut that sets all of this up in one place. I'm thinking a utility
> class with static helpers that configure a table descriptor with all of the
> needed bits. (Other ideas?)
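A static helper along these lines might look like the following minimal sketch. The class name `InMemoryTables` and method `forInMemory` are hypothetical, and the HBase 1.x `HColumnDescriptor` API is assumed; this is a configuration fragment, not a proposed final design:

```java
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;

// Hypothetical utility class bundling the schema options named above.
public final class InMemoryTables {
  private InMemoryTables() {}

  public static HTableDescriptor forInMemory(TableName name, byte[] family) {
    HColumnDescriptor cf = new HColumnDescriptor(family);
    cf.setInMemory(true);             // IN_MEMORY: prefer this family's blocks in cache
    cf.setCacheDataOnWrite(true);     // CACHE_ON_WRITE: populate cache as data is flushed
    cf.setPrefetchBlocksOnOpen(true); // PREFETCH_BLOCKS_ON_OPEN: warm cache on region open
    // Block encoding keeps more data resident per cached block; FAST_DIFF is one choice.
    cf.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
    HTableDescriptor td = new HTableDescriptor(name);
    td.addFamily(cf);
    return td;
  }
}
```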
> - We don't have a safety valve. An in-memory table can grow too large, fall
> out of the block cache, and perform poorly without warning. Consider table
> quota support with an option for region size limits as a percentage of total
> heap consumed by regions of a given table. Warn at the soft limit; refuse
> writes over the hard limit.
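The soft/hard limit decision described above reduces to simple arithmetic. A self-contained sketch (the class, enum, and threshold parameters are all hypothetical, not an existing HBase API):

```java
// Hypothetical quota check: compare a table's region footprint against
// soft/hard limits expressed as a percentage of total heap.
public class InMemoryTableQuota {
  enum Action { OK, WARN, REFUSE }

  static Action check(long tableRegionBytes, long totalHeapBytes,
                      double softLimitPct, double hardLimitPct) {
    double pct = 100.0 * tableRegionBytes / totalHeapBytes;
    if (pct >= hardLimitPct) return Action.REFUSE; // over hard limit: refuse writes
    if (pct >= softLimitPct) return Action.WARN;   // over soft limit: warn only
    return Action.OK;
  }
}
```

For example, with a 20% soft limit and a 50% hard limit, a table whose regions consume 30% of heap would trigger a warning, while one at 60% would have writes refused.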
> Follow on work can investigate options hooking up to offheap work. That's not
> in scope here.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)