[
https://issues.apache.org/jira/browse/HBASE-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969917#action_12969917
]
Kannan Muthukkaruppan commented on HBASE-3327:
----------------------------------------------
I think this scheme helps more than just the ICV case: for example, workloads
that mostly access recent data. You still bound your recovery time by flushing
the memstores into HFiles, but you now continue to keep them around as a
"read cache". [This scheme provides some of the benefits (granted, not all) of
a "scan cache" (as described in the Bigtable paper), but with much less
implementation complexity.]
> For increment workloads, retain memstores in memory after flushing them
> -----------------------------------------------------------------------
>
> Key: HBASE-3327
> URL: https://issues.apache.org/jira/browse/HBASE-3327
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: Karthik Ranganathan
>
> This is an improvement based on our observation of what happens in an
> increment workload. The working set is typically small and is contained in
> the memstores.
> 1. The memstores get flushed because the limit on the number of WAL logs is hit.
> 2. This in turn triggers compactions, which evict blocks from the block cache.
> 3. The memstore flush and the block cache eviction cause disk reads for
> increments that arrive afterwards, because the data is no longer in memory.
> We could solve this elegantly by retaining the memstores AFTER they are
> flushed into files. This would mean we can quickly repopulate the new memstore
> with the working set of data from memory itself, without having to hit disk.
> We can throttle the number of such memstores we retain, or the memory
> allocated to them. In fact, allocating a percentage of the block cache to this
> would give us a huge boost.
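The retention-with-throttling idea described above could be sketched roughly as
follows. This is a hypothetical illustration, not HBase code: the class name
`RetainedSnapshotCache`, its methods, and the use of a plain `NavigableMap` to
stand in for a flushed memstore snapshot are all assumptions made for clarity.
It keeps flushed snapshots newest-first under a byte budget (e.g. a slice of
the block cache allocation) and evicts the oldest when over budget.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.NavigableMap;
import java.util.TreeMap;

// Hypothetical sketch: retain flushed memstore snapshots as an in-memory
// read cache, bounded by a configurable byte budget.
public class RetainedSnapshotCache {
    private static final class Snapshot {
        final NavigableMap<String, byte[]> data;
        final long sizeBytes;
        Snapshot(NavigableMap<String, byte[]> data, long sizeBytes) {
            this.data = data;
            this.sizeBytes = sizeBytes;
        }
    }

    private final Deque<Snapshot> retained = new ArrayDeque<>();
    private final long maxBytes;  // throttle: total memory for retained snapshots
    private long totalBytes = 0;

    public RetainedSnapshotCache(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    // Called after a memstore snapshot has been flushed to an HFile:
    // instead of discarding the snapshot, keep it around for reads.
    public void retain(NavigableMap<String, byte[]> flushed, long sizeBytes) {
        retained.addFirst(new Snapshot(flushed, sizeBytes));
        totalBytes += sizeBytes;
        // Evict oldest retained snapshots until we are back under budget.
        while (totalBytes > maxBytes && retained.size() > 1) {
            Snapshot oldest = retained.removeLast();
            totalBytes -= oldest.sizeBytes;
        }
    }

    // Read path: check newest snapshot first, then older ones. A miss here
    // falls through to the block cache / HFiles on disk as usual.
    public byte[] get(String rowCol) {
        for (Snapshot s : retained) {
            byte[] v = s.data.get(rowCol);
            if (v != null) {
                return v;
            }
        }
        return null;
    }
}
```

The key property is that eviction is driven purely by the byte budget, so the
scheme degrades gracefully: under memory pressure it simply retains fewer
snapshots and reads fall back to disk, exactly as they do today.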
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.