[
https://issues.apache.org/jira/browse/HADOOP-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12565417#action_12565417
]
Jim Kellerman commented on HADOOP-2636:
---------------------------------------
Billy Pearson - 03/Feb/08 09:56 PM:
> are we losing the hbase.hregion.memcache.block.multiplier option in this
> patch? Before I applied the
> patch I was seeing blocking messages logged while flushes were happening, but
> not after I applied
> the patch.
Yes, in this version of the patch, blocking of updates was removed because all
the memcache size accounting is done at the store level rather than the region
level.
What needs to be done is to bubble up store memcache size to the region level
so that we can implement blocking based on the largest store memcache and not
just the sum of the size of all the updates to the region.
The question is, does this patch work without the blocking or does it need to
be put back?
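(A minimal sketch of the bubble-up idea described above. All names here are hypothetical, not from the patch: each store reports its memcache size to the region, and the region blocks updates when the *largest* store memcache crosses a threshold, rather than the sum of all updates.)

```java
import java.util.HashMap;
import java.util.Map;

class RegionMemcacheAccounting {
    // Per-store memcache sizes in bytes (store name -> size); illustrative only.
    private final Map<String, Long> storeSizes = new HashMap<>();
    // Blocking threshold, e.g. flush size * hbase.hregion.memcache.block.multiplier.
    private final long blockingThreshold;

    RegionMemcacheAccounting(long blockingThreshold) {
        this.blockingThreshold = blockingThreshold;
    }

    // Called by a store after it applies an update, bubbling its size up
    // to the region level.
    void reportStoreSize(String store, long bytes) {
        storeSizes.put(store, bytes);
    }

    // Block updates when the largest single store memcache crosses the
    // threshold -- not when the sum of all store memcaches does.
    boolean shouldBlockUpdates() {
        long largest = 0L;
        for (long size : storeSizes.values()) {
            largest = Math.max(largest, size);
        }
        return largest >= blockingThreshold;
    }
}
```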
> [hbase] Make cache flush triggering less simplistic
> ---------------------------------------------------
>
> Key: HADOOP-2636
> URL: https://issues.apache.org/jira/browse/HADOOP-2636
> Project: Hadoop Core
> Issue Type: Improvement
> Components: contrib/hbase
> Affects Versions: 0.16.0
> Reporter: stack
> Assignee: Jim Kellerman
> Fix For: 0.17.0
>
> Attachments: patch.txt, patch.txt, patch.txt, patch.txt, patch.txt,
> patch.txt, patch.txt, patch.txt, patch.txt
>
>
> When flusher runs -- its triggered when the sum of all Stores in a Region > a
> configurable max size -- we flush all Stores though a Store memcache might
> have but a few bytes.
> I would think Stores should only dump their memcache to disk if they have
> some substance.
> The problem becomes more acute, the more families you have in a Region.
> Possible behaviors would be to dump the biggest Store only, or only those
> Stores > 50% of max memcache size. Behavior would vary depending on the
> prompt that provoked the flush. Would also log why the flush is running:
> optional or > max size.
> This issue comes out of HADOOP-2621.
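(A sketch of one of the selective-flush behaviors proposed in the description: flush only the Stores holding more than 50% of the max memcache size, instead of flushing every Store in the Region. The class and method names are hypothetical, not taken from the patch.)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class SelectiveFlushPolicy {
    // Pick the stores worth flushing: only those whose memcache holds more
    // than half of the configured max memcache size. Stores with only a few
    // bytes are skipped rather than flushed along with the rest.
    static List<String> storesToFlush(Map<String, Long> storeSizes,
                                      long maxMemcacheSize) {
        List<String> picks = new ArrayList<>();
        for (Map.Entry<String, Long> e : storeSizes.entrySet()) {
            if (e.getValue() > maxMemcacheSize / 2) {
                picks.add(e.getKey());
            }
        }
        return picks;
    }
}
```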
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.