[
https://issues.apache.org/jira/browse/HADOOP-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12564274#action_12564274
]
Billy Pearson commented on HADOOP-2636:
---------------------------------------
I checked the 4th patch above to see if it fixed my problems, but it did not.
Here is a little update on what I am seeing:
The first flush after the server starts happens correctly, but after that it
flushes as fast as it can, over and over.
If I stop the job, the flushes stop at the same time, within a second. But once
a job starts again, the flushes resume almost immediately. It is as if the
flush size has been reset to 0 or something after the first flush.
> [hbase] Make cache flush triggering less simplistic
> ---------------------------------------------------
>
> Key: HADOOP-2636
> URL: https://issues.apache.org/jira/browse/HADOOP-2636
> Project: Hadoop Core
> Issue Type: Improvement
> Components: contrib/hbase
> Affects Versions: 0.16.0
> Reporter: stack
> Assignee: Jim Kellerman
> Fix For: 0.17.0
>
> Attachments: patch.txt, patch.txt, patch.txt, patch.txt
>
>
> When the flusher runs -- it's triggered when the sum of all Stores in a
> Region exceeds a configurable max size -- we flush all Stores, even though a
> Store's memcache might have but a few bytes.
> I would think Stores should only dump their memcache to disk if they have
> some substance.
> The problem becomes more acute the more families you have in a Region.
> Possible behaviors would be to dump the biggest Store only, or only those
> Stores > 50% of the max memcache size. Behavior would vary depending on the
> prompt that provoked the flush. We would also log why the flush is running:
> optional or > max size.
> This issue comes out of HADOOP-2621.
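Below is a minimal sketch of the selection policy proposed in the description
("dump the biggest Store only, or only those Stores > 50% of max memcache
size"). The Store interface and the getMemcacheSize() method here are
assumptions made for illustration; they are not the actual HBase 0.16 API.

    import java.util.ArrayList;
    import java.util.List;

    public class SelectiveFlusher {

        // Hypothetical view of a Store for this sketch; not the real HBase class.
        interface Store {
            long getMemcacheSize();
        }

        /**
         * Choose which Stores to flush once the Region's total memcache size
         * crosses the configured max. Flush only Stores holding more than half
         * of the max ("some substance"); if none qualify, fall back to
         * flushing just the biggest Store.
         */
        static List<Store> chooseStoresToFlush(List<Store> stores, long maxMemcacheSize) {
            List<Store> toFlush = new ArrayList<Store>();
            Store biggest = null;
            for (Store s : stores) {
                if (biggest == null || s.getMemcacheSize() > biggest.getMemcacheSize()) {
                    biggest = s;
                }
                if (s.getMemcacheSize() > maxMemcacheSize / 2) {
                    toFlush.add(s);
                }
            }
            if (toFlush.isEmpty() && biggest != null) {
                toFlush.add(biggest);
            }
            return toFlush;
        }
    }

The flush log line would then record both the prompt that provoked the flush
(optional vs. over max size) and which Stores were selected.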