[
https://issues.apache.org/jira/browse/HBASE-69?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jim Kellerman updated HBASE-69:
-------------------------------
Attachment: patch.txt
Latest patch. It OOMEs during the PerformanceEvaluation sequential write test,
which doesn't make sense, since a sequential write should have at most two
regions open and ~128MB of data cached in memory. Since this patch cannot
handle this simple case, I don't think it should be committed.
We can salvage the changes to HMaster and Migrate since those are needed, but
otherwise, I think this effort should be shut down and tossed.
I have spent too much time on it and it does not bring enough to the table to
be considered for inclusion.
> [hbase] Make cache flush triggering less simplistic
> ---------------------------------------------------
>
> Key: HBASE-69
> URL: https://issues.apache.org/jira/browse/HBASE-69
> Project: Hadoop HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: stack
> Assignee: Jim Kellerman
> Fix For: 0.2.0
>
> Attachments: patch.txt, patch.txt, patch.txt, patch.txt, patch.txt,
> patch.txt, patch.txt, patch.txt, patch.txt, patch.txt, patch.txt, patch.txt,
> patch.txt, patch.txt
>
>
> When flusher runs -- its triggered when the sum of all Stores in a Region > a
> configurable max size -- we flush all Stores though a Store memcache might
> have but a few bytes.
> I would think Stores should only dump their memcache to disk if they have
> some substance.
> The problem becomes more acute, the more families you have in a Region.
> Possible behaviors would be to dump the biggest Store only, or only those
> Stores > 50% of max memcache size. Behavior would vary depending on the
> prompt that provoked the flush. We would also log why the flush is running:
> optional or > max size.
> This issue comes out of HADOOP-2621.
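The selective-flush heuristic proposed above can be sketched as follows. This is a minimal illustration only, not the actual HBase regionserver API; the class and method names (`FlushSelector`, `storesToFlush`) are hypothetical, and memcache sizes are modeled as a plain map of store name to byte count.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: choose which Stores to flush rather than
// flushing every Store in the Region unconditionally.
final class FlushSelector {
    // Flush any Store whose memcache holds more than half the
    // per-memcache limit; if none qualifies, flush only the single
    // biggest Store so the flush still frees meaningful memory.
    static List<String> storesToFlush(Map<String, Long> memcacheSizes,
                                      long maxMemcacheSize) {
        List<String> selected = new ArrayList<>();
        for (Map.Entry<String, Long> e : memcacheSizes.entrySet()) {
            if (e.getValue() > maxMemcacheSize / 2) {
                selected.add(e.getKey());
            }
        }
        if (selected.isEmpty() && !memcacheSizes.isEmpty()) {
            selected.add(memcacheSizes.entrySet().stream()
                .max((a, b) -> Long.compare(a.getValue(), b.getValue()))
                .get().getKey());
        }
        return selected;
    }
}
```

With a 128MB limit, a Store holding a few bytes would be skipped, while a 70MB Store would be flushed; a Region where every Store is small would still flush its largest Store.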
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.