[ https://issues.apache.org/jira/browse/HBASE-18294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16073265#comment-16073265 ]

Eshcar Hillel commented on HBASE-18294:
---------------------------------------

bq. We should change the 128 MB flush size default value then?
This is an orthogonal discussion. I don't have a strong opinion on whether it 
should be higher or lower, and the user can control this parameter anyway. The 
discussion here is whether this parameter limits the data size or the heap 
size.
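To make the distinction concrete, here is a minimal Java sketch; the class, the fields, and the accessors are hypothetical stand-ins for illustration, not the actual HBase MemStore API:
{code:java}
// Minimal sketch of the two accounting choices for the flush threshold.
// The fields and methods are hypothetical stand-ins, not the HBase API.
public class FlushCheck {
  // hbase.hregion.memstore.flush.size default (128 MB)
  static final long FLUSH_SIZE = 128L * 1024 * 1024;

  long dataSize; // key-value bytes only
  long heapSize; // dataSize plus per-cell index and metadata overhead

  // What the parameter limits today: the data size.
  boolean shouldFlushByDataSize() { return dataSize >= FLUSH_SIZE; }

  // What this issue argues it should limit: the heap footprint.
  boolean shouldFlushByHeapSize() { return heapSize >= FLUSH_SIZE; }
}
{code}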
bq. When to be flushed, flushing the one with max data size in it makes sense, no?
For me, flushing the one with the max heap space makes more sense.
bq. Anyway, with the compacting memstore, we are reducing the heap overhead in 
between by in-memory flushes.
For the compacting memstore the per-cell metadata overhead is about 60B, which 
is 50% of the heap size for 60B cells and roughly 10% for 600B cells. So there 
are cases where the metadata is not negligible.
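The arithmetic behind these percentages, as a small sketch; the ~60B per-cell overhead is the figure quoted above, used here for illustration:
{code:java}
// Fraction of the heap footprint taken by per-cell metadata rather than data.
// PER_CELL_OVERHEAD is the ~60B figure quoted above (an approximation).
public class OverheadRatio {
  static final long PER_CELL_OVERHEAD = 60; // bytes of metadata per cell

  static double metadataFraction(long cellDataSize) {
    return (double) PER_CELL_OVERHEAD / (cellDataSize + PER_CELL_OVERHEAD);
  }

  public static void main(String[] args) {
    System.out.printf("60B cells:  %.0f%% of heap is metadata%n",
        100 * metadataFraction(60));  // 50%
    System.out.printf("600B cells: %.0f%% of heap is metadata%n",
        100 * metadataFraction(600)); // ~9%
  }
}
{code}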

> Flush is based on data size instead of heap size
> ------------------------------------------------
>
>                 Key: HBASE-18294
>                 URL: https://issues.apache.org/jira/browse/HBASE-18294
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Eshcar Hillel
>            Assignee: Eshcar Hillel
>
> A region is flushed if its memory component exceeds a threshold (default size 
> is 128MB).
> A flush policy decides whether to flush a store by comparing the size of the 
> store to another threshold (which can be configured with 
> hbase.hregion.percolumnfamilyflush.size.lower.bound).
> Currently the implementation (in both cases) compares the data size 
> (key-value only) to the threshold, whereas it should compare the heap size 
> (which includes index size and metadata).
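
The same distinction applies to the per-store check in the flush policy; a hedged sketch, with the lower-bound value and the size fields assumed for illustration, not taken from the HBase code:
{code:java}
// Sketch of the per-column-family flush decision described above.
// LOWER_BOUND stands in for hbase.hregion.percolumnfamilyflush.size.lower.bound;
// the value and the fields are assumptions for illustration, not HBase code.
public class StoreFlushCheck {
  static final long LOWER_BOUND = 16L * 1024 * 1024; // assumed example value

  long storeDataSize; // key-value bytes only (what is compared today)
  long storeHeapSize; // data plus index/metadata (what should be compared)

  boolean shouldFlushStore() {
    // Current behavior compares storeDataSize; the fix compares the heap size.
    return storeHeapSize >= LOWER_BOUND;
  }
}
{code}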


