[
https://issues.apache.org/jira/browse/HBASE-20390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16492036#comment-16492036
]
Eshcar Hillel commented on HBASE-20390:
---------------------------------------
Pushing the patch to branches resulted in an out-of-memory exception in one of
the tests, which was not triggered in QA.
Debugging the test, I found that this is a classic example of the opposite
problem: memory underutilisation.
To make a long story short: the test heap is 1GB, of which 400MB is allocated
to the memstore (the defaults); 10 column families share this, so each gets
40MB. With an in-memory flush factor of 0.014, a store flushes a segment every
~500KB (25% utilisation of the 2MB mslab chunk). Eventually this causes the
OOME.
HBASE-20542 aims to protect against this kind of underutilisation as well.
Simply setting the factor to 0.02 in this test triggers an in-memory flush
every ~750KB, which apparently is sufficient to avoid the OOME.
Flushes to disk are also frequent in this test, so there is no point in setting
a higher threshold: it would cause flushes to disk to happen before any
in-memory flush.
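The threshold arithmetic above can be sketched as a quick back-of-the-envelope
calculation. This is only an illustration: the class and method names below are
made up for the sketch (they are not HBase APIs), and the 2MB mslab chunk size
is the default value assumed in the discussion.

```java
// Illustrative sketch of the in-memory flush threshold arithmetic.
// Not HBase code; names are hypothetical.
public class FlushMath {

    // Per-column-family memstore share: heap * global memstore fraction,
    // divided evenly across the families.
    static long perFamilyBytes(long heapBytes, double memstoreFraction, int families) {
        return (long) (heapBytes * memstoreFraction) / families;
    }

    // Size at which a segment is flushed in-memory for a given factor.
    static long flushSegmentBytes(long perFamilyBytes, double factor) {
        return (long) (perFamilyBytes * factor);
    }

    public static void main(String[] args) {
        long heap = 1L << 30;                                  // 1GB test heap
        long perFamily = perFamilyBytes(heap, 0.4, 10);        // ~40MB per family
        long chunk = 2L << 20;                                 // 2MB mslab chunk (default)

        long segmentSmall = flushSegmentBytes(perFamily, 0.014); // roughly 500-600KB
        long segmentLarge = flushSegmentBytes(perFamily, 0.02);  // roughly 750-850KB

        System.out.printf("factor 0.014: segment ~%d KB (%.0f%% of chunk)%n",
                segmentSmall / 1024, 100.0 * segmentSmall / chunk);
        System.out.printf("factor 0.02:  segment ~%d KB (%.0f%% of chunk)%n",
                segmentLarge / 1024, 100.0 * segmentLarge / chunk);
    }
}
```

The point of the sketch is that at factor 0.014 each flushed segment fills only
about a quarter of its 2MB chunk, so most of each chunk's memory is wasted, and
the wasted space accumulates until the heap is exhausted.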
I will push a one-line addendum to fix the test.
> IMC Default Parameters for 2.0.0
> --------------------------------
>
> Key: HBASE-20390
> URL: https://issues.apache.org/jira/browse/HBASE-20390
> Project: HBase
> Issue Type: Sub-task
> Reporter: Eshcar Hillel
> Assignee: Eshcar Hillel
> Priority: Major
> Attachments: HBASE-20390-branch-2.0-01.patch,
> HBASE-20390-branch-2.0-01.patch, HBASE-20390.branch-2.0.002.patch,
> HBASE-20390.branch-2.0.003.patch, HBase 2.0 performance evaluation -
> throughput SSD_HDD.pdf, hits.ihc.png
>
>
> Setting new default parameters for in-memory compaction based on performance
> tests done in HBASE-20188
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)