[
https://issues.apache.org/jira/browse/HBASE-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15313560#comment-15313560
]
Enis Soztutar commented on HBASE-15950:
---------------------------------------
Yes, here is what I am doing:
{code}
memstore KV size = 915 M (this is the KV objects only, excluding the backing arrays)
{code}
hacked code:
{code}
// Accumulate into the store as before.
size += store.add(cell);
// Estimated total heap footprint of the cell (object header, fields, backing array).
long estTotalKVSize = CellUtil.estimatedHeapSizeOf(cell);
// Size of the serialized key/value bytes alone, rounded to 8-byte alignment.
long estKVBytesSize = ClassSize.align(((KeyValue) cell).getLength());
// Track the KV object overhead and the backing bytes separately,
// plus the per-entry overhead of the ConcurrentSkipListMap.
memstoreKVSize.addAndGet(estTotalKVSize - estKVBytesSize);
memstoreKVBytesSize.addAndGet(estKVBytesSize);
memstoreCSLMSize.addAndGet(ClassSize.align(ClassSize.CONCURRENT_SKIPLISTMAP_ENTRY));
{code}
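For reference, ClassSize.align rounds a byte count up to the JVM's default 8-byte object alignment. A minimal standalone sketch of that rounding (the class name AlignSketch is made up for illustration; HBase's own implementation lives in org.apache.hadoop.hbase.util.ClassSize):

```java
public class AlignSketch {
    // Round size up to the next multiple of 8, mirroring the
    // 8-byte object alignment that ClassSize.align accounts for.
    public static long align(long size) {
        return (size + 7L) & ~7L;
    }

    public static void main(String[] args) {
        System.out.println(align(13)); // -> 16
        System.out.println(align(16)); // -> 16
    }
}
```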
> We are grossly overestimating the memstore size
> -----------------------------------------------
>
> Key: HBASE-15950
> URL: https://issues.apache.org/jira/browse/HBASE-15950
> Project: HBase
> Issue Type: Bug
> Reporter: Enis Soztutar
> Fix For: 2.0.0
>
> Attachments: Screen Shot 2016-06-02 at 8.48.27 PM.png
>
>
> While testing something else, I was loading a region with a lot of data.
> Writing 30M cells in 1M rows, with 1 byte values.
> The memstore size turned out to be estimated as 4.5GB, while with the JFR
> profiling I can see that we are using 2.8GB for all the objects in the
> memstore (KV + KV byte[] + CSLM.Node + CSLM.Index).
> This obviously means that there is room in the write cache that we are not
> effectively using.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)