[ 
https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420842#comment-16420842
 ] 

Anastasia Braginsky commented on HBASE-20188:
---------------------------------------------

[~stack], thanks again! We are still in the process of setting up our testing 
environment.

31GB is a lot of heap. Do you start with a single region and let it split 
later? If not too many regions are used, there is a lot of memory for the GC 
to use "freely". No GC problems means less advantage for CompactingMemStore, 
because this structure also decreases GC pressure compared to CSLM. In any 
case, CompactingMemStore shouldn't cause performance degradation, so it will 
be interesting to see your results; looking forward to them.
{quote}While CMS is default GC, we can't turn off MSLAB (See HBASE-3455 "Add 
memstore-local allocation buffers to combat heap fragmentation in the region 
server."). When G1GC, its possible we could do w/o MSLAB but would need to do 
the long-running tests described in HBASE-3455. MSLAB on/off is a little 
orthogonal. I did it just because [~eshcar] suggested she did it in her test 
runs.
{quote}
I understand your point. If you are sure CMS is the default GC for HBase 2.0, 
we should change our settings accordingly. However, it is quite important 
which GC is going to be used in HBase 2.0. We did our performance evaluation 
with G1GC. If CMS is the default GC for HBase 2.0, I would definitely suggest 
trying CellArrayMap (CAM) with MSLAB as the default, and not CellChunkMap 
(CCM) as it is now.

It is also important to understand whether MSLAB is going to be on or off by 
default. IMHO MSLAB is not orthogonal to CompactingMemStore, at least because 
CompactingMemStore (also) aims to decrease GC interference, and it matters 
*which* GC is on. And I consider MSLAB to be "part of GC" as well, since it is 
about memory management at the end of the day.
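For concreteness, the knobs under discussion map to the following 
configuration (a minimal sketch; the property names are the HBase 2.0 ones, 
and the values shown are illustrative, not a recommendation):

{code:xml}
<!-- hbase-site.xml: illustrative values only -->
<property>
  <!-- MSLAB on/off (defaults to on) -->
  <name>hbase.hregion.memstore.mslab.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- In-memory compaction policy for CompactingMemStore:
       NONE, BASIC, or EAGER -->
  <name>hbase.hregion.compacting.memstore.type</name>
  <value>BASIC</value>
</property>
{code}

while the GC choice itself lives in hbase-env.sh, e.g. 
{{export HBASE_OPTS="$HBASE_OPTS -XX:+UseG1GC"}} for G1GC versus 
{{-XX:+UseConcMarkSweepGC}} for CMS.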

> [TESTING] Performance
> ---------------------
>
>                 Key: HBASE-20188
>                 URL: https://issues.apache.org/jira/browse/HBASE-20188
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Performance
>            Reporter: stack
>            Priority: Blocker
>             Fix For: 2.0.0
>
>         Attachments: ITBLL2.5B_1.2.7vs2.0.0_cpu.png, 
> ITBLL2.5B_1.2.7vs2.0.0_gctime.png, ITBLL2.5B_1.2.7vs2.0.0_iops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_load.png, ITBLL2.5B_1.2.7vs2.0.0_memheap.png, 
> ITBLL2.5B_1.2.7vs2.0.0_memstore.png, ITBLL2.5B_1.2.7vs2.0.0_ops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, 
> YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png, 
> YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, 
> flamegraph-1072.1.svg, flamegraph-1072.2.svg, tree.txt
>
>
> How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor 
> that it is much slower, that the problem is the asyncwal writing. Does 
> in-memory compaction slow us down or speed us up? What happens when you 
> enable offheaping?
> Keep notes here in this umbrella issue. Need to be able to say something 
> about perf when 2.0.0 ships.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)