[ https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16441945#comment-16441945 ]

stack edited comment on HBASE-20188 at 4/18/18 6:37 AM:
--------------------------------------------------------

Report exploring the [~anoop.hbase] postulate above that 'flushes are taking 
longer' by comparing flush histories of 1.2.7 vs 2.0.0 with client batching 
enabled (2MB): 
https://docs.google.com/spreadsheets/d/1sihTxb4aCplR3Rr_GGXkPlwhMIm-CbB9j_5339AS0Zc/edit#gid=1016758826

Findings are that 1.2.7 flushes twice as often as 2.0.0 during the YCSB load 
phase (400 flushes vs. 200), carrying less in memory -- riding at around 128M 
with occasional spikes -- while 2.0.0 carries ~256M on average. The totals come 
out roughly even (400 x ~128M ~= 200 x ~256M ~= 50G), so both end up flushing 
about the same volume of data.

I suppose this is what we'd expect when IMC is enabled.

2.0.0 does ride well above the 128MB flush threshold. If the CSLM is 
correspondingly large, that could be part of the explanation for why writes are 
slower in this mode in 2.0.0 -- almost 25% slower -- with reads about 3 or 4% 
slower when data is batched in by the client, as here.

My YCSB graphs, as it happens, are too coarse; they miss most flushes, so they 
don't reveal what is really going on here. The attached report was made from 
parsing the regionserver log instead. This load phase was done with client-side 
buffering enabled, so batches were arriving in 2MB lots rather than the default 
~1.7k that all previous YCSB runs in this JIRA were using.
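For anyone wanting to reproduce the flush-history comparison, a minimal sketch 
of the log-parsing step is below. The log line format is an assumption modeled 
on the HBase "Finished memstore flush" messages -- adjust the regex to whatever 
your regionserver logs actually emit:

```python
import re

# Assumed shape of an HBase regionserver flush-completion line; the exact
# wording varies between versions, so treat this regex as a starting point.
FLUSH_RE = re.compile(r"Finished (?:memstore )?flush of ~?([\d.]+) ([KMG]B)")

UNIT = {"KB": 1 << 10, "MB": 1 << 20, "GB": 1 << 30}

def flush_sizes(lines):
    """Yield the size in bytes of each completed flush found in `lines`."""
    for line in lines:
        m = FLUSH_RE.search(line)
        if m:
            yield float(m.group(1)) * UNIT[m.group(2)]

# Two made-up sample lines standing in for a real regionserver log.
sample = [
    "2018-04-17 ... INFO regionserver.HRegion: "
    "Finished memstore flush of ~128.00 MB/134217728 for region ...",
    "2018-04-17 ... INFO regionserver.HRegion: "
    "Finished memstore flush of ~256.00 MB/268435456 for region ...",
]

sizes = list(flush_sizes(sample))
print(len(sizes), sum(sizes) / len(sizes) / (1 << 20))  # flush count, mean MB
```

Counting flushes and averaging the sizes per version is enough to reproduce 
the "twice as often at half the size" comparison in the spreadsheet.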


was (Author: stack):
Report comparing flush histories of 1.2.7 vs 2.0.0 with client batching enabled 
(2MB)

> [TESTING] Performance
> ---------------------
>
>                 Key: HBASE-20188
>                 URL: https://issues.apache.org/jira/browse/HBASE-20188
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Performance
>            Reporter: stack
>            Assignee: stack
>            Priority: Blocker
>             Fix For: 2.0.0
>
>         Attachments: CAM-CONFIG-V01.patch, HBASE-20188-xac.sh, 
> HBASE-20188.sh, HBase 2.0 performance evaluation - 8GB(1).pdf, HBase 2.0 
> performance evaluation - 8GB.pdf, HBase 2.0 performance evaluation - Basic vs 
> None_ system settings.pdf, ITBLL2.5B_1.2.7vs2.0.0_cpu.png, 
> ITBLL2.5B_1.2.7vs2.0.0_gctime.png, ITBLL2.5B_1.2.7vs2.0.0_iops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_load.png, ITBLL2.5B_1.2.7vs2.0.0_memheap.png, 
> ITBLL2.5B_1.2.7vs2.0.0_memstore.png, ITBLL2.5B_1.2.7vs2.0.0_ops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, 
> YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png, 
> YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, 
> flamegraph-1072.1.svg, flamegraph-1072.2.svg, hbase-env.sh, hbase-site.xml, 
> hbase-site.xml, hits.png, lock.127.workloadc.20180402T200918Z.svg, 
> lock.2.memsize2.c.20180403T160257Z.svg, perregion.png, run_ycsb.sh, 
> total.png, tree.txt, workloadx, workloadx
>
>
> How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor 
> that it is much slower, that the problem is the asyncwal writing. Does 
> in-memory compaction slow us down or speed us up? What happens when you 
> enable offheaping?
> Keep notes here in this umbrella issue. Need to be able to say something 
> about perf when 2.0.0 ships.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
