[ https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16422735#comment-16422735 ]

stack commented on HBASE-20188:
-------------------------------

bq. Any news with the directions you suggested Stack?

None really. Tried with data cached, thinking it was i/o that was responsible 
for the difference, but while that brings us closer, we are still down from 
hbase1.2.7 (52221.77 ops/second vs 33839.57). [~ram_krish] observes that we go 
faster when we use the hbase20 client for some reason (45049.10) but not fast 
enough, and I'm thinking that whatever this client-side difference is, it'd 
probably make 1.2.7 go faster too (can't run the hbase2 client against hbase1). 
Ram is looking into what the difference is on the client side.
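(For scale, 33839.57 ops/second is roughly 65% of the 52221.77 we get from 
1.2.7, i.e. about a 35% drop; the 45049.10 with the hbase20 client is still 
about 14% down.)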

Trying to compare locking, cpu, and allocation traces, the profiles differ too 
much between versions to be able to finger a 'culprit'. The Semaphore in the 
RpcScheduler gets 'blamed' in cpu and locking profiles, but with it in place 
our throughput goes up, and thinking on it, it probably makes sense that 
threads coordinate around this point (if you or anyone has a better idea on how 
to do the handoff, I'm all ears). There are a few dumb things that I can fix 
but they won't gain us much. There is some macro change that I'm unable to 
discern at the mo.
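
To be clear about the kind of handoff I mean, here is a rough sketch (names 
are illustrative, not the actual RpcScheduler code): the reader side takes a 
permit before queueing a call and the handler gives it back when done. The 
acquire/release is the coordination point the profilers flag.

{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;

// Sketch of a semaphore-gated handoff between a reader thread and handlers.
class HandoffSketch {
  private final BlockingQueue<Runnable> callQueue = new LinkedBlockingQueue<>();
  // Bounds how many calls are in flight; shows up hot in cpu/lock profiles but
  // keeps handlers and readers from stampeding the queue.
  private final Semaphore permits = new Semaphore(128);

  // Reader side: enqueue a decoded call, blocking if too many are outstanding.
  void dispatch(Runnable call) throws InterruptedException {
    permits.acquire();
    callQueue.put(call);
  }

  // Handler side: pull calls and release the permit when each is done.
  void handlerLoop() throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      Runnable call = callQueue.take();
      try {
        call.run();
      } finally {
        permits.release();
      }
    }
  }
}
{code}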

bq. ASYNC_WAL does not work. SYNC_WAL is default.

HBASE-16689
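
For anyone following along, the client-side ask looks like the below (table 
and column names are made up; per HBASE-16689 it is the server-side handling 
that is in question):

{code:java}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class AsyncWalPut {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("usertable"))) {
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      // Ask for async WAL sync on this write; whether it buys anything in
      // 2.0.0 is what HBASE-16689 is about.
      put.setDurability(Durability.ASYNC_WAL);
      table.put(put);
    }
  }
}
{code}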

bq. What would this (flip to g1gc) entail?

Needs an owner. Said person would do some long runs where they'd figure out 
some conservative defaults that would likely work in most cases and then they'd 
evangelize our move to G1GC (message to the list with "... CMS is 
deprecated... G1GC is the future..."). Then we'd flip. Would be cool if said 
person did stuff like run the long-range tests to see if MSLAB is still needed 
when running G1GC.
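
If it helps said person get going, something like the below in 
conf/hbase-env.sh would be a starting point; the numbers are guesses to run 
with, not the measured defaults the long runs would produce:

{code}
# Hypothetical starting point, not measured defaults.
export HBASE_OPTS="$HBASE_OPTS -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=100 \
  -XX:InitiatingHeapOccupancyPercent=65 \
  -XX:+ParallelRefProcEnabled"
{code}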

bq. It would be unfortunate to get such a big release of HBase without 
adjusting to the progress in jvm management.

Agree. 2.0.0 would be (have been) the right place to do it.

Thanks.

> [TESTING] Performance
> ---------------------
>
>                 Key: HBASE-20188
>                 URL: https://issues.apache.org/jira/browse/HBASE-20188
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Performance
>            Reporter: stack
>            Assignee: stack
>            Priority: Blocker
>             Fix For: 2.0.0
>
>         Attachments: CAM-CONFIG-V01.patch, ITBLL2.5B_1.2.7vs2.0.0_cpu.png, 
> ITBLL2.5B_1.2.7vs2.0.0_gctime.png, ITBLL2.5B_1.2.7vs2.0.0_iops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_load.png, ITBLL2.5B_1.2.7vs2.0.0_memheap.png, 
> ITBLL2.5B_1.2.7vs2.0.0_memstore.png, ITBLL2.5B_1.2.7vs2.0.0_ops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, 
> YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png, 
> YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, 
> flamegraph-1072.1.svg, flamegraph-1072.2.svg, tree.txt
>
>
> How does 2.0.0 compare to old versions? Is it faster, slower? There is rumor 
> that it is much slower, that the problem is the asyncwal writing. Does 
> in-memory compaction slow us down or speed us up? What happens when you 
> enable offheaping?
> Keep notes here in this umbrella issue. Need to be able to say something 
> about perf when 2.0.0 ships.


