[ 
https://issues.apache.org/jira/browse/HBASE-20188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16425818#comment-16425818
 ] 

stack edited comment on HBASE-20188 at 4/4/18 4:36 PM:
-------------------------------------------------------

[~anastas]
bq. If you are running again can you please also add to this run the "plus the 
default 2.0 RPC scheduler, FastPath"?

Did that. The latter runs use the defaults, which include the FastPath RPC 
scheduler. It improves throughput, though it looks bad when you look at the 
locking dumps.

[~eshcar]
bq. Are you still using 31GB heap in your runs? 31GB heap for 25GB of data is 
too much. With 8GB I think the gc effect is more pronounced

I can try that. I've been using 31G so most data stays out of cache; I was 
trying to eliminate i/o. I can roll it back now that I've figured out a problem 
with i/o reads.
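For anyone reproducing the smaller-heap run, the heap is set in the attached 
hbase-env.sh along these lines (a sketch only; the GC flag below is 
illustrative, not necessarily what these runs used):

```shell
# hbase-env.sh -- hypothetical sketch of the 8G-heap setup.
# HBASE_HEAPSIZE is the standard knob; the G1 flag is illustrative.
export HBASE_HEAPSIZE=8g
export HBASE_OPTS="$HBASE_OPTS -XX:+UseG1GC"
```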

For short-circuit reads, see HBASE-20337. Shout if not clear.
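For anyone following along, short-circuit reads are enabled in hbase-site.xml 
roughly like this (the socket path below is illustrative; see HBASE-20337 for 
the settings actually used in these runs):

```xml
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <!-- Path is illustrative; the datanode must be configured
       with the same domain socket path. -->
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
```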

And yes, I won't change my setup just yet. Trying to keep it constant for the 
moment.

Thanks





> [TESTING] Performance
> ---------------------
>
>                 Key: HBASE-20188
>                 URL: https://issues.apache.org/jira/browse/HBASE-20188
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Performance
>            Reporter: stack
>            Assignee: stack
>            Priority: Blocker
>             Fix For: 2.0.0
>
>         Attachments: CAM-CONFIG-V01.patch, HBASE-20188.sh, HBase 2.0 
> performance evaluation - Basic vs None_ system settings.pdf, 
> ITBLL2.5B_1.2.7vs2.0.0_cpu.png, ITBLL2.5B_1.2.7vs2.0.0_gctime.png, 
> ITBLL2.5B_1.2.7vs2.0.0_iops.png, ITBLL2.5B_1.2.7vs2.0.0_load.png, 
> ITBLL2.5B_1.2.7vs2.0.0_memheap.png, ITBLL2.5B_1.2.7vs2.0.0_memstore.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops.png, 
> ITBLL2.5B_1.2.7vs2.0.0_ops_NOT_summing_regions.png, YCSB_CPU.png, 
> YCSB_GC_TIME.png, YCSB_IN_MEMORY_COMPACTION=NONE.ops.png, YCSB_MEMSTORE.png, 
> YCSB_OPs.png, YCSB_in-memory-compaction=NONE.ops.png, YCSB_load.png, 
> flamegraph-1072.1.svg, flamegraph-1072.2.svg, hbase-env.sh, hbase-site.xml, 
> lock.127.workloadc.20180402T200918Z.svg, 
> lock.2.memsize2.c.20180403T160257Z.svg, run_ycsb.sh, tree.txt
>
>
> How does 2.0.0 compare to old versions? Is it faster or slower? There is a 
> rumor that it is much slower, and that the problem is the asyncwal writing. 
> Does in-memory compaction slow us down or speed us up? What happens when you 
> enable offheaping?
> Keep notes here in this umbrella issue. We need to be able to say something 
> about perf when 2.0.0 ships.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
