[ https://issues.apache.org/jira/browse/HBASE-21657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736640#comment-16736640 ]

Zheng Hu commented on HBASE-21657:
----------------------------------

{quote}Did you dump the ycsb output into mysql? Nice.
{quote}
I wrote a Python script to parse the log and load it into MySQL; you can take a 
look at the script [1] if it helps.
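(The real script is Python, see [1]; purely as an illustration of what that step does, here is a minimal sketch of the same idea in Java, with a made-up table schema, database name, and credentials.)
{code:java}
// Illustrative only: the real loader is a Python script (see [1]).
// The ycsb_metrics table and the connection settings are hypothetical.
import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class YcsbLogLoader {
  public static void main(String[] args) throws Exception {
    String logFile = args[0]; // e.g. run.log
    String testTag = args[1]; // e.g. HBase2.0.4-ssd-10000000-rows
    try (Connection conn = DriverManager.getConnection(
             "jdbc:mysql://localhost:3306/ycsb", "user", "password");
         BufferedReader in = new BufferedReader(new FileReader(logFile))) {
      PreparedStatement ps = conn.prepareStatement(
          "INSERT INTO ycsb_metrics (test_tag, section, metric, value) "
              + "VALUES (?, ?, ?, ?)");
      String line;
      while ((line = in.readLine()) != null) {
        // YCSB summary lines look like: "[SCAN], AverageLatency(us), 1234.56"
        if (!line.startsWith("[")) {
          continue;
        }
        String[] parts = line.split(",\\s*");
        if (parts.length != 3) {
          continue;
        }
        double value;
        try {
          value = Double.parseDouble(parts[2]);
        } catch (NumberFormatException e) {
          continue; // skip non-numeric entries
        }
        ps.setString(1, testTag);
        ps.setString(2, parts[0].replaceAll("[\\[\\]]", "")); // e.g. SCAN
        ps.setString(3, parts[1]); // e.g. AverageLatency(us)
        ps.setDouble(4, value);
        ps.executeUpdate();
      }
    }
  }
}
{code}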
{quote}How do we get away with adding getSerializedSize to Cell w/o using a 
default impl?
{quote}
Actually, almost all of the cells are ExtendedCell, except for a few utility 
Cell classes that only implement the Cell interface (so I added the 
implementation for those).
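To make that concrete, here is a minimal sketch of the shape of the interfaces, assuming simplified versions of Cell and ExtendedCell (the names follow HBase, but the bodies are illustrative, not the actual patch):
{code:java}
// Illustrative only; not the actual HBase source.
interface Cell {
  // ...existing accessors: getRowArray(), getRowOffset(), getRowLength(), etc.

  // The newly added method. No default implementation is needed because
  // nearly every concrete cell in the codebase is already an ExtendedCell.
  int getSerializedSize();
}

interface ExtendedCell extends Cell {
  // ExtendedCell implementations (KeyValue, ByteBufferKeyValue, ...)
  // already know how to report their own serialized size.
  int getSerializedSize(boolean withTags);

  @Override
  default int getSerializedSize() {
    return getSerializedSize(true);
  }
}

// The few utility classes that implement Cell directly get an explicit
// getSerializedSize() implementation, computed from their own fields.
{code}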
{quote}For sure it's just doing this?
{quote}
The stack in [2] looks strange; it seems some paths parse the 
rowOffset/rowLength and famOffset/famLength to calculate the serialized size. 
I will dig into this deeper. On the whole, removing the instanceof check and 
class cast seems to save a lot.
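For illustration, the hot path before and after looks roughly like this (a simplified sketch of the idea, not the literal patch; estimateFromComponents is a hypothetical stand-in for the slow offset/length-summing fallback mentioned above):
{code:java}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.ExtendedCell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public final class SerializedSizeSketch {

  // Roughly the old shape: a per-cell instanceof check plus a cast, with a
  // fallback that re-derives the size from the cell's component lengths.
  static int estimatedSerializedSizeOfOld(Cell cell) {
    if (cell instanceof ExtendedCell) {
      return ((ExtendedCell) cell).getSerializedSize(true) + Bytes.SIZEOF_INT;
    }
    return estimateFromComponents(cell) + Bytes.SIZEOF_INT;
  }

  // Hypothetical stand-in for the slow path: sum the component lengths
  // plus the fixed KeyValue infrastructure overhead.
  static int estimateFromComponents(Cell cell) {
    return KeyValue.KEYVALUE_INFRASTRUCTURE_SIZE
        + cell.getRowLength() + cell.getFamilyLength()
        + cell.getQualifierLength() + cell.getValueLength()
        + cell.getTagsLength();
  }

  // Roughly the new shape: once Cell itself exposes getSerializedSize(),
  // the per-cell type check, cast, and fallback all disappear.
  static int estimatedSerializedSizeOfNew(Cell cell) {
    return cell.getSerializedSize() + Bytes.SIZEOF_INT;
  }
}
{code}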
{quote}You have all data cached when you run your test? 99% hit rate or 
something?
{quote}
Yeah, almost a 100% cache hit rate. The test data size is small, only 1.5GB, 
but my cluster has 5 nodes with 50GB on-heap + 50GB off-heap.
{quote}What's the ycsb command you run, if you don't mind?
{quote}
Of course. My workload is:
{code}
table=ycsb-test
columnfamily=C
recordcount=10000000
operationcount=10000000
workload=com.yahoo.ycsb.workloads.CoreWorkload
fieldlength=100
fieldcount=1

clientbuffering=true
  
readallfields=true
writeallfields=true
  
readproportion=0
updateproportion=0
scanproportion=1.0
insertproportion=0
  
requestdistribution=zipfian
{code}
I used this command to load the data first, then flushed and major-compacted 
the table:
{code}
nohup ./bin/ycsb load hbase10 -P workload -s -threads 120 > load.log  2>&1 &
# flush 'ycsb-test'
# major_compact 'ycsb-test'
{code}
Then I used this command to run the workload:
{code}
nohup ./bin/ycsb run hbase10 -P workload -s -threads 120 > run.log  2>&1 &
{code}
Finally, I loaded run.log into a MySQL table:
{code}
./ycsb-data.py run.log HBase2.0.4-ssd-10000000-rows
{code}

{quote}How you get the flamegraphs? They are jvm only so via honest-profiler or 
async-profiler? What's the setup?
{quote}
I used the 
[lightweight-java-profiler|https://github.com/dcapwell/lightweight-java-profiler]; 
you can find the setup described in [Brendan Gregg's Java flame graphs 
post|http://www.brendangregg.com/blog/2014-06-12/java-flame-graphs.html].

1. https://gist.github.com/openinx/c1f19aa3ee93c045317a3ae59bc4a148
2. https://issues.apache.org/jira/browse/HBASE-21657?focusedCommentId=16735710&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16735710

> PrivateCellUtil#estimatedSerializedSizeOf has been the bottleneck in 100% 
> scan case.
> ------------------------------------------------------------------------------------
>
>                 Key: HBASE-21657
>                 URL: https://issues.apache.org/jira/browse/HBASE-21657
>             Project: HBase
>          Issue Type: Bug
>          Components: Performance
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>             Fix For: 3.0.0, 2.2.0, 2.1.3, 2.0.5
>
>         Attachments: HBASE-21657.v1.patch, HBASE-21657.v2.patch, 
> HBASE-21657.v3.patch, HBASE-21657.v3.patch, 
> HBase1.4.9-ssd-10000000-rows-flamegraph.svg, 
> HBase1.4.9-ssd-10000000-rows-qps-latency.png, 
> HBase2.0.4-patch-v2-ssd-10000000-rows-qps-and-latency.png, 
> HBase2.0.4-patch-v2-ssd-10000000-rows.svg, 
> HBase2.0.4-patch-v3-ssd-10000000-rows-flamegraph.svg, 
> HBase2.0.4-patch-v3-ssd-10000000-rows-qps-and-latency.png, 
> HBase2.0.4-ssd-10000000-rows-flamegraph.svg, 
> HBase2.0.4-ssd-10000000-rows-qps-latency.png, HBase2.0.4-with-patch.v2.png, 
> HBase2.0.4-without-patch-v2.png, hbase2.0.4-ssd-scan-traces.2.svg, 
> hbase2.0.4-ssd-scan-traces.svg, hbase20-ssd-100-scan-traces.svg, 
> image-2019-01-07-19-03-37-930.png, image-2019-01-07-19-03-55-577.png, 
> overview-statstics-1.png, run.log
>
>
> We are evaluating the performance of branch-2, and found that the scan 
> throughput of the SSD cluster is almost the same as the HDD cluster. So I 
> made a FlameGraph on the RS, and found that 
> PrivateCellUtil#estimatedSerializedSizeOf costs about 29% of the CPU. 
> Obviously, it has become the bottleneck in the 100% scan case.
> See the [^hbase20-ssd-100-scan-traces.svg]
> BTW, in our XiaoMi branch, we introduced 
> HRegion#updateReadRequestsByCapacityUnitPerSecond to sum up the size of cells 
> (for metrics monitoring), so it seems the performance loss was amplified.


