[ https://issues.apache.org/jira/browse/HBASE-21657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16735455#comment-16735455 ]

Zheng Hu commented on HBASE-21657:
----------------------------------

Over the past few days, I ran some tests for the cases above:
||HBaseVersion||Storage||QPS & Latency||FlameGraph||Comment||
|HBase2.0.4|SSD|[^HBase2.0.4-ssd-10000000-rows-qps-latency.png]|[^HBase2.0.4-ssd-10000000-rows-flamegraph.svg]|regionCount=100, rows=10^7, dataSizeOfTable=1.5GB, cacheHitRatio=100%|
|HBase2.0.4 + patch.v2|SSD|[^HBase2.0.4-patch-v2-ssd-10000000-rows-qps-and-latency.png]|[^HBase2.0.4-patch-v2-ssd-10000000-rows.svg]|regionCount=100, rows=10^7, dataSizeOfTable=1.5GB, cacheHitRatio=100%|
|HBase1.4.9|SSD|[^HBase1.4.9-ssd-10000000-rows-qps-latency.png]|[^HBase1.4.9-ssd-10000000-rows-flamegraph.svg]|regionCount=100, rows=10^7, dataSizeOfTable=1.5GB, cacheHitRatio=100%|

Besides, I put together some overview statistics:

!overview-statstics-1.png!

We can see that *the performance of HBase1.4.9 is almost the same as HBase2.0.4 with patch.v2.* So I think we are heading in the right direction for optimizing HBase2.0 scan performance.

Now the question is: how should we write the patch? IMO, we can move getSerializedSize() (the variant without the tags parameter) and heapSize() into the Cell interface, which eliminates the instanceof checks and class casts on the hot path. Pre-sizing the results ArrayList will also help a lot; it does not have to be 1000, we can use Math.min(rows, 512) to avoid spending too much memory on a scan with a huge row count.

I haven't looked into the inlining approach in detail yet; I will try to do that.

[~stack] FYI

> PrivateCellUtil#estimatedSerializedSizeOf has been the bottleneck in 100% 
> scan case.
> ------------------------------------------------------------------------------------
>
>                 Key: HBASE-21657
>                 URL: https://issues.apache.org/jira/browse/HBASE-21657
>             Project: HBase
>          Issue Type: Bug
>          Components: Performance
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>             Fix For: 3.0.0, 2.2.0, 2.1.3, 2.0.5
>
>         Attachments: HBASE-21657.v1.patch, HBASE-21657.v2.patch, 
> HBase1.4.9-ssd-10000000-rows-flamegraph.svg, 
> HBase1.4.9-ssd-10000000-rows-qps-latency.png, 
> HBase2.0.4-patch-v2-ssd-10000000-rows-qps-and-latency.png, 
> HBase2.0.4-patch-v2-ssd-10000000-rows.svg, 
> HBase2.0.4-ssd-10000000-rows-flamegraph.svg, 
> HBase2.0.4-ssd-10000000-rows-qps-latency.png, HBase2.0.4-with-patch.v2.png, 
> HBase2.0.4-without-patch-v2.png, hbase2.0.4-ssd-scan-traces.2.svg, 
> hbase2.0.4-ssd-scan-traces.svg, hbase20-ssd-100-scan-traces.svg, 
> overview-statstics-1.png
>
>
> We are evaluating the performance of branch-2 and find that scan throughput on an SSD cluster is almost the same as on an HDD cluster. So I made a FlameGraph on the RS and found that PrivateCellUtil#estimatedSerializedSizeOf costs about 29% of CPU; obviously, it has become the bottleneck in the 100% scan case.
> See the [^hbase20-ssd-100-scan-traces.svg].
> BTW, in our XiaoMi branch we introduced HRegion#updateReadRequestsByCapacityUnitPerSecond to sum up the size of cells (for metric monitoring), so it seems the performance loss was amplified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
