[ https://issues.apache.org/jira/browse/HBASE-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383366#comment-14383366 ]

stack commented on HBASE-13291:
-------------------------------

bq. Yeah. Was thinking we use the index I build there to avoid the repeated 
getKeyValueLen calls.

k.
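
For context, a rough sketch of the kind of per-block index being referred to above: walk the block once, record each cell's offset and lengths, and let later seeks/nexts position off that instead of re-reading the length fields each time. The class and method names below are made up for illustration (this is not the actual patch), and it assumes the plain KeyValue key-length/value-length prefix layout.

{code:java}
// Illustrative only; names are made up, not the actual patch. Assumes the
// plain KeyValue layout inside a block:
//   4-byte key length, 4-byte value length, key bytes, value bytes.
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

final class BlockCellIndex {
  static final class Entry {
    final int offset;    // where the cell's key-length field starts in the block
    final int keyLen;
    final int valueLen;
    Entry(int offset, int keyLen, int valueLen) {
      this.offset = offset;
      this.keyLen = keyLen;
      this.valueLen = valueLen;
    }
  }

  /**
   * Walk the block once, recording each cell's offset and lengths so later
   * seeks/nexts can position without re-reading the length fields.
   */
  static List<Entry> build(ByteBuffer block) {
    List<Entry> index = new ArrayList<>();
    ByteBuffer b = block.duplicate();
    while (b.remaining() >= 2 * Integer.BYTES) {
      int offset = b.position();
      int keyLen = b.getInt();
      int valueLen = b.getInt();
      if (keyLen + valueLen > b.remaining()) {
        break; // truncated block; stop rather than overrun
      }
      index.add(new Entry(offset, keyLen, valueLen));
      b.position(b.position() + keyLen + valueLen); // skip to the next cell
    }
    return index;
  }
}
{code}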

bq. I'm surprised that StoreScanner.next() is only in there with 6% and 
StoreScanner.peek() not at all. 

peek is way down... < 1%.  SS#next is, yeah, around 6%/8%.

bq.  Is there anything different about P/E?

Rows here are ten columns and on average 180k, so if I do my math right, we are 
going at about the same speed (I'm slower by some).  Maybe I should move to 
smaller cells, since changes would then be more noticeable?

The issue now is reading the mvcc off the end of the Cell.  It's where I am 
spending the time:

 18.70%  perf-16373.map      [.] Lorg/apache/hadoop/hbase/io/hfile/HFileReaderV2$ScannerV2;.readMvccVersion in Lorg/apache/hadoop/hbase/io/hfile/HFileReaderV3$ScannerV3;.readKeyValueLen
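
For reference, the mvcc/memstore timestamp hangs off the end of each cell as a variable-length long, so every readKeyValueLen ends up doing a byte-at-a-time vint decode per cell. Below is a minimal standalone sketch of that kind of decode, assuming the Hadoop WritableUtils-style vlong encoding; the class and method names are made up and this is not the actual ScannerV3 code.

{code:java}
// Standalone sketch of a WritableUtils-style vlong decode; not the actual
// HFileReaderV3 code, just the shape of the per-cell work readMvccVersion does.
import java.nio.ByteBuffer;

final class VLongSketch {
  /**
   * Decode one variable-length long: small values fit in the first byte,
   * larger values store a length marker followed by 1-8 big-endian bytes.
   */
  static long readVLong(ByteBuffer buf) {
    byte firstByte = buf.get();
    int len = decodeVIntSize(firstByte);
    if (len == 1) {
      return firstByte;
    }
    long value = 0;
    for (int i = 0; i < len - 1; i++) {
      value = (value << 8) | (buf.get() & 0xff);
    }
    return isNegativeVInt(firstByte) ? ~value : value;
  }

  static int decodeVIntSize(byte value) {
    if (value >= -112) {
      return 1;
    }
    return value < -120 ? -119 - value : -111 - value;
  }

  static boolean isNegativeVInt(byte value) {
    return value < -120 || (value >= -112 && value < 0);
  }
}
{code}

Branchy byte-at-a-time work like that, done once per cell at scan rates, is what shows up as the ~18% above.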

I've refactored it some. Will look more. And then on to SQM#match... and 
StoreScanner#optimize.



> Lift the scan ceiling
> ---------------------
>
>                 Key: HBASE-13291
>                 URL: https://issues.apache.org/jira/browse/HBASE-13291
>             Project: HBase
>          Issue Type: Improvement
>          Components: Scanners
>    Affects Versions: 1.0.0
>            Reporter: stack
>            Assignee: stack
>         Attachments: 13291.inlining.txt, Screen Shot 2015-03-26 at 12.12.13 
> PM.png, Screen Shot 2015-03-26 at 3.39.33 PM.png, hack_to_bypass_bb.txt, 
> nonBBposAndInineMvccVint.txt, q (1).png, traces.7.svg, traces.filterall.svg, 
> traces.nofilter.svg, traces.small2.svg, traces.smaller.svg
>
>
> Scanning medium-sized rows with multiple concurrent scanners exhibits 
> interesting 'ceiling' properties. A server runs at about 6.7k ops a second 
> using 450% of a possible 1600% of CPU when 4 clients, each with 10 threads, 
> are scanning 1000 rows at a time. If I add the '--filterAll' argument (do not 
> return results), then we run at 1450% of the possible 1600%, but we do 8k ops 
> a second.
> Let me attach flame graphs for the two cases. Unfortunately, there is some 
> frustrating dark art going on. Let me try to figure it out... Filing this 
> issue in the meantime to keep score in.


