[
https://issues.apache.org/jira/browse/HBASE-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack updated HBASE-13291:
--------------------------
Attachment: traces.smaller.svg
Thanks @apurtell for the input. Yeah, I need another view than this. My flight
recordings do not surface obvious contention. Interestingly, they do point at
HBB#_get as a hot method for some reason. I messed around w/ the code and it's
only an array lookup. That started me on a distraction looking at what is
inlined and what is not.
This is the parent method:
@ 186
org.apache.hadoop.hbase.io.hfile.HFileReaderV3$ScannerV3::readKeyValueLen (348
bytes) hot method too big
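Output of that shape typically comes from running with
-XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining; "hot method too big" means
the 348 bytes of bytecode exceed the hot-method inlining threshold
(-XX:FreqInlineSize, 325 by default on common platforms), so readKeyValueLen
does not get inlined into its caller. And the reason HBB#_get showing up hot
is surprising is that, in the JDK, it is little more than an array read. Here
is a minimal sketch of the java.nio.HeapByteBuffer read path (simplified from
the JDK 8 shape; the class name and the trimming are mine, not the HBase
code):

// Simplified sketch of java.nio.HeapByteBuffer: the package-private _get the
// profiler flags is just an index into the backing byte[]. Any cost has to be
// in what surrounds it (bounds checks, position bookkeeping, calls that fail
// to inline), not in the lookup itself.
class HeapByteBufferSketch {
  private final byte[] hb;   // backing array ("hb" in the real class)
  private int position;

  HeapByteBufferSketch(byte[] backing) {
    this.hb = backing;
  }

  // What HBB#_get amounts to: a bare array lookup.
  byte _get(int i) {
    return hb[i];
  }

  // Relative get(): position bookkeeping plus the same array lookup
  // (the real version also range-checks via nextGetIndex()).
  byte get() {
    return hb[position++];
  }
}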
I messed with it and got a new, also interesting flame graph (--filterAll);
compare it to traces.filterall.svg. Throughput went up a little. Let me look
some more here for a while. Will report back.
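For flavor, one shape such a tweak can take (purely illustrative -- not
necessarily what was tried here) is to pull the key/value length ints straight
out of the block's backing array instead of going through ByteBuffer.getInt(),
which bottoms out in per-byte HeapByteBuffer accesses:

import java.nio.ByteBuffer;

// Hypothetical illustration only: decode the two length ints directly from a
// heap buffer's backing array. It assumes hasArray() is true; array(),
// arrayOffset() and position() are standard ByteBuffer API, but the class and
// method names here are made up.
final class ReadLensSketch {
  static int[] readKeyValueLen(ByteBuffer blockBuffer) {
    byte[] b = blockBuffer.array();
    int off = blockBuffer.arrayOffset() + blockBuffer.position();
    int keyLen = toInt(b, off);
    int valueLen = toInt(b, off + 4);
    return new int[] { keyLen, valueLen };
  }

  // Big-endian int, the same byte order ByteBuffer.getInt() uses by default.
  private static int toInt(byte[] b, int off) {
    return ((b[off] & 0xff) << 24)
        | ((b[off + 1] & 0xff) << 16)
        | ((b[off + 2] & 0xff) << 8)
        | (b[off + 3] & 0xff);
  }
}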
Will then need to circle back and try to figure out how we are bound up...
(returning results to the client seems to be one bottleneck -- but let me see
if I can figure out why we burn near 3x the CPU (450% -> 1450%) for only
~15-20% more ops (6.7k -> 8k a second)... if I am reading it right)
> Lift the scan ceiling
> ---------------------
>
> Key: HBASE-13291
> URL: https://issues.apache.org/jira/browse/HBASE-13291
> Project: HBase
> Issue Type: Improvement
> Components: Scanners
> Affects Versions: 1.0.0
> Reporter: stack
> Assignee: stack
> Attachments: traces.filterall.svg, traces.nofilter.svg,
> traces.smaller.svg
>
>
> Scanning medium-sized rows with multiple concurrent scanners exhibits
> interesting 'ceiling' properties. A server runs at about 6.7k ops a second,
> using 450% of a possible 1600% of CPU, when 4 clients, each with 10 threads,
> run scans of 1000 rows. If I add the '--filterAll' argument (do not return
> results), then we run at 1450% of the possible 1600%, but we do 8k ops a
> second.
> Let me attach flame graphs for the two cases. Unfortunately, there is some
> frustrating dark art going on. Let me try to figure it out... Filing this
> issue in the meantime to keep score.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)