[
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack updated HBASE-13082:
--------------------------
Attachment: gc.png
hits.png
Ok, redid the testing. The way to read the graphs is that the first two humps
are us up against a scan "ceiling", where I had many clients trying to max out
the regionserver (pushing out about 1Gb/s and using about 5/6 of 16 cores). The
second two are a simple client with just two scan threads running.
The humps are nopatch/patched/patched/nopatch (it was easier to do it this
way).
With the patch there is perhaps slightly less GC and perhaps slightly more
throughput -- not as dramatic as in the first comparison.
+1 on commit to master. I think you should put it in 1.1 too.
> Coarsen StoreScanner locks to RegionScanner
> -------------------------------------------
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
> Issue Type: Bug
> Reporter: Lars Hofhansl
> Attachments: 13082-test.txt, 13082.txt, 13082.txt, gc.png, gc.png,
> gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left off.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make
> the cores wait for memory fetches).
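> As a rough sketch of the idea (hypothetical names, plain Java -- not the
> actual patch), coarsening moves the per-StoreScanner-call locking cost up to
> one lock acquisition per RegionScanner call:
> {code:java}
> import java.io.IOException;
> import java.util.List;
>
> class CoarseRegionScanner {
>   private final List<UnsynchronizedStoreScanner> storeScanners;
>
>   CoarseRegionScanner(List<UnsynchronizedStoreScanner> storeScanners) {
>     this.storeScanners = storeScanners;
>   }
>
>   // One monitor acquisition (and one pair of memory fences) per call,
>   // no matter how many StoreScanners sit underneath.
>   public synchronized boolean next(List<String> results) throws IOException {
>     for (UnsynchronizedStoreScanner s : storeScanners) {
>       s.next(results); // no locking inside; the caller's lock covers it
>     }
>     return !results.isEmpty();
>   }
> }
>
> class UnsynchronizedStoreScanner {
>   // State here is no longer volatile or locked per call; visibility is
>   // guaranteed by the RegionScanner monitor that every caller must hold.
>   private long cellsRead;
>
>   void next(List<String> results) throws IOException {
>     results.add("cell-" + ++cellsRead);
>   }
> }
> {code}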
> There are some drawbacks too:
> * All calls to RegionScanner need to remain synchronized.
> * Implementors of coprocessors need to be diligent about following the locking
> contract; see the sketch after this list. For example, Phoenix does not lock
> RegionScanner.nextRaw() as required in the documentation (not picking on
> Phoenix, this one is my fault as I told them it's OK).
> * Possible starvation of flushes and compactions under heavy read load:
> RegionScanner operations would keep getting the locks and the
> flushes/compactions would not be able to finalize the set of files.
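> A minimal sketch of the contract a coprocessor would then have to follow
> (illustrative only; that the lock is the RegionScanner's own monitor is an
> assumption here, not a statement about the final patch):
> {code:java}
> import java.io.IOException;
> import java.util.ArrayList;
> import java.util.List;
>
> import org.apache.hadoop.hbase.Cell;
> import org.apache.hadoop.hbase.regionserver.RegionScanner;
>
> class CoprocessorScanLoop {
>   // Drains a scanner the way a coprocessor might, holding the
>   // RegionScanner's lock across each nextRaw() call.
>   static void drain(RegionScanner scanner) throws IOException {
>     List<Cell> results = new ArrayList<Cell>();
>     boolean hasMore = true;
>     while (hasMore) {
>       synchronized (scanner) { // assumed contract: lock held across nextRaw()
>         hasMore = scanner.nextRaw(results);
>       }
>       // ... process results outside the lock ...
>       results.clear();
>     }
>     // Calling scanner.nextRaw(results) outside a synchronized block is the
>     // unlocked pattern the Phoenix point above describes.
>   }
> }
> {code}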
> I'll have a patch soon.