[
https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15015253#comment-15015253
]
ramkrishna.s.vasudevan commented on HBASE-13082:
------------------------------------------------
Did some JMH microbenchmark tests by executing a fixed set of code in a loop under
four different conditions:
1) with a ReentrantLock
2) with a synchronized block
3) with a volatile boolean check
4) with no locks/synchronization/volatiles
These are the JMH results I got:
{code}
Benchmark                                       Mode  Cnt     Score   Error  Units
LockVsSynchronized.operationUnderLock          thrpt    9    57.696 ± 0.322  ops/s
LockVsSynchronized.operationUnderSynchronized  thrpt    9    44.692 ± 0.333  ops/s
LockVsSynchronized.operationUnderVolatile      thrpt    9  1056.632 ± 5.584  ops/s
LockVsSynchronized.operationWithoutLock        thrpt    9  1428.580 ± 5.372  ops/s
{code}
So we can see that when our operations are mostly single threaded, and we only
occasionally need coordination (for example, when we want to reset the scanner
stack), going with a volatile check is significantly faster than going with a
read lock, though it is still not as fast as having no lock at all.
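The four guarded-access variants above can be sketched in plain Java. This is a minimal, illustrative sketch (not the original JMH harness); the class and method names mirror the benchmark output but are reconstructed, and it does no timing, only showing the shape of each variant:

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal single-threaded sketch of the four guarded-access variants compared
// in the JMH results above. Names are illustrative, not the original benchmark.
public class LockVsSynchronizedSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private volatile boolean resetRequested = false; // the volatile-flag variant
    private long counter = 0;

    // 1) ReentrantLock around the operation
    long underLock() {
        lock.lock();
        try { return ++counter; } finally { lock.unlock(); }
    }

    // 2) synchronized block around the operation
    synchronized long underSynchronized() { return ++counter; }

    // 3) cheap volatile read on the fast path; would only lock if the flag is set
    long underVolatileCheck() {
        if (resetRequested) {
            // slow path: take the lock and reset state (omitted in this sketch)
        }
        return ++counter;
    }

    // 4) no locks, no fences
    long withoutLock() { return ++counter; }

    long count() { return counter; }

    public static void main(String[] args) {
        LockVsSynchronizedSketch s = new LockVsSynchronizedSketch();
        for (int i = 0; i < 1000; i++) {
            s.underLock();
            s.underSynchronized();
            s.underVolatileCheck();
            s.withoutLock();
        }
        System.out.println(s.count()); // 4000
    }
}
```

In the uncontended single-threaded case, variants 1 and 2 still pay for the lock acquisition and its memory fences on every call, while variant 3 pays only for one volatile read, which is why it lands much closer to the unguarded variant in the numbers above.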
> Coarsen StoreScanner locks to RegionScanner
> -------------------------------------------
>
> Key: HBASE-13082
> URL: https://issues.apache.org/jira/browse/HBASE-13082
> Project: HBase
> Issue Type: Bug
> Reporter: Lars Hofhansl
> Assignee: ramkrishna.s.vasudevan
> Attachments: 13082-test.txt, 13082-v2.txt, 13082-v3.txt,
> 13082-v4.txt, 13082.txt, 13082.txt, HBASE-13082.pdf, HBASE-13082_1.pdf,
> HBASE-13082_12.patch, HBASE-13082_13.patch, HBASE-13082_14.patch,
> HBASE-13082_1_WIP.patch, HBASE-13082_2.pdf, HBASE-13082_2_WIP.patch,
> HBASE-13082_3.patch, HBASE-13082_4.patch, HBASE-13082_9.patch,
> HBASE-13082_9.patch, HBASE-13082_withoutpatch.jpg, HBASE-13082_withpatch.jpg,
> gc.png, gc.png, gc.png, hits.png, next.png, next.png
>
>
> Continuing where HBASE-10015 left off.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make
> the cores wait for memory fetches).
> There are some drawbacks too:
> * All calls to RegionScanner need to remain synchronized
> * Implementors of coprocessors need to be diligent in following the locking
> contract. For example, Phoenix does not lock RegionScanner.nextRaw() as
> required by the documentation (not picking on Phoenix, this one is my fault
> as I told them it's OK)
> * Possible starvation of flushes and compactions under heavy read load:
> RegionScanner operations would keep acquiring the locks, and the
> flushes/compactions would not be able to finalize the set of files.
> I'll have a patch soon.
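The volatile approach benchmarked in the comment above maps onto the scanner-reset problem roughly as follows. This is a hedged sketch of the pattern only, not HBase code; the names (flushOccurred, notifyFileSetChanged, reopenScannerStack) are hypothetical:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the pattern discussed above: the scanner's hot next() path only
// reads a volatile flag; flushes/compactions set the flag, and the scanner
// rebuilds its state under a lock only when the flag is observed.
// All names here are illustrative, not actual HBase APIs.
public class ScannerResetSketch {
    private final ReentrantLock resetLock = new ReentrantLock();
    private volatile boolean flushOccurred = false;
    private int position = 0;

    // Called by the flush/compaction thread when the store's file set changes.
    void notifyFileSetChanged() { flushOccurred = true; }

    int next() {
        if (flushOccurred) {           // cheap volatile read, almost always false
            resetLock.lock();
            try {
                if (flushOccurred) {   // re-check under the lock
                    reopenScannerStack();
                    flushOccurred = false;
                }
            } finally {
                resetLock.unlock();
            }
        }
        return position++;             // fast path: no lock taken
    }

    private void reopenScannerStack() {
        // would rebuild the readers over the new set of files
    }

    public static void main(String[] args) {
        ScannerResetSketch s = new ScannerResetSketch();
        s.next();                      // 0
        s.next();                      // 1
        s.notifyFileSetChanged();      // simulate a flush completing
        System.out.println(s.next()); // 2 (reset taken once, then fast path again)
    }
}
```

Because the flush thread only needs to publish a change and is not blocked by readers on the fast path, this shape also sidesteps the starvation concern listed above: readers never hold a lock that a flush must wait for.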
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)