[ https://issues.apache.org/jira/browse/HBASE-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342765#comment-14342765 ]

Lars Hofhansl commented on HBASE-13082:
---------------------------------------

Oh I get that :)
What I meant to say is that under no circumstances should fewer locks be slower 
than more locks unless there is contention. Lemme look at HBASE-13071 to see if 
something sticks out.

Also check out HBASE-13109. Not entirely related, but it finally solves the 
SKIP vs. SEEK performance issues that have been plaguing us (SEEK is 
inefficient unless it can at least seek into the next block).
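
Roughly, the idea there is to only pay for a SEEK when the key we would seek to 
lies beyond the current block; otherwise issuing SKIPs is cheaper. A standalone 
sketch of that decision (made-up names, not the actual HBASE-13109 code):

// Standalone sketch of the SEEK-vs-SKIP decision; not the real patch.
public class SeekVsSkipSketch {

  enum MatchCode { INCLUDE, SKIP, SEEK_NEXT_ROW, SEEK_NEXT_COL }

  // Downgrade a SEEK to a SKIP when the seek target cannot leave the current
  // block anyway. nextBlockFirstKey is the first key of the next block, which
  // the scanner knows from the block index; if the seek target sorts before
  // it, the seek would land in the block we are already reading, so plain
  // skipping is cheaper than re-positioning the scanner.
  static MatchCode optimize(MatchCode code, String seekTarget,
                            String nextBlockFirstKey) {
    boolean isSeek =
        (code == MatchCode.SEEK_NEXT_ROW || code == MatchCode.SEEK_NEXT_COL);
    if (isSeek && nextBlockFirstKey != null
        && seekTarget.compareTo(nextBlockFirstKey) < 0) {
      return MatchCode.SKIP;
    }
    return code;
  }

  public static void main(String[] args) {
    // Seek target still inside the current block -> keep skipping.
    System.out.println(optimize(MatchCode.SEEK_NEXT_ROW, "row-0042", "row-0100"));
    // Seek target at/after the next block boundary -> the seek is worth it.
    System.out.println(optimize(MatchCode.SEEK_NEXT_ROW, "row-0100", "row-0100"));
  }
}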

With these three we should see some nice perf gains for scanning.


> Coarsen StoreScanner locks to RegionScanner
> -------------------------------------------
>
>                 Key: HBASE-13082
>                 URL: https://issues.apache.org/jira/browse/HBASE-13082
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>         Attachments: 13082-test.txt, 13082.txt, 13082.txt, gc.png, gc.png, 
> next.png, next.png
>
>
> Continuing where HBASE-10015 left off.
> We can avoid locking (and memory fencing) inside StoreScanner by deferring to 
> the lock already held by the RegionScanner.
> In tests this shows quite a scan improvement and reduced CPU (the fences make 
> the cores wait for memory fetches).
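>
> As a minimal sketch of the idea (simplified, made-up names, not the actual 
> patch): the RegionScanner takes one coarse lock per call and the StoreScanner 
> simply trusts it, instead of synchronizing each of its own methods.
>
> // Simplified sketch only; the real classes (HRegion.RegionScannerImpl,
> // StoreScanner) and the real patch are more involved than this.
> class RegionScannerSketch {
>   private final Object lock = new Object();
>   private final StoreScannerSketch storeScanner = new StoreScannerSketch();
>
>   boolean next(java.util.List<String> results) {
>     synchronized (lock) {            // one coarse lock per RegionScanner call
>       return storeScanner.next(results);
>     }
>   }
>
>   void updateReaders() {             // called on flush/compaction
>     synchronized (lock) {            // same lock, so the file-set swap is safe
>       storeScanner.reopenAfterFlush();
>     }
>   }
> }
>
> class StoreScannerSketch {
>   // No synchronized here any more: the caller's RegionScanner lock already
>   // guarantees exclusion, so we avoid a memory fence per call.
>   boolean next(java.util.List<String> results) {
>     results.add("cell");
>     return false;                    // no more rows in this toy example
>   }
>   void reopenAfterFlush() { /* switch to the new set of store files */ }
> }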
> There are some drawbacks too:
> * All calls to RegionScanner need to remain synchronized.
> * Implementors of coprocessors need to be diligent about following the locking 
> contract. For example, Phoenix does not lock around RegionScanner.nextRaw() as 
> required by the documentation (not picking on Phoenix, this one is my fault as 
> I told them it's OK); see the sketch after this list.
> * Possible starvation of flushes and compactions under heavy read load: 
> RegionScanner operations would keep getting the lock and the 
> flushes/compactions would not be able to finalize the new set of files.
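>
> For coprocessor code the contract would look roughly like this (a sketch; it 
> assumes the documented convention of synchronizing on the scanner object 
> itself):
>
> // Hypothetical coprocessor-side helper; the point is only that nextRaw()
> // must be called while holding the region scanner's lock.
> void drainScanner(org.apache.hadoop.hbase.regionserver.RegionScanner scanner)
>     throws java.io.IOException {
>   java.util.List<org.apache.hadoop.hbase.Cell> results =
>       new java.util.ArrayList<>();
>   boolean more;
>   synchronized (scanner) {           // the lock nextRaw() relies on
>     do {
>       results.clear();
>       more = scanner.nextRaw(results);
>       // consume results while still holding the lock, or copy them out
>     } while (more);
>   }
> }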
> I'll have a patch soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
