[ https://issues.apache.org/jira/browse/HBASE-7263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13525981#comment-13525981 ]

Gregory Chanan commented on HBASE-7263:
---------------------------------------

Arg, looks like JIRA ate my previous comment.

[~lhofhansl] I wanted to run one more thing by you before I write more code for 
this.

Why do we need the row locks (at least in the case where the user never 
explicitly grabs a rowlock)?  On the read side, the scanner just ignores KVs 
with a memstoreTS higher than its readPoint, so there doesn't seem to be a 
problem there.  On the write side, we update the memstore and prepare the 
WALEdit under the row lock, both of which seem like they can be done 
concurrently.
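
For reference, the read-side rule I'm relying on is just the following (a toy 
sketch with made-up classes, not actual StoreScanner/KeyValue code):
{code}
import java.util.ArrayList;
import java.util.List;

// Toy model: a cell carries the write number (memstoreTS) of the mutation
// that created it.
class ToyCell {
  final String row;
  final long memstoreTS;
  ToyCell(String row, long memstoreTS) { this.row = row; this.memstoreTS = memstoreTS; }
}

// A scanner opened at readPoint R skips any cell with memstoreTS > R, so
// writes that have not yet completed their MVCC never become visible to it.
class ToyScanner {
  private final long readPoint;
  ToyScanner(long readPoint) { this.readPoint = readPoint; }

  List<ToyCell> visible(List<ToyCell> memstore) {
    List<ToyCell> out = new ArrayList<ToyCell>();
    for (ToyCell c : memstore) {
      if (c.memstoreTS <= readPoint) {
        out.add(c);
      }
    }
    return out;
  }
}
{code}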

Now, given that we allow the user to grab row locks, we need some concurrency 
control.  Could we use the read-locks
I describe above for the common case (no lock passed in) and grant the user a 
write lock if they explicitly ask for it
via HTable.lockRow?  I guess frequent updates could stall the call to lockRow, 
but I think those calls are rare.
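
Very roughly, I'm imagining something like the sketch below (RowLockManager 
and the method names here are made up for illustration, not existing 
HRegion/HTable APIs):
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class RowLockManager {
  // One read/write lock per row (eviction of idle entries omitted).
  private final ConcurrentMap<String, ReentrantReadWriteLock> locks =
      new ConcurrentHashMap<String, ReentrantReadWriteLock>();

  private ReentrantReadWriteLock lockFor(String row) {
    ReentrantReadWriteLock lock = locks.get(row);
    if (lock == null) {
      ReentrantReadWriteLock newLock = new ReentrantReadWriteLock();
      ReentrantReadWriteLock existing = locks.putIfAbsent(row, newLock);
      lock = (existing != null) ? existing : newLock;
    }
    return lock;
  }

  // Common case (no explicit lock passed in): internal mutations only take
  // the shared side, so they never block each other here.
  void beginMutation(String row)  { lockFor(row).readLock().lock(); }
  void finishMutation(String row) { lockFor(row).readLock().unlock(); }

  // Explicit HTable.lockRow: take the exclusive side, which waits for any
  // in-flight mutations on the row and blocks new ones until unlockRow.
  void lockRow(String row)   { lockFor(row).writeLock().lock(); }
  void unlockRow(String row) { lockFor(row).writeLock().unlock(); }
}
{code}
(A real implementation couldn't hold a plain ReentrantReadWriteLock across 
client RPCs, since lockRow and unlockRow would typically arrive on different 
handler threads; the sketch is only meant to make the locking semantics 
concrete.)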

Am I missing something?
                
> Investigate more fine grained locking for checkAndPut/append/increment
> ----------------------------------------------------------------------
>
>                 Key: HBASE-7263
>                 URL: https://issues.apache.org/jira/browse/HBASE-7263
>             Project: HBase
>          Issue Type: Improvement
>          Components: Transactions/MVCC
>            Reporter: Gregory Chanan
>            Assignee: Gregory Chanan
>            Priority: Minor
>
> HBASE-7051 lists 3 options for fixing an ACID violation wrt checkAndPut:
> {quote}
> 1) Waiting for the MVCC to advance for read/updates: the downside is that you 
> have to wait for updates on other rows.
> 2) Have an MVCC per-row (table configuration): this avoids the unnecessary 
> contention of 1)
> 3) Transform the read/updates to write-only with rollup on read.  E.g. an 
> increment would just have the number of values to increment.
> {quote}
> HBASE-7051 and HBASE-4583 implement option #1.  The downside, as mentioned, 
> is that you have to wait for updates on other rows, since the MVCC is 
> per-region.
> Another option occurred to me that I think is worth investigating: rely on a 
> row-level read/write lock rather than MVCC.
> Here is pseudo-code for what exists today for read/updates like checkAndPut:
> {code}
> (1)  Acquire RowLock
> (1a) BeginMVCC + Finish MVCC
> (2)  Begin MVCC
> (3)  Do work
> (4)  Release RowLock
> (5)  Append to WAL
> (6)  Finish MVCC
> {code}
> Write-only operations (e.g. puts) are the same, just without step 1a.
> Now, consider the following instead:
> {code}
> (1)  Acquire RowLock
> (1a) Grab+Release RowWriteLock (instead of BeginMVCC + Finish MVCC)
> (1b) Grab RowReadLock (new step!)
> (2)  Begin MVCC
> (3)  Do work
> (4)  Release RowLock
> (5)  Append to WAL
> (6)  Finish MVCC
> (7)  Release RowReadLock (new step!)
> {code}
> As before, write-only operations are the same, just without step 1a.
> The difference here is that writes grab a row-level read lock and hold it 
> until the MVCC is completed.  The nice property that this gives you is that 
> read/updates can tell when the MVCC is done on a per-row basis, because they 
> can just try to acquire the write-lock which will block until the MVCC is 
> completed for that row in step 7.
> There is overhead for acquiring the read lock that I need to measure, but it 
> should be small, since there will never be any blocking on acquiring the 
> row-level read lock.  This is because the read lock can only block if someone 
> else holds the write lock, but both the write and read lock are only acquired 
> under the row lock.
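> A rough sketch of how steps 1a, 1b and 7 could map onto a per-row read/write 
> lock (illustrative class and method names, not actual HRegion code):
> {code}
> import java.util.concurrent.locks.ReadWriteLock;
> import java.util.concurrent.locks.ReentrantReadWriteLock;
>
> class PerRowMvccWait {
>   // One of these per row.
>   private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
>
>   // Step 1a (read/updates only): grab+release the write lock, which blocks
>   // until every earlier write on this row has reached step 7.
>   void waitForPriorWrites() {
>     rwLock.writeLock().lock();
>     rwLock.writeLock().unlock();
>   }
>
>   // Step 1b: every write takes the shared lock before beginning MVCC...
>   void beginWrite() { rwLock.readLock().lock(); }
>
>   // Step 7: ...and releases it once its MVCC transaction has completed.
>   void finishWrite() { rwLock.readLock().unlock(); }
> }
> {code}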
> I ran a quick test of this approach over a region (this directly interacts 
> with HRegion, so no client effects):
> - 30 threads
> - 5000 increments per thread
> - 30 columns per increment
> - Each increment uniformly distributed over 500,000 rows
> - 5 trials
> Better-Than-Theoretical-Max (no locking or MVCC on step 1a): 10362.2 ms
> Today: 13950 ms
> The locking approach: 10877 ms
> So it looks like an improvement, at least wrt increment.  As mentioned, I 
> need to measure the overhead of acquiring the read lock for puts.
