[ https://issues.apache.org/jira/browse/HBASE-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15197849#comment-15197849 ]
Esther Kundin commented on HBASE-8458:
--------------------------------------

I actually have the same problem and attempted to fix it with a coprocessor. An observer coprocessor is unable to do the appropriate locking if writes are happening from different clients. You cannot run a checkAndPut from within the coprocessor: by the time you are in preBatchMutate, the row is already locked for writing, so a checkAndPut cannot run on the same row.

If you implement it as an endpoint coprocessor, it should work (though I haven't tried it), but the endpoint takes a read/write lock on the row, which is stricter locking than checkAndPut uses, so it would hurt performance more than a batch checkAndPut, which would not block reads for as long. So a batch checkAndPut would seem to be the best solution.

> Support for batch version of checkAndPut() and checkAndDelete()
> ---------------------------------------------------------------
>
>                 Key: HBASE-8458
>                 URL: https://issues.apache.org/jira/browse/HBASE-8458
>             Project: HBase
>          Issue Type: Improvement
>          Components: Client, regionserver
>    Affects Versions: 0.95.0
>            Reporter: Hari Mankude
>
> The use case is that the user has multiple threads loading hundreds of keys
> into an HBase table. Occasionally there are collisions in the keys being
> uploaded by different threads, so for correctness it is required to do a
> checkAndPut() instead of a put(). However, doing a checkAndPut() RPC for
> every key update is not optimal. It would be good to have a batch version of
> checkAndPut() similar to batch put(). The client can partition the keys on
> region boundaries.
>
> The jira is NOT looking for any type of cross-row locking or multi-row
> atomicity with checkAndPut().
>
> Batch version of checkAndDelete() is a similar requirement.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
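For illustration only (this is not HBase API): the semantics being requested — per-row compare-and-set, batched client-side, with each row succeeding or failing independently and no cross-row atomicity — can be sketched in plain Java over a ConcurrentHashMap. The class and method names here are hypothetical; in HBase 0.95-era client code the per-row primitive is HTable.checkAndPut(row, family, qualifier, expectedValue, put), and the feature request is to ship many such checks in one RPC per region.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of batch checkAndPut semantics: each row is an
// independent compare-and-set; no locking spans more than one row.
public class BatchCheckAndPutSketch {
    private final ConcurrentHashMap<String, String> table = new ConcurrentHashMap<>();

    // Single-row checkAndPut: write newValue only if the current value equals
    // expected; expected == null means "only if the row is absent", mirroring
    // the null-check form of HBase's checkAndPut.
    public boolean checkAndPut(String row, String expected, String newValue) {
        if (expected == null) {
            return table.putIfAbsent(row, newValue) == null;
        }
        return table.replace(row, expected, newValue);
    }

    // Batch form: one boolean result per row, rows evaluated independently.
    // A real HBase implementation would group rows by region and send one
    // RPC per region server, but the per-row outcome contract is the same.
    public boolean[] batchCheckAndPut(String[] rows, String[] expected, String[] newValues) {
        boolean[] results = new boolean[rows.length];
        for (int i = 0; i < rows.length; i++) {
            results[i] = checkAndPut(rows[i], expected[i], newValues[i]);
        }
        return results;
    }

    public String get(String row) {
        return table.get(row);
    }
}
```

Note how a failed check on one row (a collision with another loader thread) leaves the other rows in the batch unaffected, which matches the issue's explicit exclusion of cross-row atomicity.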