Hello,

I just read the message about concurrent access in this mailing list:
http://mail-archives.apache.org/mod_mbox/hadoop-hbase-user/200804.mbox/[EMAIL PROTECTED]

and I just want to verify this:

If I use the future new version of HBase, i.e. 0.2.0, which introduces the
BatchUpdate object, all my updates are made in an atomic way. OK. But there is
no lock mechanism to avoid a write-after-write (lost update) problem if, for
example, I want to have code like this (I haven't found the 0.2.0 API online,
so it's just pseudo code):

// load a cell from HBase
Cell c = table.get("row", "column_family:column");
// compute a new value from the cell content
int oldValue = Integer.parseInt(c.getContent());
int newValue = oldValue + 1;
// write the new value back with a BatchUpdate
BatchUpdate bu = new BatchUpdate("row");
bu.put("column_family:column", String.valueOf(newValue));
table.commit(bu);

That is, read the current value of a cell, compute a new value that depends on
the current content, and save it back.

I guess there is NO mechanism ensuring that another client running the same
code at the same time does not read the same cell value as me and increment it
too. It means that rather than the cell value going through the sequence
1 -> 2 -> 3 -> 4, it takes the values 1 -> 2 -> 2 -> 3.

Am I right?
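
Just to make the interleaving I mean concrete, here is a minimal in-process
sketch (plain Java, nothing HBase-specific, and the sleep is only there to make
the race easy to reproduce):

// Two threads do the same read -> compute -> write on a shared value, no lock.
public class LostUpdateDemo {
    static volatile int cell = 1;                    // stands in for the HBase cell value

    public static void main(String[] args) throws InterruptedException {
        Runnable client = () -> {
            int oldValue = cell;                     // read the "cell"
            int newValue = oldValue + 1;             // compute the new value
            try { Thread.sleep(50); } catch (InterruptedException e) { }
            cell = newValue;                         // write back, possibly overwriting the other client
        };
        Thread a = new Thread(client);
        Thread b = new Thread(client);
        a.start(); b.start();
        a.join();  b.join();
        // Usually prints 2 rather than 3: one of the two increments was lost.
        System.out.println("cell = " + cell);
    }
}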

Should a lock mechanism be provided to the user through another system such as
ZooKeeper, or is it a goal of HBase to provide such a lock system?
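
To make the question concrete, what I imagine is something like the sketch
below, with a hypothetical DistributedLock (not an existing HBase or ZooKeeper
class; it could be built, for example, on ZooKeeper ephemeral nodes) wrapping
the read-modify-write. The HBase calls are still my pseudo 0.2.0 API from above.

// Hypothetical interface for an external, distributed lock.
interface DistributedLock {
    void acquire() throws Exception;
    void release() throws Exception;
}

// The read-modify-write from above, serialized by the external lock.
void incrementCell(HTable table, DistributedLock rowLock) throws Exception {
    rowLock.acquire();                               // only one client at a time past this point
    try {
        Cell c = table.get("row", "column_family:column");
        int newValue = Integer.parseInt(c.getContent()) + 1;
        BatchUpdate bu = new BatchUpdate("row");
        bu.put("column_family:column", String.valueOf(newValue));
        table.commit(bu);
    } finally {
        rowLock.release();                           // release even if the update fails
    }
}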

Thanks a lot.
Have a nice day

