With a 5-server zk ensemble and an 80% write ratio, you should be able
to support about 10,000 operations per second[1]. That sounds
reasonable to me for most uses that require locks. If you need higher
throughput than that, then locking probably isn't for you; taking
advantage of versioning or some other form of optimistic concurrency
control is probably necessary.
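
What I mean by "optimistic" is read-modify-write with no lock held at
all. Just to illustrate the pattern, here is a rough sketch that uses
ZooKeeper's conditional setData as the compare-and-set primitive; this
is nothing HBase-specific, and the class name, znode payload, and retry
loop are made up for the example:

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class OptimisticCounter {
  // Read-modify-write without holding a lock: if another client wrote
  // the znode between our read and our write, setData fails with
  // BadVersionException and we simply re-read and retry.
  public static long increment(ZooKeeper zk, String path)
      throws KeeperException, InterruptedException {
    while (true) {
      Stat stat = new Stat();
      byte[] data = zk.getData(path, false, stat);      // value + version
      long next = Long.parseLong(new String(data)) + 1; // ASCII digits
      try {
        zk.setData(path, Long.toString(next).getBytes(),
            stat.getVersion());   // only succeeds if version is unchanged
        return next;
      } catch (KeeperException.BadVersionException e) {
        // lost the race; loop and retry against the new version
      }
    }
  }
}

The point is simply that no lock is ever held; a conflicting write
shows up as a version mismatch and the caller re-reads and retries.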

At a minimum, I think it makes sense to have zk-based locks as an
alternative to the current locks, which tie up an RPC thread. Testing
will probably be required to see how they perform under various
assumptions.
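
To make that concrete, below is a rough sketch of the standard
ZooKeeper lock recipe (ephemeral sequential znodes, each waiter
watching its predecessor). The class name, the per-row lock directory
layout, and the absence of error handling are just for illustration,
not a proposed implementation:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZkRowLock {
  private final ZooKeeper zk;
  private final String lockDir;  // e.g. one dir per row (made up layout)
  private String myNode;

  public ZkRowLock(ZooKeeper zk, String lockDir) {
    this.zk = zk;
    this.lockDir = lockDir;
  }

  // Create an ephemeral sequential child, then wait until ours is the
  // lowest-numbered child; each waiter watches only its predecessor.
  public void lock() throws KeeperException, InterruptedException {
    myNode = zk.create(lockDir + "/lock-", new byte[0],
        Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
    String myName = myNode.substring(lockDir.length() + 1);
    while (true) {
      List<String> children = zk.getChildren(lockDir, false);
      Collections.sort(children);
      int myIndex = children.indexOf(myName);
      if (myIndex == 0) {
        return;                      // lowest sequence number: lock held
      }
      String predecessor = lockDir + "/" + children.get(myIndex - 1);
      final CountDownLatch gone = new CountDownLatch(1);
      Stat stat = zk.exists(predecessor, new Watcher() {
        public void process(WatchedEvent event) {
          gone.countDown();          // predecessor deleted (or changed)
        }
      });
      if (stat != null) {
        gone.await();                // block until the watch fires
      }
    }
  }

  // Unlock is just deleting our node; if the client dies instead, the
  // ephemeral node disappears when its session times out.
  public void unlock() throws KeeperException, InterruptedException {
    zk.delete(myNode, -1);
    myNode = null;
  }
}

The appealing part is that a client that dies drops its lock
automatically when its session expires, and nothing sits on a
server-side RPC thread while a waiter blocks.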

[1] http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperOver.html#Performance

On Thu, May 14, 2009 at 7:52 PM, Ryan Rawson <[email protected]> wrote:
> Ah I hate to be a cloud on a sunny day, but iirc, zk isn't designed for a
> high write load. With thousands of requests a second one could overwhelm the
> zk paxos consensus seeking protocol.
>
> Another thing to remember is hbase doesn't "overwrite" values, it just
> versions them. Perhaps this can be of help?
>
> On May 14, 2009 11:41 AM, "stack" <[email protected]> wrote:
>
> No consideration has been made for changes in how locks are done in new
> 0.20.0 API. Want to propose...
>
> On Thu, May 14, 2009 at 9:44 AM, Guilherme Germoglio
> <[email protected]>wrote:
>
>
>> This way, HTable could directly request for read or write row locks ( >
> http://hadoop.apache.org/z...
>
