keeping the aggregate size to the normal max helps things a lot, i think. that way we don't have to worry about one big update slowing everything down.

to implement this we probably need to add a new request type and a new transaction type. that will get you the atomic update property you are looking for, and you won't need to worry about any special queue management.
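roughly, a batched call from the client might end up looking something like the sketch below. the Op/multi calls are the multi API that zookeeper later added (in 3.4, if i remember right); at the time of this thread they are just an illustration of the idea, and the paths and data are made up.

import java.util.Arrays;
import java.util.List;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Op;
import org.apache.zookeeper.OpResult;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class MultiUpdateSketch {
    // Submit several znode operations as one atomic request.
    // Either every operation is applied, or none of them is.
    static List<OpResult> atomicUpdate(ZooKeeper zk)
            throws KeeperException, InterruptedException {
        return zk.multi(Arrays.asList(
                Op.create("/app/config", "v2".getBytes(),
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT),
                Op.setData("/app/status", "updating".getBytes(), -1),
                Op.delete("/app/old-config", -1)));
    }
}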

ben

On 12/20/2010 10:08 PM, Ted Dunning wrote:
On Mon, Dec 20, 2010 at 9:24 PM, Benjamin Reed <[email protected]> wrote:

are you guys going to put a limit on the size of the updates? can someone
do an update over 50 znodes where the data value is 500K, for example?

Yes.  My plan is to limit the aggregate size of all of the updates to the
same limit that normally applies to a single update.
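As a rough illustration of that check, something like the following could run before the batch is accepted. MAX_REQUEST_BYTES and BatchOp are made-up placeholder names, and the limit is only assumed to be comparable to the existing per-request cap (jute.maxbuffer, about 1 MB by default):

import java.util.List;

class BatchSizeCheck {
    // Stand-in for the server's existing per-request limit
    // (comparable to jute.maxbuffer, roughly 1 MB by default).
    static final int MAX_REQUEST_BYTES = 1 * 1024 * 1024;

    // Hypothetical representation of one update in the batch.
    static class BatchOp {
        final String path;
        final byte[] data;
        BatchOp(String path, byte[] data) {
            this.path = path;
            this.data = data;
        }
    }

    // Reject the whole batch if the combined payload would exceed
    // the same cap that applies to a single update.
    static void checkAggregateSize(List<BatchOp> ops) {
        long total = 0;
        for (BatchOp op : ops) {
            total += op.path.length() + op.data.length;
        }
        if (total > MAX_REQUEST_BYTES) {
            throw new IllegalArgumentException(
                    "batched update exceeds the single-request size limit");
        }
    }
}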


if there is a failure during the update, is it okay for just a subset of
the znodes to be updated?

That would be an unpleasant alternative.

My thought was to convert all of the updates to idempotent form and then
either add them all to the queue or fail the entire batch.
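Sketched out (with Update, IdempotentTxn, and toIdempotent as made-up placeholder names, not anything in the ZooKeeper code base), that all-or-nothing step could look roughly like this:

import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class BatchConversionSketch {
    // Hypothetical placeholders for a raw client update and its
    // idempotent server-side form (e.g. "set /a to X at version 7"
    // rather than "increment /a").
    static class Update { String path; byte[] newData; }
    static class IdempotentTxn { String path; byte[] newData; int expectedVersion; }

    // Convert every update in the batch first; only if all conversions
    // succeed are the resulting transactions appended to the commit queue.
    // If any conversion fails, nothing is enqueued and the whole batch fails.
    static boolean convertAndEnqueue(List<Update> batch,
                                     Deque<IdempotentTxn> commitQueue) {
        List<IdempotentTxn> converted = new ArrayList<>();
        for (Update u : batch) {
            IdempotentTxn txn = toIdempotent(u);   // hypothetical conversion
            if (txn == null) {
                return false;                      // fail the whole batch
            }
            converted.add(txn);
        }
        commitQueue.addAll(converted);             // enqueue as one unit
        return true;
    }

    // Stub: a real server would resolve versions, check ACLs, etc.
    static IdempotentTxn toIdempotent(Update u) {
        IdempotentTxn t = new IdempotentTxn();
        t.path = u.path;
        t.newData = u.newData;
        t.expectedVersion = -1;
        return t;
    }
}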

My hope was that there would be some way to mark the batch in the queue so
that its updates stay together when commits are pushed out to the cluster.
It might be necessary to flush the queue before inserting the batched
updates.  Presumably something like this needs to be done now (if the queue
plus the current transaction is too large, flush the queue first).
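A rough sketch of that flush-first check, with MAX_PENDING_BYTES, Txn, and flush() as made-up placeholders rather than actual server internals:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

class QueueFlushSketch {
    // Stand-in for whatever bound the server already enforces on the
    // pending-commit queue; the name and value are assumptions.
    static final int MAX_PENDING_BYTES = 1 * 1024 * 1024;

    static class Txn { byte[] payload; Txn(byte[] p) { payload = p; } }

    private final Deque<Txn> pending = new ArrayDeque<>();
    private long pendingBytes = 0;

    // If the queued transactions plus the incoming batch would exceed the
    // limit, push the queue out first so the batch can stay contiguous.
    void append(List<Txn> batch) {
        long batchBytes = 0;
        for (Txn t : batch) {
            batchBytes += t.payload.length;
        }
        if (pendingBytes + batchBytes > MAX_PENDING_BYTES) {
            flush();                      // push queued commits out first
        }
        for (Txn t : batch) {             // batch goes in back-to-back
            pending.addLast(t);
            pendingBytes += t.payload.length;
        }
    }

    void flush() {
        // A real server would hand the queued transactions to the
        // commit/proposal pipeline; here we just empty the queue.
        pending.clear();
        pendingBytes = 0;
    }
}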

Are there failure modes that would leave part of the queue committed and
part not?
