On Mon, Dec 20, 2010 at 9:24 PM, Benjamin Reed <br...@yahoo-inc.com> wrote:

> are you guys going to put a limit on the size of the updates? can someone
> do an update over 50 znodes where data value is 500K, for example?
>

Yes. My plan is to limit the aggregate size of all the updates in a batch to
the same limit that normally applies to a single update.
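
Roughly, the check I have in mind looks like this (just a sketch; the method
name, the payload list, and maxRequestBytes are made-up names for
illustration, not anything that exists in the code today):

    import java.util.List;

    // Sketch: reject a batch whose combined payload exceeds the limit
    // that normally applies to a single update.
    static void checkBatchSize(List<byte[]> payloads, int maxRequestBytes) {
        long total = 0;
        for (byte[] data : payloads) {
            total += (data == null ? 0 : data.length);
        }
        if (total > maxRequestBytes) {
            throw new IllegalArgumentException("batch payload of " + total
                    + " bytes exceeds the single-update limit of " + maxRequestBytes);
        }
    }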


> if there is a failure during the update, is it okay for just a subset of
> the znodes to be updated?
>

That would be an unpleasant alternative.

My thought was to convert all of the updates to idempotent form and then
either add them all to the queue or fail the entire batch.
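
Something along these lines, purely as a sketch (none of these names exist in
the code; the conversion function stands in for whatever turns a
setData/create/delete into its idempotent transaction form):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;
    import java.util.function.Function;

    // Sketch of all-or-nothing enqueueing: convert every update to its
    // idempotent form first; nothing reaches the queue unless every
    // conversion succeeds.
    static <U, T> void enqueueBatchOrFail(List<U> batch,
                                          Function<U, T> toIdempotent,
                                          Queue<T> proposalQueue) {
        List<T> converted = new ArrayList<T>();
        for (U update : batch) {
            // e.g. "set this znode to exactly these bytes at version v"
            T txn = toIdempotent.apply(update);
            if (txn == null) {
                throw new IllegalStateException("conversion failed; rejecting the whole batch");
            }
            converted.add(txn);
        }
        proposalQueue.addAll(converted);  // only now does anything hit the queue
    }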

My hope was that there would be some way to mark the batch in the queue so
that the updates stay together when commits are pushed out to the cluster. It
might be necessary to flush the queue before inserting the batched updates.
Presumably something like this is already needed today (if the queue plus the
current transaction is too large, flush the queue first).
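
In other words, something like this (again just a sketch with invented names,
not the actual queue code):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of "flush first if the queue plus the incoming batch would be
    // too large", so the batch always sits in the queue as one contiguous unit.
    class BatchingQueue<T> {
        private final List<T> pending = new ArrayList<T>();
        private long pendingBytes = 0;
        private final long maxFlushBytes;

        BatchingQueue(long maxFlushBytes) {
            this.maxFlushBytes = maxFlushBytes;
        }

        void appendBatch(List<T> batch, long batchBytes) {
            if (pendingBytes + batchBytes > maxFlushBytes) {
                flush();                    // push out whatever is already queued
            }
            pending.addAll(batch);          // the batch stays together
            pendingBytes += batchBytes;
        }

        void flush() {
            // push `pending` out to the cluster as a commit (details elided)
            pending.clear();
            pendingBytes = 0;
        }
    }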

Are there failure modes that would leave part of the queue committed and
part not?
