Another way to solve this is to use ZooKeeper's multi (transaction) command.

The idea is to upload the pieces of the large object separately into
different znodes (plain writes, no multi needed).

Then, in a single multi, update a pointer znode that holds references to the
pieces while checking the version of each piece, so the pointer never ends up
referencing a half-updated set of chunks.
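A minimal sketch of that pattern, assuming the kazoo Python client; the chunk
size, znode paths, and manifest format here are illustrative, not anything
ZooKeeper prescribes:

```python
# Chunk-and-pointer pattern: pieces are written individually, then one
# transaction (multi) checks their versions and updates the manifest.
CHUNK_SIZE = 900 * 1024  # stay safely under ZooKeeper's default 1 MB limit


def chunk_bytes(data, chunk_size=CHUNK_SIZE):
    """Split a payload into pieces small enough for one znode each."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]


def store_large_object(zk, base_path, data):
    """Write chunks to individual znodes, then atomically repoint the
    manifest znode at them via a version-checked transaction."""
    versions = []
    for i, chunk in enumerate(chunk_bytes(data)):
        path = "%s/chunk-%d" % (base_path, i)
        # Plain writes -- no multi needed for the pieces themselves.
        if zk.exists(path):
            stat = zk.set(path, chunk)
        else:
            zk.create(path, chunk, makepath=True)
            stat = zk.exists(path)
        versions.append((path, stat.version))

    # One multi: verify every piece is still at the version we wrote,
    # then update the pointer node listing the chunk paths.
    t = zk.transaction()
    for path, version in versions:
        t.check(path, version)
    manifest = "\n".join(p for p, _ in versions).encode()
    t.set_data(base_path, manifest)
    t.commit()
```

Readers would resolve the object by reading the manifest first and then
fetching each listed chunk; if a chunk version changed between the writes and
the commit, the transaction fails and the writer can retry.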

On Thu, Aug 11, 2011 at 6:08 AM, Will Johnson
<[email protected]> wrote:

> We have a situation where 99.9% of all data stored in zookeeper will be
> well under the 1mb limit (probably under 1k as well) but there is a small
> possibility that at some point users may do something to cross that
> barrier.  ...
> Is there some configuration parameter I am missing or code change I can
> make?  Or have people solved this another way?
>
