I think this is a pretty common use-case actually. If one client has put something into ZooKeeper and another client is trying to pull it out, it may not know in advance how big the data client #1 put in is. What we do locally is have a wrapper around zoo_get that starts with a reasonable default for buffer_len. If that turns out to be too small because the data is larger, you can inspect the actual size inside the returned Stat* and then re-issue the get with the correct value.
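Roughly, the wrapper looks like the sketch below. This is illustrative, not our exact code: the function name, the 1 KB default, and the error handling are just placeholders, and it relies on Stat.dataLength reporting the full size of the node data.

```c
#include <stdlib.h>
#include <zookeeper/zookeeper.h>

#define DEFAULT_BUF_LEN 1024  /* reasonable default; tune for your data */

/* Fetch the full data of a znode, retrying with the exact size from the
 * Stat if the default buffer was too small. On ZOK, *out is malloc'd and
 * owned by the caller. */
static int get_full_data(zhandle_t *zh, const char *path,
                         char **out, int *out_len)
{
    struct Stat stat;
    int len = DEFAULT_BUF_LEN;
    char *buf = malloc(len);
    if (!buf)
        return ZSYSTEMERROR;

    int rc = zoo_get(zh, path, 0, buf, &len, &stat);
    if (rc != ZOK) {
        free(buf);
        return rc;
    }

    /* Stat.dataLength holds the true size of the node data; if the first
     * buffer could not hold it all, re-issue the get with the exact size.
     * (The data could still change between the two calls, so a stricter
     * wrapper would loop until the size fits.) */
    if (stat.dataLength > DEFAULT_BUF_LEN) {
        len = stat.dataLength;
        char *bigger = realloc(buf, len);
        if (!bigger) {
            free(buf);
            return ZSYSTEMERROR;
        }
        buf = bigger;
        rc = zoo_get(zh, path, 0, buf, &len, &stat);
        if (rc != ZOK) {
            free(buf);
            return rc;
        }
    }

    *out = buf;
    *out_len = len;
    return ZOK;
}
```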
On Fri, Jan 10, 2014 at 10:02 AM, Raúl Gutiérrez Segalés <[email protected]> wrote:

> On 10 January 2014 08:04, Kah-Chan Low <[email protected]> wrote:
>
> > int zoo_get(zhandle_t *zh, const char *path, int watch, char *buffer,
> > int* buffer_len, struct Stat *stat)
> >
> > Developer has to anticipate the max. size of the node data. Is there any
> > way to get around this?
>
> In which case would you not know the size?
>
> -rgs
