A quick aside while we're on the subject of hitting the buffer limit when listing children:
https://issues.apache.org/jira/browse/ZOOKEEPER-2260 — "Paginated getChildren call". It would be great to have some feedback on this patch.

On 9/14/15 2:40 AM, Karol Dudzinski wrote:

> Just had a very quick look at the code (on my phone, so not that easy to read), and I think what I observed can be explained by the fact that BinaryInputArchive uses that property but BinaryOutputArchive does not. Therefore it sounds like a response of arbitrary size can be sent, but it can only be received successfully if it's smaller than the buffer size. That would explain why I only needed the property on my client, but if you want data larger than 1 MB you'd need to set it on the server.
>
> Perhaps a committer (or someone not reading the code on a phone) can confirm.
>
> Karol
>
>> On 14 Sep 2015, at 09:34, Karol Dudzinski <[email protected]> wrote:
>>
>> I think there might be a bit more to this. I've not set the buffer size on the servers, as I don't need to store anywhere near 1 MB per node. However, I have hit the buffer size limit while calling getChildren. I set the buffer size only on the client calling getChildren and nowhere else, and the exception went away, so I'm not convinced the server uses that property for every response.
>>
>> I agree with Jordan that the docs are slightly confusing, as they don't even mention that this property affects getChildren calls, for example. I can't say I've looked at this bit of code closely, but that's certainly the behaviour I observed.
>>
>> Karol
>>
>>> On 13 Sep 2015, at 22:24, Jordan Zimmerman <[email protected]> wrote:
>>>
>>>> Why do all clients have to set this limit? If we have configured it on the ZooKeeper side, then it should use that. No?
>>>
>>> My read of the code shows that the clients also use jute.maxbuffer. Hopefully one of the ZK committers can comment on this, but jute (see BinaryInputArchive.java) is used on both the client and the server.
>>>> Also, how can we set this parameter through Curator? I don't see any methods in CuratorFramework which take this variable.
>>>
>>> Per the ZK docs:
>>>
>>>     jute.maxbuffer:
>>>     (Java system property: jute.maxbuffer)
>>>
>>>     This option can only be set as a Java system property. There is no
>>>     zookeeper prefix on it.
>>>
>>> On September 13, 2015 at 4:22:46 PM, Check Peck ([email protected]) wrote:
>>>
>>> Why do all clients have to set this limit? If we have configured it on the ZooKeeper side, then it should use that. No?
>>>
>>> Also, how can we set this parameter through Curator? I don't see any methods in CuratorFramework which take this variable.
>>>
>>> On Sun, Sep 13, 2015 at 2:17 PM, Jordan Zimmerman <[email protected]> wrote:
>>>
>>> Yes, all clients. Curator just wraps ZooKeeper, so all the same things apply.
>>>
>>> -Jordan
>>>
>>> On September 13, 2015 at 4:14:17 PM, Check Peck ([email protected]) wrote:
>>>
>>> I mean this code for getting children, not the earlier one (that was for data):
>>>
>>>     List<String> children = client.getChildren().forPath(path);
>>>
>>> On Sun, Sep 13, 2015 at 2:10 PM, Check Peck <[email protected]> wrote:
>>>
>>> Ok, understood that part. And I was correct on the 500 children znode name example, right?
>>>
>>> If I increase the jute.maxbuffer property, do I need to restart the whole cluster? Is there any problem if I increase it?
>>>
>>> On Sun, Sep 13, 2015 at 2:08 PM, Jordan Zimmerman <[email protected]> wrote:
>>>
>>> No, the 1 MB limit will hit you there as well. If you try to get your data (via a getData() call), it will fail. Note that you can increase the value via the jute.maxbuffer property.
>>>
>>> -Jordan
>>>
>>> On September 13, 2015 at 4:06:48 PM, Check Peck ([email protected]) wrote:
>>>
>>> What about the data in those znodes? It can be more than 1 MB per znode, correct?
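Since jute.maxbuffer can only be set as a Java system property (there is no Curator API for it), one way to set it on the client side is programmatically, before any ZooKeeper or Curator client is constructed — ZooKeeper's serialization layer reads the property when its classes first load, so setting it late has no effect. A minimal sketch; the 4 MB value is purely illustrative, not a recommendation:

```java
public class JuteMaxBufferExample {
    public static void main(String[] args) {
        // jute.maxbuffer has no "zookeeper." prefix and must be a JVM
        // system property. Set it before creating any client, because
        // ZooKeeper reads it when its serialization classes load.
        // 4 MB here is an illustrative value.
        System.setProperty("jute.maxbuffer", String.valueOf(4 * 1024 * 1024));

        // A Curator client created after this point would pick it up, e.g.:
        // CuratorFramework client = CuratorFrameworkFactory.newClient(...);

        System.out.println(System.getProperty("jute.maxbuffer"));
    }
}
```

Equivalently, pass `-Djute.maxbuffer=4194304` on the JVM command line. Per the thread above, the server JVMs need the same flag if responses (getData or getChildren) are to exceed the default; on the server side this is commonly added to the JVM flags in the ZooKeeper startup environment.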

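The thread's key point is that a getChildren response counts against the same limit as znode data: a znode with many children, or long child names, can blow past the default even though no single node stores much. A back-of-the-envelope sketch, assuming jute encodes each child name as a 4-byte length prefix plus the name's bytes (a simplification — the real wire format adds a small fixed header on top); the child count and name length are hypothetical:

```java
public class GetChildrenSizeEstimate {
    // Rough lower bound on the serialized size of a getChildren response,
    // assuming each child name costs a 4-byte length prefix plus its bytes.
    static long estimateBytes(int childCount, int avgNameBytes) {
        return (long) childCount * (4L + avgNameBytes);
    }

    public static void main(String[] args) {
        long defaultLimit = 0xfffff; // default jute.maxbuffer, just under 1 MB

        // Hypothetical parent znode: 20,000 children, 60-byte names.
        long estimate = estimateBytes(20_000, 60);

        System.out.println(estimate);                // 1280000
        System.out.println(estimate > defaultLimit); // true: exceeds default
    }
}
```

So listing ~20,000 children with 60-byte names would already overflow the default limit, which matches Karol's observation that only the client doing the getChildren needed the raised buffer.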