[
https://issues.apache.org/jira/browse/ZOOKEEPER-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tamas Penzes updated ZOOKEEPER-1162:
------------------------------------
Fix Version/s: (was: 3.5.5)
> consistent handling of jute.maxbuffer when attempting to read large zk
> "directories"
> ------------------------------------------------------------------------------------
>
> Key: ZOOKEEPER-1162
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1162
> Project: ZooKeeper
> Issue Type: Improvement
> Components: server
> Affects Versions: 3.3.3
> Reporter: Jonathan Hsieh
> Assignee: Michael Han
> Priority: Major
> Fix For: 3.6.0
>
>
> Recently we encountered a situation where a zk directory was successfully
> populated with 250k child znodes. When our system attempted to read the
> znode dir, it failed because the serialized contents of the dir exceeded
> the default 1 MB jute.maxbuffer limit. There were a few odd things:
> 1) It seems odd that we could populate the directory to be very large but
> could not read the listing back
> 2) The workaround was bumping up jute.maxbuffer on the client side (see
> the sketch after this list)
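>
> As an illustration of the workaround in 2), here is a minimal sketch of a
> client that raises jute.maxbuffer before connecting. The 4 MB value, the
> connect string, and the /big-dir path are made-up examples, not
> recommendations. jute.maxbuffer is read once when the client classes are
> loaded, so in practice it is usually passed as -Djute.maxbuffer=... on the
> JVM command line rather than set programmatically.
> {code:java}
> import org.apache.zookeeper.WatchedEvent;
> import org.apache.zookeeper.Watcher;
> import org.apache.zookeeper.ZooKeeper;
>
> public class LargeDirList {
>     public static void main(String[] args) throws Exception {
>         // Must be set before the ZooKeeper client classes initialize;
>         // equivalent to -Djute.maxbuffer=4194304 on the command line.
>         // 4 MB is an illustrative value, not a recommendation.
>         System.setProperty("jute.maxbuffer",
>                 Integer.toString(4 * 1024 * 1024));
>
>         ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
>             public void process(WatchedEvent event) { /* no-op */ }
>         });
>         try {
>             // With ~250k children the getChildren response exceeds the
>             // default ~1 MB limit, so this call fails unless the limit
>             // above is raised.
>             System.out.println(zk.getChildren("/big-dir", false).size());
>         } finally {
>             zk.close();
>         }
>     }
> }
> {code}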
> Would it make more sense to have the server reject adding new znodes once
> the serialized directory listing would exceed jute.maxbuffer?
> Alternatively, would it make sense to have the zk dir listing ignore the
> jute.maxbuffer setting?
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)