[jira] [Commented] (ZOOKEEPER-2260) Paginated getChildren call

2016-01-04 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15081528#comment-15081528
 ] 

Jonathan Hsieh commented on ZOOKEEPER-2260:
---

Looks like this could be a solution to an issue I filed a long time ago -- 
ZOOKEEPER-1162.

> Paginated getChildren call
> --
>
> Key: ZOOKEEPER-2260
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2260
> Project: ZooKeeper
>  Issue Type: New Feature
>Affects Versions: 3.4.5, 3.4.6, 3.5.0, 4.0.0
>Reporter: Marco P.
>Priority: Minor
>  Labels: api, features
> Fix For: 4.0.0
>
> Attachments: ZOOKEEPER-2260.patch, ZOOKEEPER-2260.patch
>
>
> Add pagination support to the getChildren() call, allowing clients to iterate 
> over children N at a time.
> Motivations for this include:
>   - Getting out of a situation where so many children were created that 
> listing them exceeded the network buffer sizes (making it impossible to 
> recover by deleting) [1]
>   - More efficient traversal of nodes with a large number of children [2]
> I do have a patch (for 3.4.6) that we've been using successfully for a while, 
> but I suspect much more work is needed for this to be accepted. 
> [1] https://issues.apache.org/jira/browse/ZOOKEEPER-272
> [2] https://issues.apache.org/jira/browse/ZOOKEEPER-282
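
The iteration proposed above can be sketched as cursor-based pagination. The method name and in-memory list here are illustrative only; a real implementation would fetch each page from the server rather than slice a local list, and the actual API shape is exactly what this ticket is discussing:

```java
import java.util.ArrayList;
import java.util.List;

public class PaginatedChildren {

    // Return one page of up to pageSize children, starting at the cursor.
    // An empty page signals that iteration is complete.
    static List<String> getChildrenPage(List<String> allChildren,
                                        int cursor, int pageSize) {
        int end = Math.min(cursor + pageSize, allChildren.size());
        if (cursor >= end) {
            return new ArrayList<>();
        }
        return new ArrayList<>(allChildren.subList(cursor, end));
    }

    public static void main(String[] args) {
        List<String> children = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            children.add("child-" + i);
        }
        // Iterate 3 at a time instead of fetching all 10 in one response,
        // so no single response needs to fit the whole listing.
        int cursor = 0;
        List<String> page;
        while (!(page = getChildrenPage(children, cursor, 3)).isEmpty()) {
            System.out.println(page);
            cursor += page.size();
        }
    }
}
```

The point of the cursor is that each response stays bounded regardless of how many children the znode has, which addresses motivation [1].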



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZOOKEEPER-1162) consistent handling of jute.maxbuffer when attempting to read large zk directories

2011-08-25 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13091323#comment-13091323
 ] 

Jonathan Hsieh commented on ZOOKEEPER-1162:
---

Basically, I feel that to be consistent behavior-wise it should do one of:

1) reject the write when the directory becomes too big, keeping the current 
read constraint (ideally enforced in zk itself, as opposed to the client)
2) accept the write as it does currently, but then allow the read to succeed 
in this particular case
3) warn on write when the directory gets too big, and then allow reads to 
succeed even if it is too big
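
Option 1 could be sketched as a size check at create time. The per-entry overhead used here (a 4-byte length prefix per name) is an approximation of jute serialization, not the exact wire format, and the class and method names are hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

public class MaxBufferGuard {
    static final int JUTE_MAX_BUFFER = 1024 * 1024; // default 1 MB

    // Approximate serialized size of a getChildren response:
    // a vector length prefix plus a length-prefixed string per child.
    static int approxListingSize(List<String> children) {
        int size = 4; // vector length prefix
        for (String name : children) {
            size += 4 + name.getBytes(StandardCharsets.UTF_8).length;
        }
        return size;
    }

    // Would adding newChild push the listing past jute.maxbuffer?
    static boolean wouldExceedOnCreate(List<String> children, String newChild) {
        int added = 4 + newChild.getBytes(StandardCharsets.UTF_8).length;
        return approxListingSize(children) + added > JUTE_MAX_BUFFER;
    }

    public static void main(String[] args) {
        List<String> children = java.util.Arrays.asList("node-1", "node-2");
        System.out.println(wouldExceedOnCreate(children, "node-3")); // prints false for a tiny dir
    }
}
```

With a guard like this, the create that would make the listing unreadable is the one that fails, instead of the later read.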

 consistent handling of jute.maxbuffer when attempting to read large zk 
 directories
 

 Key: ZOOKEEPER-1162
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1162
 Project: ZooKeeper
  Issue Type: Improvement
  Components: server
Affects Versions: 3.3.3
Reporter: Jonathan Hsieh
Priority: Critical
 Fix For: 3.5.0


 Recently we encountered a situation where a zk directory was successfully 
 populated with 250k elements.  When our system attempted to read the znode 
 dir, it failed because the contents of the dir exceeded the default 1mb 
 jute.maxbuffer limit.  There were a few odd things:
 1) It seems odd that we could populate the dir to be very large but could 
 not read the listing.
 2) The workaround was bumping up the jute.maxbuffer setting on the client 
 side.
 Would it make more sense to have it reject adding new znodes once the 
 listing exceeds jute.maxbuffer? 
 Alternately, would it make sense to have the zk dir listing ignore the 
 jute.maxbuffer setting?

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (ZOOKEEPER-1162) consistent handling of jute.maxbuffer when attempting to read large zk directories

2011-08-24 Thread Jonathan Hsieh (JIRA)
consistent handling of jute.maxbuffer when attempting to read large zk 
directories


 Key: ZOOKEEPER-1162
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1162
 Project: ZooKeeper
  Issue Type: Improvement
Affects Versions: 3.3.3
Reporter: Jonathan Hsieh


Recently we encountered a situation where a zk directory was successfully 
populated with 250k elements.  When our system attempted to read the znode 
dir, it failed because the contents of the dir exceeded the default 1mb 
jute.maxbuffer limit.  There were a few odd things:

1) It seems odd that we could populate the dir to be very large but could 
not read the listing.
2) The workaround was bumping up the jute.maxbuffer setting on the client 
side.

Would it make more sense to have it reject adding new znodes once the 
listing exceeds jute.maxbuffer? 
Alternately, would it make sense to have the zk dir listing ignore the 
jute.maxbuffer setting?
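
The client-side workaround described above can be applied when launching the CLI. jute.maxbuffer is read as a Java system property; the 4 MB value here is illustrative, not a recommendation, and the server address is a placeholder:

```shell
# Raise the client's jute.maxbuffer from the default 1 MB to 4 MB.
# CLIENT_JVMFLAGS is picked up by zkEnv.sh when zkCli.sh starts.
export CLIENT_JVMFLAGS="-Djute.maxbuffer=4194304"
bin/zkCli.sh -server localhost:2181
```

Note this only raises the ceiling; nothing stops the directory from growing past the new limit as well.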





[jira] [Created] (ZOOKEEPER-1025) zkCli is overly sensitive to spaces.

2011-03-21 Thread Jonathan Hsieh (JIRA)
zkCli is overly sensitive to spaces.
---

 Key: ZOOKEEPER-1025
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1025
 Project: ZooKeeper
  Issue Type: Improvement
  Components: java client
Reporter: Jonathan Hsieh


Here's an example: 

I do an ls to get the znode names, then try to stat one of the znodes.  
{code}
[zk: localhost:3181(CONNECTED) 1] ls /flume-nodes
[nodes02, nodes01, nodes00, nodes05, 
nodes04, nodes03]
[zk: localhost:3181(CONNECTED) 3] stat /flume-nodes/nodes02 
cZxid = 0xb
ctime = Sun Mar 20 23:24:03 PDT 2011
... (success)
{code}

Here's a command that looks almost the same.  Notice the extra space in front 
of the znode name.

{code}
[zk: localhost:3181(CONNECTED) 2] stat  /flume-nodes/nodes02
Command failed: java.lang.IllegalArgumentException: Path length must be > 0
{code}

This seems like unexpected behavior.
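
The behavior is consistent with naive whitespace splitting: a split on a single space turns the run of two spaces into an empty argument, so the command receives "" as the path and path validation rejects it. A sketch of the difference (zkCli's actual parsing code may differ):

```java
import java.util.Arrays;

public class CliTokenize {
    // Naive split: a double space produces an empty token.
    static String[] naiveSplit(String line) {
        return line.split(" ");
    }

    // Whitespace-tolerant split: runs of spaces collapse to one separator.
    static String[] tolerantSplit(String line) {
        return line.trim().split("\\s+");
    }

    public static void main(String[] args) {
        String line = "stat  /flume-nodes/nodes02"; // two spaces after "stat"
        System.out.println(Arrays.toString(naiveSplit(line)));
        // [stat, , /flume-nodes/nodes02]  <- empty path argument
        System.out.println(Arrays.toString(tolerantSplit(line)));
        // [stat, /flume-nodes/nodes02]
    }
}
```

With the tolerant split the extra space is harmless, which is presumably what a user at an interactive prompt expects.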
