[ https://issues.apache.org/jira/browse/ZOOKEEPER-393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12714497#action_12714497 ]

Benjamin Reed commented on ZOOKEEPER-393:
-----------------------------------------

The difficulty is that we are reading entries in order to cache them, but we 
have a limited amount of memory for caching the responses, so if we get too 
much data back, we end up discarding whatever doesn't fit into memory. Since 
we would like to read ahead a few entries at a time (say 10, for example), if 
the entries are bigger than we expect, say 100K instead of 10K, we could end 
up dropping 90% of the data coming back, which wastes server resources and 
network bandwidth.

We used the entry interface so that we can request specific entries, and 
because our entries are variable size, a streaming interface doesn't help us.
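
To make the trade-off concrete, here is a minimal sketch of the read-ahead 
problem in Java. All names here (ReadAheadSketch, Entry, fetchEntries) and the 
stand-in scan are illustrative assumptions, not the actual BookKeeper client 
API; the 10-entry read-ahead and 100K-vs-10K sizes come from the comment above:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch; Entry and fetchEntries are illustrative,
    // not the real BookKeeper client API.
    class ReadAheadSketch {
        static final int READ_AHEAD_COUNT = 10;       // entries requested per scan
        static final long CACHE_BUDGET = 100 * 1024;  // bytes available for caching

        record Entry(long id, byte[] data) {}

        // Stand-in for a server-side scan that only honors a count limit.
        static List<Entry> fetchEntries(long firstId, int count) {
            List<Entry> out = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                // Entries were expected to be ~10K but turn out to be ~100K each.
                out.add(new Entry(firstId + i, new byte[100 * 1024]));
            }
            return out;
        }

        public static void main(String[] args) {
            List<Entry> batch = fetchEntries(0, READ_AHEAD_COUNT);
            long cached = 0, dropped = 0;
            for (Entry e : batch) {
                if (cached + e.data().length <= CACHE_BUDGET) {
                    cached += e.data().length;   // fits in the cache budget
                } else {
                    dropped += e.data().length;  // shipped over the network, then discarded
                }
            }
            System.out.printf("cached=%d bytes, dropped=%d bytes (%.0f%% wasted)%n",
                    cached, dropped, 100.0 * dropped / (cached + dropped));
        }
    }

With a 100K cache budget and ten 100K entries coming back, only one entry 
fits; the other nine cross the network and are thrown away, which is the 90% 
waste described above.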

> Size limit for bookkeeper scans
> -------------------------------
>
>                 Key: ZOOKEEPER-393
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-393
>             Project: Zookeeper
>          Issue Type: Improvement
>          Components: contrib-bookkeeper
>            Reporter: Utkarsh Srivastava
>
> Right now a bookkeeper scan can limit the amount of data scanned by 
> specifying the number of entries to be scanned. But in many cases, if the 
> entries are of different sizes, it is not easy to know how many to scan.
> It would be better to be able to specify both a count limit and a size 
> limit, with the semantics that the scan should stop as soon as either of 
> those limits is reached.
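
For what that proposal could look like in code, here is a hedged sketch of a 
dual-limit scan loop in Java; the signature and helper names (scan, readEntry, 
Entry) are assumptions for illustration, not the contrib-bookkeeper API:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical dual-limit scan (not the actual contrib-bookkeeper API):
    // the scan stops as soon as EITHER limit is reached.
    class DualLimitScanSketch {
        record Entry(long id, byte[] data) {}

        // Stand-in for reading one ledger entry; sizes vary per entry.
        static Entry readEntry(long id) {
            int size = (id % 3 == 0) ? 100 * 1024 : 10 * 1024; // mixed 100K/10K entries
            return new Entry(id, new byte[size]);
        }

        static List<Entry> scan(long firstId, int maxEntries, long maxBytes) {
            List<Entry> result = new ArrayList<>();
            long bytes = 0;
            for (long id = firstId; result.size() < maxEntries && bytes < maxBytes; id++) {
                Entry e = readEntry(id);
                result.add(e);               // the entry that crosses maxBytes is still returned,
                bytes += e.data().length;    // so one oversized entry cannot stall the scan
            }
            return result;
        }

        public static void main(String[] args) {
            // Ask for up to 10 entries or 256K of data, whichever comes first.
            List<Entry> batch = scan(0, 10, 256 * 1024);
            System.out.println("returned " + batch.size() + " entries");
        }
    }

One deliberate choice in this sketch: the entry that pushes the scan past 
maxBytes is still returned, so a single entry larger than the byte limit 
cannot stall the scan; the caller gets one oversized entry and resumes from 
the next id.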
