[ https://issues.apache.org/jira/browse/OAK-6180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16000523#comment-16000523 ]
Marcel Reutegger commented on OAK-6180:
---------------------------------------

Cases I've seen so far are usually 1) are there any children? 2) get the first few child nodes, and 3) read all child nodes. So you are probably right, there are not that many cases where bandwidth is wasted. However, I still consider the timeout issue rather important.

> Tune cursor batch/limit size
> ----------------------------
>
>                 Key: OAK-6180
>                 URL: https://issues.apache.org/jira/browse/OAK-6180
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: mongomk
>            Reporter: Marcel Reutegger
>            Assignee: Marcel Reutegger
>             Fix For: 1.8
>
>
> MongoDocumentStore uses the default batch size, which means MongoDB will
> initially get 100 documents and then as many documents as fit into 4MB.
> Depending on the document size, the number of documents may be quite high and
> the risk of running into the 60-second query timeout defined by Oak
> increases.
> Tuning the batch size (or using a limit) may also be helpful in optimizing
> the amount of data transferred from MongoDB to Oak. The DocumentNodeStore
> fetches child nodes in batches as well. The logic there is slightly
> different: the initial batch size is 100 and every subsequent batch doubles
> in size until it reaches 1600. Bandwidth is wasted if the MongoDB Java driver
> fetches far more than requested by Oak.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
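The batch-doubling scheme described in the issue (start at 100, double each subsequent batch, cap at 1600) could be sketched as below. This is a minimal illustration of the growth policy only; the class and method names are hypothetical and this is not Oak's actual DocumentNodeStore code.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSizes {

    // Values taken from the issue description above.
    static final int INITIAL_BATCH_SIZE = 100;
    static final int MAX_BATCH_SIZE = 1600;

    /**
     * Returns the sequence of batch sizes used to fetch at least
     * {@code total} child nodes: 100, 200, 400, ... capped at 1600.
     */
    static List<Integer> batchSizes(int total) {
        List<Integer> sizes = new ArrayList<>();
        int batch = INITIAL_BATCH_SIZE;
        int fetched = 0;
        while (fetched < total) {
            sizes.add(batch);
            fetched += batch;
            batch = Math.min(batch * 2, MAX_BATCH_SIZE);
        }
        return sizes;
    }

    public static void main(String[] args) {
        // Reading 5000 children takes 7 round trips under this policy.
        System.out.println(batchSizes(5000));
    }
}
```

Under this policy the comment's common cases stay cheap: "are there any children?" and "get the first few child nodes" are both served by the first 100-document batch, while "read all child nodes" quickly ramps up to the 1600 cap.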