luca-simonetti commented on issue #5044:
URL: https://github.com/apache/couchdb/issues/5044#issuecomment-2096113379

   > With `include_docs=true` tweaking `[fabric] all_docs_concurrency = 10` 
might have an effect.
   > 
   > 
https://github.com/apache/couchdb/blob/main/src/fabric/src/fabric_view_all_docs.erl#L297
   > 
   > That spawns 10 background workers at the fabric (cluster level) to open 10 
documents in parallel, but that puts more work on the cluster. Can try 
decreasing it or increasing it a bit and see if it changes anything.
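   For anyone skimming, the setting quoted above roughly corresponds to a bounded worker pool. This is just an illustrative Python sketch of the idea, not the actual Erlang code in fabric_view_all_docs.erl:

   ```python
   from concurrent.futures import ThreadPoolExecutor

   def open_docs(doc_ids, concurrency=10):
       """Open documents with at most `concurrency` parallel workers,
       mimicking what [fabric] all_docs_concurrency controls (sketch only)."""
       def open_doc(doc_id):
           # Placeholder for the real cluster-level document open.
           return {"_id": doc_id}
       with ThreadPoolExecutor(max_workers=concurrency) as pool:
           # map preserves input order, like rows in an _all_docs response.
           return list(pool.map(open_doc, doc_ids))

   docs = open_docs([f"doc-{i}" for i in range(25)], concurrency=10)
   print(len(docs))
   ```

   Raising `concurrency` opens more documents in parallel per request, at the cost of more simultaneous work on the cluster.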
   
   We tried tweaking this setting, but with no luck.
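   Concretely, this is the kind of override we experimented with in local.ini (the value shown is illustrative; per the comment above, the default is 10):

   ```ini
   ; local.ini — illustrative value, we tried both smaller and larger settings
   [fabric]
   all_docs_concurrency = 5
   ```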
   One thing that might help: we see the memory increasing linearly as the requests are made.
   
   This is the memory in the last seven days:
   
   <img width="489" alt="Screenshot 2024-05-06 alle 16 08 14" src="https://github.com/apache/couchdb/assets/44196863/a3bba0b7-1f07-42ea-a837-005e77995efa">
   
   The points where the memory rapidly decreases are restarts of the couchdb service, which make the system fast again (for a couple of minutes).
   I'm really wondering if there's a memory leak somewhere.
   

