nickva commented on issue #5044:
URL: https://github.com/apache/couchdb/issues/5044#issuecomment-2100810799

   > for an extreme test we tried to set `[fabric] all_docs_concurrency = 10000` (ten thousand) and it suddenly worked perfectly for the first 8,000 requests.
   
   > After restarting CouchDB it went back to the previous state where it takes the token and then the 100 subdocuments.
   
   > If you could explain what all_docs_concurrency does, since it's not documented.
   
   `all_docs_concurrency` limits how many parallel doc reads are done at the cluster level. That speeds up `include_docs=true` `_all_docs` requests, but at the expense of consuming more resources per request.
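   
   If it helps, here's one way to set that value at runtime through the config API, as a sketch (`admin:password` and `localhost:5984` are placeholders for your setup; `_local` refers to the node handling the request):
   
   ```sh
   # Set [fabric] all_docs_concurrency on the local node via the config API.
   curl -X PUT "http://admin:password@localhost:5984/_node/_local/_config/fabric/all_docs_concurrency" \
        -d '"10"'
   
   # Read it back to confirm.
   curl "http://admin:password@localhost:5984/_node/_local/_config/fabric/all_docs_concurrency"
   ```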
   
   Another factor to play with is Q. What is the Q value for your dbs? By default it's 2. You can try experimenting with higher values like 8, 16, or 64.
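   
   To check what Q a database currently has and to try a higher one, something like this should work (a sketch with placeholder names and credentials; Q is fixed when a database is created, so testing a new value means creating a new database and replicating into it):
   
   ```sh
   # Check the current Q of a database ("cluster":{"q":...} in the response).
   curl -s "http://admin:password@localhost:5984/mydb" | grep -o '"q":[0-9]*'
   
   # Create a database with a higher Q and copy the data over.
   curl -X PUT "http://admin:password@localhost:5984/mydb_q16?q=16"
   curl -X POST "http://admin:password@localhost:5984/_replicate" \
        -H "Content-Type: application/json" \
        -d '{"source": "http://admin:password@localhost:5984/mydb",
             "target": "http://admin:password@localhost:5984/mydb_q16"}'
   ```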
   
   > UPDATE2: Idk if this is the case, but it could just be related to some hyperledger internal caching. I'm really struggling to understand how the whole system works.
   
   If CPU and memory usage on your containers are not very high, this does look like a slow-disk problem, with a page cache or some other limited, faster cache in front of the disk. As an experiment, try a faster local disk or a different storage setup with the same container and see what numbers you get.
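   
   A rough way to compare setups, as a sketch (again with placeholder host, credentials, and db name): time the same request with a warm and a cold page cache, and against different disks:
   
   ```sh
   # Time a single _all_docs?include_docs=true request.
   curl -o /dev/null -s -w "total: %{time_total}s\n" \
        "http://admin:password@localhost:5984/mydb/_all_docs?include_docs=true&limit=1000"
   
   # Linux only, run as root: drop the page cache between runs so the next
   # measurement hits the disk instead of memory.
   sync && echo 3 > /proc/sys/vm/drop_caches
   ```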

