janl commented on PR #5399:
URL: https://github.com/apache/couchdb/pull/5399#issuecomment-2613521024

   I did some testing here as well:
   
   It’s quite impressive, especially in terms of added concurrency. main does 10k rps on a single node at a concurrency level of 10–15 requesting a single doc, and it drops with larger concurrency. parallel_preads does 25k rps (!) and handily goes up to 200 concurrent requests. Couldn’t go further for ulimit reasons that I can’t be bothered to sort out tonight. But that’s quite the win.
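   
   For reference, the single-doc read load has roughly this shape (a sketch using ab, as in the test below; host, db name and doc id are placeholders, not the exact values used):
   
   ```
   # request one doc over and over: -n total requests, -c concurrency
   # (vary -c from 10-15 up to 200 to see the concurrency behaviour)
   ab -n 100000 -c 15 http://127.0.0.1:5984/bench/doc-000001
   ```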
   
   —
   
   another preads test: I’m using test/bench/benchbulk to write 1M 10-byte docs in batches of 1000 into a db, and ab to read the first inserted doc with concurrency 15 for 100k requests. in both main and parallel_preads, adding the reads slows the writes down noticeably, but the read rps roughly stays the same (with parallel_preads still doing roughly 2x over main).
   
   dialled it up to 1M reads, so the read and write tests have roughly the same duration / effect on each other
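   
   the combined run is just the two tools racing each other, roughly like this (a sketch only: the benchbulk batch size / doc count / doc size are configured via the script itself, and the host, db name and doc id below are placeholders):
   
   ```
   # reads in the background: 1M requests against the first inserted doc,
   # 15 concurrent (placeholder host/db/doc id)
   ab -n 1000000 -c 15 http://127.0.0.1:5984/benchbulk/doc-000001 &
   
   # writes in the foreground, timed: 1M 10-byte docs in batches of 1000
   # (batch/doc settings live in the benchbulk script, not shown here)
   time ./test/bench/benchbulk
   
   wait
   ```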
   
   inserting 1M docs while reading the same doc 1M times with 15 concurrent readers under parallel_preads:
   
   ```
   real 1m27.434s
   user 0m21.057s
   sys  0m14.472s
   ```
   
   the read test ran for 84.316 seconds vs the 87.434s write run above
   
   just inserting the docs without reads:
   
   ```
   real 0m17.923s
   user 0m19.268s
   sys  0m8.213s
   ```
   
   for the r/w test on main we come out at 7100 rps, 2–3ms response times, with the longest request at 69ms
   
   vs parallel_preads: 11800 rps, 1–2ms, worst case 67ms.
   
   so while there’s quite a bit more throughput, concurrent reads and writes still block each other.
   
   this is on a 12-core M4 Pro box with essentially infinite IOPS and memory for the sake of this test: 10 cores at 4.5GHz + 4 slower ones (fastest consumer CPU on the market atm).

