Hi!

> 1. delayed_commits helps you in high concurrency (batch mode) and less 
> in no concurrency.
You are wrong.

curl -X PUT -H "Content-Type: application/json" \
  http://127.0.0.1:5984/_config/couchdb/delayed_commits -d "\"true\""
ab -k -c 1 -n 1000 -p ab.json -T "application/json" http://localhost:5984/bench

Requests per second:    173.71 [#/sec] (mean)

curl -X PUT -H "Content-Type: application/json" \
  http://127.0.0.1:5984/_config/couchdb/delayed_commits -d "\"false\""
ab -k -c 1 -n 1000 -p ab.json -T "application/json" http://localhost:5984/bench

Requests per second:    10.15 [#/sec] (mean)

CPU and HDD utilization are around 0.01%, i.e. the machine is essentially idle.
It looks like the time is being lost to delays somewhere in TCP or HTTP.
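For what it's worth, ~10 requests per second is roughly the ceiling you'd expect if every request pays one fsync on a consumer disk, which is exactly what delayed_commits=false adds. A minimal Python sketch (file name and iteration count are arbitrary, not CouchDB code) to measure the per-fsync cost on your own hardware:

```python
import os
import tempfile
import time

def fsync_latency(iterations=50):
    """Average seconds per small write that is fsync'd to stable storage."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            os.write(fd, b"x" * 512)  # small append, like one document
            os.fsync(fd)              # force it to disk, as delayed_commits=false does
        return (time.perf_counter() - start) / iterations
    finally:
        os.close(fd)
        os.unlink(path)

per_write = fsync_latency()
print("avg per fsync'd write: %.2f ms -> ~%d writes/sec"
      % (per_write * 1e3, 1 / per_write))
```

If this prints numbers in the same ballpark as the ab run above, the bottleneck is the disk flush, not TCP or HTTP.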


> 2. If one wants to send in the same session lots of documents, that 
> someone uses bulk operation. Otherwise, high concurrency comes in play 
> by the means of batch mode.

Well, in real life there are many cases where events or data arrive not in batches
but one at a time.
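And when events do arrive one by one, the only way batching helps is if something coalesces them before the flush. A rough Python sketch of that trade-off (hypothetical writer, arbitrary sizes, not CouchDB internals), comparing one fsync per record against one fsync per batch:

```python
import os
import tempfile
import time

def write_records(n, batch_size):
    """Write n small records, fsync'ing once per batch_size records."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for i in range(n):
            os.write(fd, b"x" * 512)
            if (i + 1) % batch_size == 0:
                os.fsync(fd)  # commit the whole batch with a single flush
        return time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)

one_by_one = write_records(100, batch_size=1)    # fsync per record
batched = write_records(100, batch_size=100)     # one fsync total
print("per-record fsync: %.3fs, batched: %.3fs" % (one_by_one, batched))
```

On a real disk the batched run is dramatically faster, which is why batch mode looks so good in benchmarks; it just doesn't help a client that must durably acknowledge each event as it arrives.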

Frankly, it seems to me that this BATCH MODE MANTRA means defects in the
code simply never get investigated:

"OMG! My database does 10 requests per second (while e.g. Postgres does 1000 rps
on the same hardware with fsync on)."
"Forget about that, just use BATCH MODE."



-- 
 Konstantin Cherkasov
