Hi,

I am not sure what your point is with this thread. I notice frustration and mockery in your posts, but I fail to understand why. Maybe because I am slow, who knows? Did you join this list and the dev list just to prove that CouchDB is slow? If you don't like CouchDB and PostgreSQL is faster and gives you all you need (speed for a single session doing serialized writes), why bother writing here?

And about batch mode, you know you exaggerated a bit. You can use bulk operations in real life as well (I do, since the type of my projects requires it). It's not difficult to keep a buffer and send all the documents at once with a bulk request if you expect that many docs arriving one by one from the same session; see the sketch below. In that case the result becomes even better than the PostgreSQL case you were talking about, if that matters to you. Also, coming back to batch mode, I suppose there are more events fitting that profile than the one you described (blogging is requested far more often in the web industry than the rest of the cases). If you mean messaging, for example, 10 documents per second (or the 173 you measured with delayed_commits true, but I take the worst case) are enough for such cases. Summing up: blogging and messaging are the most popular requests in the web industry today, and for all the other projects that require automation you can use the bulk operation I mentioned (if you want to serialize the writing part).
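Just to make the bulk idea concrete, here is a rough sketch of what I mean (the bench database name is taken from your ab runs, and the document bodies are only placeholders):

curl -X POST -H "Content-Type: application/json" \
  http://127.0.0.1:5984/bench/_bulk_docs \
  -d '{"docs": [{"event": "one"}, {"event": "two"}, {"event": "three"}]}'

One request like this writes the whole buffer in a single update, so with delayed_commits set to false you pay the commit cost roughly once per buffer instead of once per document.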

So, I repeat, maybe it's just me, but I don't see the point of your complaint, especially since nobody forced you to join this community. If your boss (if you have one) is forcing you to use this product and you don't like it, find ways to improve the speed for what you need. Nobody forces you to come here and say "your product sucks because in one particular case it doesn't work the way I want!" I have also found many products incompatible with the requirements of my projects, but I don't tell their user communities that this or that product is bad and useless. If I really need a product and cannot make it run the way I need, I just ask whether it can be tuned for my needs. Maybe my way of behaving is the wrong one.

Good luck!
CGS






On 10/25/2011 11:19 PM, Konstantin Cherkasov wrote:
Hi!

> 1. delayed_commits helps you under high concurrency (batch mode) and
> less with no concurrency.
You are wrong.

curl -X PUT -H "Content-Type: application/json" \
  http://127.0.0.1:5984/_config/couchdb/delayed_commits -d "\"true\""
ab -k -c 1 -n 1000 -p ab.json -T "application/json" http://localhost:5984/bench

Requests per second:    173.71 [#/sec] (mean)

curl -X PUT -H "Content-Type: application/json" \
  http://127.0.0.1:5984/_config/couchdb/delayed_commits -d "\"false\""
ab -k -c 1 -n 1000 -p ab.json -T "application/json" http://localhost:5984/bench

Requests per second:    10.15 [#/sec] (mean)

CPU&  HDD utilization 0.01%.
It looks like there are some delays in TCP or HTTP.


> 2. If one wants to send lots of documents in the same session, that
> someone uses a bulk operation. Otherwise, high concurrency comes into
> play by means of batch mode.
Well, in real life there are many cases where events or data arrive not in batches
but one by one.

In truth, it seems to me that this BATCH MODE MANTRA leads to defects in the 
code that simply are not investigated.

"OMG! My database do 10 requests per second (for ex. Postgres do 1000 rps on 
the same hardware with fsync on)
"Forget this, just use BATCH MODE"



