Sorry, to be clear:
76.46 requests/sec * 378.912 kB/request = 28971.61152 kB/sec
701.90 requests/sec * 4.737 kB/request = 3324.9003 kB/sec

Larger files = better throughput. HTTP, TCP, or maybe B+tree overhead.

On Wed, Mar 24, 2010 at 00:06, Randall Leeds <[email protected]> wrote:
> If you multiply (#/sec) by file size, you are actually getting _better_
> throughput with the larger files.
> Do you know if the ab command uses HTTP 1.1 pipelining? If not, HTTP
> overhead would explain the extra time.
>
> Your English is very clear. Please let me know if mine is not.
>
> Regards,
> Randall
>
> On Tue, Mar 23, 2010 at 22:44, Vasili Batareykin <[email protected]> wrote:
>> Hello!
>>
>> I'm trying to use CouchDB in a simple case: as a file store with HTTP
>> access, but I'm running into a performance problem.
>>
>> Simple test:
>> curl -X PUT http://localhost:5984/users
>> curl -X PUT http://localhost:5984/users/static -d {}
>> curl -H "Content-Type: image/jpeg" -X PUT \
>>   http://localhost:5984/users/static/1.jpg?rev=4-41378c97921c2b3bc2a76f4c47f4ee86 \
>>   --data-binary @1.jpg
>>
>> Then I fetch that jpg with ab from the Apache distribution:
>>
>> 76.46 [#/sec]  ab -c 10 -n 1000
>> 39.13 [#/sec]  ab -n 1000
>> (378912-byte jpg)
>>
>> 701.90 [#/sec] ab -c 10 -n 1000
>> 338.66 [#/sec] ab -n 1000
>> (4737-byte jpg)
>>
>> Platform: Ubuntu Karmic x86, CouchDB and kernel from the repo
>> CouchDB/0.10.0 (Erlang OTP/R13B)
>> Quad Xeon 2.66 GHz, 1 GB memory
>>
>> Is there any way to get better #/sec for files >100 kB? Upgrade to
>> 0.10.1? Upgrade Erlang? Or use an x64 platform?
>> Or is this not a job for CouchDB?
>>
>> Sorry for my bad English.
>>
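For reference, the throughput arithmetic above can be reproduced with a short Python sketch. The request rates and payload sizes are taken straight from the quoted ab runs; nothing here is CouchDB-specific:

```python
# Aggregate throughput = (requests/sec) * (payload size per request, in kB).
# Figures come from the ab -c 10 runs quoted in the thread.
tests = [
    ("378912-byte jpg, ab -c 10 -n 1000", 76.46, 378.912),
    ("4737-byte jpg,   ab -c 10 -n 1000", 701.90, 4.737),
]

for name, req_per_sec, kb_per_req in tests:
    throughput_kb_per_sec = req_per_sec * kb_per_req
    print(f"{name}: {throughput_kb_per_sec:.2f} kB/sec")

# The large file wins on raw throughput (~29 MB/sec vs ~3.3 MB/sec) even
# though its request rate is ~9x lower, which is consistent with a fixed
# per-request cost (HTTP/TCP/B+tree lookup) dominating the small-file case.
```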
