It's a code change to increase the chunk size; it's not currently a configuration setting. When I was testing this, I increased it to 64k and 128k, and it didn't make much difference (it's quite possible I didn't do it correctly, though I did verify that the file contained larger chunks of attachment data).
B.

On Wed, Mar 24, 2010 at 11:13 AM, Vasili Batareykin <[email protected]> wrote:
>> I have measured the difference between serving static files from
>> apache2 vs. attachments from couchdb. It's always faster to do so via
>> apache2, and, on average, couchdb was 2-4 times slower at serving the
>> same data as apache2.
>
> In my case it's nginx vs. couchdb, with a 10x slowdown on static files.
>
>> This doesn't surprise me. Attachments are interleaved in chunks so
>> that concurrent writers do not block each other's progress (since only
>> one process can append to a file at a time). So reading a file from
>> couchdb involves seeking to those chunks (and they're small, 4k or
>> less) and then sending them. As Randall points out, apache2 can just
>> call sendfile().
>
> couchdb eats CPU, not IO, on a small db (two attachments, 800kb file
> size on disk). I don't know about real workloads (#/sec) with 50k+
> files and 20Gb of pics (80Gb in db?).
>
> Can I change the size of this chunk? To 32kb, 64kb or 128kb? Yes, I
> know about the overhead on disk.
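To make the difference concrete, here is a minimal sketch (not CouchDB's actual code, which is Erlang) of the two serving strategies being compared: copying a file through userspace in fixed-size chunks, as a chunked attachment reader must, versus a single large read. The file size (800kb) and the chunk sizes (4k, 64k, 128k) are taken from the thread; everything else is illustrative.

```python
import os
import tempfile

def copy_chunked(src_fd, dst_fd, chunk_size):
    """Read and write chunk_size bytes at a time.

    Each iteration is a read() plus a write() syscall with a userspace
    copy in between -- the per-chunk overhead that larger chunk sizes
    are meant to amortize. sendfile() avoids the userspace copy entirely
    by doing the transfer in the kernel.
    """
    total = 0
    while True:
        data = os.read(src_fd, chunk_size)
        if not data:
            break
        total += os.write(dst_fd, data)
    return total

# Build an ~800kb test file, mirroring the size mentioned above.
payload = b"x" * (800 * 1024)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(payload)
    src_path = f.name

# 4k chunks need ~200 read/write round trips for this file;
# 128k chunks need only 7.
for chunk in (4 * 1024, 64 * 1024, 128 * 1024):
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(os.devnull, os.O_WRONLY)
    n = copy_chunked(src, dst, chunk)
    os.close(src)
    os.close(dst)
    print("chunk=%d bytes copied=%d syscall pairs=%d"
          % (chunk, n, -(-n // chunk)))

os.unlink(src_path)
```

Note this only models the syscall-count side of the cost; CouchDB's attachment reads also pay for seeking between interleaved chunks in the database file, which a contiguous static file served via sendfile() never does.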
