[ https://issues.apache.org/jira/browse/COUCHDB-964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12935799#action_12935799 ]

David Orrell commented on COUCHDB-964:
--------------------------------------

Robert, thanks for looking into this. I'm running this on Red Hat EL5, on a box 
with a 3.2 GHz Xeon and 4 GB of memory.

For me the test shows clearly that, when downloading a 0.5 GB file, the CouchDB 
process grows by almost exactly that amount for each concurrent connection, and 
it only drops back down by the same amount once the data starts being 
transferred to the client.

I'm monitoring this by watching the RES column in top.
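
A minimal sketch of this kind of check, sampling with ps rather than watching 
top interactively; the database name testdb, document id doc1, and attachment 
name test.bin below are placeholders, and the Erlang VM process may show up as 
beam rather than beam.smp:

  URL="http://localhost:5984/testdb/doc1/test.bin"

  # Start a few concurrent downloads, discarding the payload.
  for i in $(seq 1 5); do
    curl -s "$URL" -o /dev/null &
  done

  # Sample the resident set size (RSS/RES, in KB) of the Erlang VM while
  # the downloads are in flight.
  for n in $(seq 1 10); do
    ps -C beam.smp -o rss=,vsz=,comm=
    sleep 2
  done

  wait   # let the curl jobs finish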

> Large memory usage downloading attachments
> ------------------------------------------
>
>                 Key: COUCHDB-964
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-964
>             Project: CouchDB
>          Issue Type: Bug
>          Components: HTTP Interface
>    Affects Versions: 1.0.1
>         Environment: Linux, Erlang R14B
>            Reporter: David Orrell
>
> When downloading a large attachment, the CouchDB process appears to load the 
> entire attachment into memory before any data is sent to the client. I have a 
> 1.5 GB attachment, and the CouchDB process grows by approximately this amount 
> per client connection.
> For example (as reported by Bram Nejit):
> dd if=/dev/urandom of=/tmp/test.bin count=50000 bs=10240
> Put test.bin as an attachment in a CouchDB database (a curl sketch for this 
> step follows below).
> Run:
> for i in {0..50}; do curl http://localhost:5984/[test database]/[doc_id]/test.bin > /dev/null 2>&1 & done
> This will create 51 curl processes which download from your CouchDB. Looking 
> at the memory consumption of CouchDB, it seems like it is loading large parts 
> of the file into memory.
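
For reference, the "put test.bin as an attachment" step above can be done with 
curl roughly as follows; testdb and doc1 are placeholder names, and this assumes 
CouchDB creates the document when an attachment is uploaded to a new document 
id:

  # Create the database, then upload the generated file as an attachment.
  curl -X PUT http://localhost:5984/testdb
  curl -X PUT http://localhost:5984/testdb/doc1/test.bin \
       -H "Content-Type: application/octet-stream" \
       --data-binary @/tmp/test.bin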
