Even without bulk docs... that doesn't sound right. It sounds to me like the library you're using is having a problem with its HTTP client and the time gap between closing a connection and opening a new one. Try looking at how many open connections your app has when it starts to slow down.
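One way to check is to count how many TCP connections the client actually opens versus how many requests it makes. A minimal sketch in Python, using a throwaway local HTTP server standing in for CouchDB (the server, port, and paths here are made up for the demo):

```python
import http.client
import http.server
import threading

# Count how many TCP connections the client actually opens.
dial_count = 0

class CountingConnection(http.client.HTTPConnection):
    def connect(self):
        global dial_count
        dial_count += 1
        super().connect()

# Tiny local server standing in for CouchDB (hypothetical).
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enable keep-alive

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Reusing one connection object: one dial covers many requests.
conn = CountingConnection("127.0.0.1", server.server_port)
for _ in range(10):
    conn.request("GET", "/db/doc")
    conn.getresponse().read()  # drain so the connection can be reused
conn.close()
reused = dial_count

# A new connection per request (what a leaky wrapper may be doing).
for _ in range(10):
    c = CountingConnection("127.0.0.1", server.server_port)
    c.request("GET", "/db/doc")
    c.getresponse().read()
    c.close()

print("reused:", reused, "per-request:", dial_count - reused)
server.shutdown()
```

If the "per-request" pattern is what your library does under the hood, that matches the slow death described here: each loop iteration pays the dial cost and leaves sockets behind in TIME_WAIT.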
I'm guessing that in the loop it tries to reuse the same TCP connection, can't, and so it just creates a new one each time. Then slowly but surely your application begins to die. I would override the TCP dial function to count the number of created connections, or use lsof. This is often why I stay away from SDKs for CouchDB... they often limit you more than they enable you. Your driver should be the HTTP client itself, not some library that wraps making the calls for you. In my opinion, anyways 😀

On Dec 26, 2013 7:11 PM, "Jens Alfke" <[email protected]> wrote:

>
> On Dec 26, 2013, at 4:51 PM, Vladimir Ralev <[email protected]>
> wrote:
>
> > I have a script that makes a burst of around 100 very fast PUT requests
> > mixed with another 300 or so GET requests against a 4GB database.
>
> Try using _bulk_docs instead, to update multiple docs at once. It should
> be more efficient. (It's what the replicator uses to push revisions.)
>
> —Jens
>
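On the _bulk_docs suggestion: the whole burst of PUTs can be collapsed into a single POST to `/{db}/_bulk_docs` with all the documents in one JSON body. A sketch of building that payload (the doc IDs and contents here are invented for illustration; no request is actually sent):

```python
import json

# Hypothetical: the ~100 small docs from the burst, written in one round trip.
docs = [{"_id": f"doc-{i}", "value": i} for i in range(100)]

# POST /{db}/_bulk_docs takes all the updates in a single JSON body:
#   POST /mydb/_bulk_docs
#   Content-Type: application/json
payload = json.dumps({"docs": docs})

print(len(docs), "docs in one request,", len(payload), "bytes")
```

One request means one connection, one round trip, and none of the per-request dial overhead the loop is paying now.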
