Hi.

I have found a way to write a backup script using an event-driven environment.

For starters, I have just used the naïve approach: get all document IDs, then fetch the documents one at a time.
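
Roughly, the naïve approach looks like this (sketched in Python rather than my actual AnyEvent::CouchDB code; the base URL and the `fetch` parameter are just placeholders):

```python
import json
from urllib.request import urlopen

def get_json(url):
    # Plain blocking HTTP GET returning parsed JSON.
    with urlopen(url) as resp:
        return json.load(resp)

def backup_naive(base_url, fetch=get_json):
    """Fetch every document ID via _all_docs, then each document in turn."""
    ids = [row["id"] for row in fetch(f"{base_url}/_all_docs")["rows"]]
    # One GET per document. Written synchronously here for clarity; in an
    # event-driven client all of these requests are issued immediately,
    # which is what floods the server on a large database.
    return [fetch(f"{base_url}/{doc_id}") for doc_id in ids]
```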

This works on small databases, but the load becomes too heavy on larger ones, since my script essentially tries to fetch too many documents at the same time.

I know that I have to throttle the requests, but it turns out that CouchDB doesn't handle the load gracefully. At some point I just get an "Apache CouchDB starting" entry in the log, and at the same time I can see that at least one of the running requests is closed before CouchDB has returned anything.

Is this behaviour intentional? How do I send as many requests as possible without causing the server to restart?

I'd definitely prefer it if the server could just start responding more slowly.

I am using CouchDB 1.2 (and Perl's AnyEvent::CouchDB on the client - I gave up on nano).

Regards,

Michael.
