I'm trying to split a monolithic database into smaller ones using filtered 
continuous replications in CouchDB 1.2.
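
To make this concrete: each replication is a doc in the _replicator database 
along these lines (a minimal sketch; the database names, the filter, and the 
"shard" field are stand-ins for my real setup):

    {
      "_id": "split-shard-042",
      "source": "http://localhost:5984/monolith",
      "target": "http://localhost:5984/shard_042",
      "continuous": true,
      "create_target": true,
      "filter": "split/by_shard",
      "query_params": { "shard": "042" }
    }

with a matching filter function in a design doc on the source:

    {
      "_id": "_design/split",
      "filters": {
        "by_shard": "function(doc, req) { return doc.shard === req.query.shard; }"
      }
    }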

I need about 200 of these replications (all on a single server) and would like 
to run as many of them in parallel as possible. When I do, however, the CPU 
load gets very high, the whole system slows to a crawl, replication itself is 
slow, and I'm seeing timeouts and other errors.
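
Would something as simple as staggering the startup help? I.e., creating the 
replication docs in batches rather than all at once, roughly like this (a 
sketch; the host, batch size, and pause are made-up numbers):

    import json
    import time
    import urllib.request

    COUCH = "http://localhost:5984"  # placeholder host

    def create_replication(i):
        """PUT one filtered continuous replication doc into _replicator."""
        doc = {
            "_id": "split-shard-%03d" % i,
            "source": COUCH + "/monolith",
            "target": COUCH + "/shard_%03d" % i,
            "continuous": True,
            "create_target": True,
            "filter": "split/by_shard",
            "query_params": {"shard": "%03d" % i},
        }
        req = urllib.request.Request(
            COUCH + "/_replicator/" + doc["_id"],
            data=json.dumps(doc).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        urllib.request.urlopen(req).close()

    # Create the docs in batches instead of all 200 at once, pausing
    # between batches to spread out the initial load.
    BATCH, PAUSE = 20, 60
    for i in range(200):
        create_replication(i)
        if (i + 1) % BATCH == 0:
            time.sleep(PAUSE)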

How can I best determine what the bottleneck is?

Are there suggestions on how to configure CouchDB to handle this better? (So 
far I've increased max_dbs_open to 200.)
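
For reference, that setting lives in the [couchdb] section of local.ini:

    [couchdb]
    max_dbs_open = 200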

How do I best achieve good throughput?

This will be a one-time task, so any large measurement / monitoring effort is 
probably overkill.

Any suggestions are much appreciated (including suggestions for different 
approaches).

Thanks,

Andreas
