Also, on a second reading of the email: make sure you're loading documents via _bulk_docs. When I load huge numbers of documents I tend to load a couple thousand at a time. If an update fails, I fall back to a binary search to find the offending record. There's an open request for identifying the offending records directly.
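A minimal sketch of that batch-plus-binary-search approach, assuming Python with the requests library; the database URL, batch size, and helper names are placeholders, not part of any existing tool:

    # Sketch: bulk load via _bulk_docs, binary-search a failing batch
    # down to the single offending record. URL/batch size are assumptions.
    import json
    import requests

    DB_URL = "http://localhost:5984/mydb"   # hypothetical database
    BATCH_SIZE = 2000                        # "a couple thousand at a time"

    def bulk_insert(docs):
        """POST a batch to _bulk_docs; True if the whole batch was accepted."""
        resp = requests.post(
            DB_URL + "/_bulk_docs",
            data=json.dumps({"docs": docs}),
            headers={"Content-Type": "application/json"},
        )
        return resp.status_code in (200, 201)

    def find_offender(docs):
        """Recursively halve a failing batch until one record is isolated."""
        if len(docs) == 1:
            return docs[0]
        mid = len(docs) // 2
        for half in (docs[:mid], docs[mid:]):
            if not bulk_insert(half):
                return find_offender(half)
        return None  # both halves succeeded; the failure didn't reproduce

    def load(all_docs):
        for i in range(0, len(all_docs), BATCH_SIZE):
            batch = all_docs[i:i + BATCH_SIZE]
            if not bulk_insert(batch):
                print("offending record:", find_offender(batch))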
View updates are admittedly slower than we'd like. There's planned work on parallelizing this sort of thing to automagically fill out multi-node clusters. Last I remember, though, this is a 'probably 1.0' feature.

On Sun, Nov 2, 2008 at 5:23 PM, Ask Bjørn Hansen <[EMAIL PROTECTED]> wrote:
> What are the largest known production DBs in CouchDB?
>
> I'm loading ~3M documents into a database that'll probably be around 10GB
> and then grow from there. So not very much data, but inserting and updating
> views is much slower than I expected (or thought they were in tests on
> earlier versions, is that possible?)
>
> - ask