Given that CouchDB is a multi-master system, it seems that reads scale gracefully while writes do not: N reads across k nodes can be spread as N/k reads per node, but because every node carries a full replica, each of the k nodes has to apply all N writes. (With 4 nodes and 1,000 operations, each node serves 250 reads but still applies all 1,000 writes.)
So what's the best practice on scaling writes? Clearly no single node should be responsible for performing all N writes, so we'd want to partition the document storage. I was thinking of consistent hashing, but that requires extra logic to merge view results across partitions. My best idea so far is to partition documents by type, so that all documents of a given type live in the same partition.
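To make the consistent-hashing idea concrete, here's a minimal sketch of the routing layer I'm imagining (the node URLs are made up and this is untested against a real cluster; it only shows how a document id would map to a partition):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps document ids to CouchDB nodes via consistent hashing."""

    def __init__(self, nodes, vnodes=100):
        # Place `vnodes` virtual points per node on the ring so the
        # key space spreads evenly even with a handful of nodes.
        self._ring = sorted(
            (self._hash("%s:%d" % (node, i)), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def node_for(self, doc_id):
        # Walk clockwise to the first virtual point at or past the
        # document's hash, wrapping around at the end of the ring.
        idx = bisect.bisect(self._hashes, self._hash(doc_id)) % len(self._ring)
        return self._ring[idx][1]

# Hypothetical node URLs; all writes for a given doc go to one partition.
ring = ConsistentHashRing([
    "http://couch1:5984",
    "http://couch2:5984",
    "http://couch3:5984",
])
print(ring.node_for("user:mike"))
```

The catch, as mentioned above, is the read side: a view query now has to fan out to every partition and merge the sorted results, which is exactly the logic I'd rather not have to write myself.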
Is http://guide.couchdb.org/draft/clustering.html current, or is there something else I should be looking at? Anyone have any success stories to share? Guidance would be much appreciated! Thanks! - Mike