32 seems normal. I would have expected maybe 48 (3 pairs, that is 6 * 8 shards
= 48), but I didn't look too closely into the internal replication algorithm.

But overall that's much less than the 1000000; I think I had misunderstood from
the previous discussion that there were that many shard-sync checkpoints in a
single db (!).

Don't delete the _local/shard-sync-* docs, as that will interfere with the
internal replicator between cluster nodes.

But if there are a lot of _local/<id> ones, you can delete those, provided the
replications have finished and you don't expect them to be resumed (if they do
resume, they'd have to reprocess the source's changes feed from the start). The
use_checkpoints = false option might also be useful for your use case: it
avoids writing a _local/<id> checkpoint doc to the source db for each
replication from it. A rough sketch of both follows below.
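As an illustration only, here is a minimal Python sketch (using the requests
library) of listing the _local docs, removing one finished replication
checkpoint, and running a one-off replication with use_checkpoints disabled.
The server URL, credentials, db names and the checkpoint id are all
hypothetical placeholders; it also assumes your CouchDB version has the
/{db}/_local_docs endpoint.

    import requests

    COUCH = "http://admin:password@localhost:5984"  # hypothetical server/credentials
    DB = "mydb"                                     # hypothetical source db

    # List the _local docs: keep the shard-sync-* ones, the rest are
    # replication checkpoints (_local/<id>).
    resp = requests.get(f"{COUCH}/{DB}/_local_docs")
    resp.raise_for_status()
    for row in resp.json()["rows"]:
        print(row["id"])

    # Delete one finished replication checkpoint (NOT a shard-sync-* doc).
    # Local docs still carry a rev, so fetch it first.
    doc_id = "_local/abc123checkpointid"            # hypothetical checkpoint id
    doc = requests.get(f"{COUCH}/{DB}/{doc_id}").json()
    requests.delete(f"{COUCH}/{DB}/{doc_id}", params={"rev": doc["_rev"]})

    # One-off replication that writes no checkpoint docs to the source db.
    repl = {
        "source": f"{COUCH}/{DB}",
        "target": f"{COUCH}/{DB}-copy",             # hypothetical target db
        "use_checkpoints": False,
    }
    requests.post(f"{COUCH}/_replicate", json=repl).raise_for_status()

The trade-off is the one mentioned above: without checkpoints, an interrupted
replication has to re-read the whole changes feed when it restarts.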

To reduce the size, another option could be to try a better compression
algorithm than the default snappy one (say deflate_6).
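If you go that route, the setting is file_compression in the [couchdb] config
section. A hedged sketch of flipping it over HTTP, assuming a recent CouchDB
where _local is accepted as an alias for the node name in the /_node config
API, and that you compact afterwards so existing data gets rewritten with the
new codec ("mydb" is again a hypothetical db name):

    import requests

    COUCH = "http://admin:password@localhost:5984"  # hypothetical server/credentials

    # Switch the file compression codec from the default snappy to deflate level 6.
    requests.put(
        f"{COUCH}/_node/_local/_config/couchdb/file_compression",
        json="deflate_6",
    ).raise_for_status()

    # New writes use the new codec; compaction rewrites the existing data.
    requests.post(
        f"{COUCH}/mydb/_compact",
        headers={"Content-Type": "application/json"},
    ).raise_for_status()

Alternatively the same setting can be put in local.ini and the node restarted.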


[ Full content available at: https://github.com/apache/couchdb/issues/1621 ]