@adrienverge 

1. If the replications are finished and not expected to be resumed anymore, or 
if you don't mind them reprocessing the `_changes` feed when they are resumed, 
you can delete the `_local/<id>` checkpoint documents (there's a sketch of that below).

2. It's a bit concerning that there are 1M `_local/shard-sync-<id>` documents. 
Those are created by the internal replicator, which synchronizes shard copies in 
a cluster. Say a document write with `w=2` updates the shard copies on node1 and 
node2 only. The internal replicator will notice and eventually replicate the 
update to node3 as well. 

The reason you can't delete them is that internal replication works on 
individual shards, so deleting them via the clustered interface won't 
always work. You can still delete them via the local port 
`:5986/shards.../` (sketches of both deletions are below). But it would be good 
to figure out why there are so many to start with.
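
For the plain replication checkpoints from point 1, here's a minimal sketch of the 
GET-then-DELETE dance against the clustered port. The host, credentials, database 
name and checkpoint id below are placeholders you'd have to fill in:

```python
import requests

BASE = "http://admin:password@localhost:5984"   # hypothetical admin credentials
DB = "mydb"                                     # hypothetical database name
CHECKPOINT = "_local/<replication-id>"          # put the real checkpoint id here

url = f"{BASE}/{DB}/{CHECKPOINT}"

# _local docs still carry a _rev, so read it first and delete with that rev.
resp = requests.get(url)
resp.raise_for_status()
rev = resp.json()["_rev"]

resp = requests.delete(url, params={"rev": rev})
resp.raise_for_status()
print(resp.json())   # expect something like {"ok": true, ...}
```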
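
For a shard-sync checkpoint, the same GET-then-DELETE should work against the 
node-local port, except the shard name has to be percent-encoded because it 
contains slashes. Again, the shard name and doc id below are just placeholders:

```python
import requests
from urllib.parse import quote

NODE_LOCAL = "http://admin:password@localhost:5986"   # node-local ("backdoor") port
SHARD = "shards/00000000-1fffffff/mydb.1525352293"    # hypothetical shard name
DOC_ID = "_local/shard-sync-<id>"                     # put the real checkpoint id here

# The shard name contains slashes, so it must be percent-encoded in the URL.
url = f"{NODE_LOCAL}/{quote(SHARD, safe='')}/{DOC_ID}"

resp = requests.get(url)
resp.raise_for_status()
rev = resp.json()["_rev"]

resp = requests.delete(url, params={"rev": rev})
resp.raise_for_status()
print(resp.json())
# Repeat for each shard range, on every node that hosts a copy of that shard.
```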

Wonder if any of these happen in your case:

 * Cluster membership changes so that nodes are added and removed constantly

 * Cluster node names change?

 * Any individual shard manipulation taking place, say another tool opens, 
creates, encrypts, or moves shard files around? This might include operating on 
shards via the local (:5986) interface.

 * Rapid creation / deletion of databases?

(cc: @davisp, what do you think of ^? This is an odd occurrence. What could 
cause so many shard-sync checkpoints?)

