[https://issues.apache.org/jira/browse/COUCHDB-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15924851#comment-15924851]

ASF subversion and git services commented on COUCHDB-3277:
----------------------------------------------------------

Commit fb77cbc463caa573a51f971243a5cb18ee8b2e9a in couchdb-couch-replicator's
branch refs/heads/63012-scheduler from [~vatamane]
[https://git-wip-us.apache.org/repos/asf?p=couchdb-couch-replicator.git;h=fb77cbc]

Use mem3 in couch_multidb_changes to discover _replicator shards

This is a forward-port of a corresponding commit in master:

"Use mem3 to discover all _replicator shards in replicator manager"

https://github.com/apache/couchdb-couch-replicator/commit/b281d2bb320ed6e6d8226765315a40637ba91a46

This wasn't a direct merge, as replicator shard discovery and traversal are
slightly different.

`couch_multidb_changes` is more generic: it takes a db suffix and a callback
module, so `<<"_replicator">>` is not hard-coded in the multidb changes module.
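
For illustration, here is a minimal sketch of a callback module that could be
passed to {{couch_multidb_changes}} along with a db suffix. The callback names
({{db_created/2}}, {{db_deleted/2}}, {{db_found/2}}, {{db_change/3}}) are
assumptions based on the description above, not necessarily the exact
behaviour contract:

{code}
%% Hypothetical callback module; the callbacks below are assumed from
%% the description, not copied from couch_multidb_changes itself.
-module(my_replicator_listener).
-export([db_created/2, db_deleted/2, db_found/2, db_change/3]).

%% A db matching the configured suffix was created.
db_created(DbName, Context) ->
    couch_log:notice("replicator db created: ~s", [DbName]),
    Context.

%% A matching db was deleted.
db_deleted(DbName, Context) ->
    couch_log:notice("replicator db deleted: ~s", [DbName]),
    Context.

%% A matching db was found during the initial scan.
db_found(DbName, Context) ->
    couch_log:notice("replicator db found: ~s", [DbName]),
    Context.

%% A document changed in a matching db.
db_change(DbName, Change, Context) ->
    couch_log:notice("change in ~s: ~p", [DbName, Change]),
    Context.
{code}

The suffix is then supplied at start time rather than hard-coded, e.g.
something along the lines of
{{couch_multidb_changes:start_link(<<"_replicator">>, my_replicator_listener, nil, [])}}
(argument order assumed).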

`couch_replicator_manager` handles the local `_replicator` db by directly
creating it and launching a changes feed for it. In the scheduling replicator,
creation is separate from monitoring. The logic lives in the `scan_all_dbs`
function, which first checks whether a local db matching the suffix is
present; if so, it sends `{resume_scan, DbName}` to the main process. Due to
supervisor start order, by the time that code runs the local replicator db
will already have been created.
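
A rough sketch of that local-db check (illustrative, not the committed code):

{code}
%% Sketch of the local-db check in scan_all_dbs. Server is the main
%% couch_multidb_changes process; DbSuffix is the configured suffix,
%% e.g. <<"_replicator">>. Details here are illustrative.
scan_all_dbs(Server, DbSuffix) when is_pid(Server) ->
    %% Supervisor start order guarantees the local replicator db
    %% already exists by the time this runs.
    case couch_db:open_int(DbSuffix, [sys_db]) of
        {ok, Db} ->
            gen_server:cast(Server, {resume_scan, DbSuffix}),
            couch_db:close(Db);
        _Error ->
            ok
    end,
    %% ... then continue with clustered shard discovery via mem3.
    ok.
{code}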

COUCHDB-3277


> Replication manager crashes when it finds _replicator db shards which are
> not part of a mem3 db
> ---------------------------------------------------------------------------
>
>                 Key: COUCHDB-3277
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-3277
>             Project: CouchDB
>          Issue Type: Bug
>            Reporter: Nick Vatamaniuc
>
> Currently the replication manager scans the file system on startup for
> shards with a {{_replicator}} suffix, and so discovers all replicator dbs.
> However, if there is a {{_replicator}} shard without a corresponding entry
> in the mem3 {{dbs}} db, the replicator manager crashes.
> These "orphan" replicator shards can be created during db creation, since
> shards are created first and the entry in the {{dbs}} db is added
> afterwards; they can also be left behind by a move or backup process.
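
To make the failure mode concrete: when a file-system scan surfaces an orphan
shard and the manager then asks mem3 about the corresponding db,
{{mem3:shards/1}} raises {{database_does_not_exist}}. A hypothetical guard
against that is sketched below; it illustrates the crash rather than the
committed fix, which sidesteps the problem by discovering dbs through mem3 in
the first place:

{code}
%% Hypothetical guard, for illustration only. mem3:shards/1 raises
%% error:database_does_not_exist for a db with no entry in the dbs
%% db, which is exactly what an orphan shard maps to.
shards_or_orphan(DbName) ->
    try
        {ok, mem3:shards(DbName)}
    catch
        error:database_does_not_exist ->
            {error, orphan_shard}
    end.
{code}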


