Spurious "checkpoint failure: conflict (are you replicating to yourself?)"
--------------------------------------------------------------------------

                 Key: COUCHDB-1359
                 URL: https://issues.apache.org/jira/browse/COUCHDB-1359
             Project: CouchDB
          Issue Type: Bug
          Components: Replication
    Affects Versions: 1.1.1
         Environment: CentOS 5.6/x64 - SpiderMonkey 1.8.5, couch 1.1.1 patched 
for COUCHDB-1333 and COUCHDB-1340
            Reporter: Alex Markham


I'm seeing these errors in the log when couch just stops replicating: the 
replication still shows up in _active_tasks, but it never checkpoints again, 
even with _replicate being called every 5 minutes.
It seems to occur when replicating from a couch 1.1.1 source (I have seen it 
on 1.0.3 machines replicating from 1.1.1).

It is definitely not replicating to itself, but I suspect the problem is in 
PUTting the _local checkpoint doc on the source db.
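
For reference, the doc I mean is the _local checkpoint doc the replicator 
writes on the source; something like this should fetch it (the db name and 
replication id below are made up - substitute the real ones from 
_active_tasks or the log):

    curl http://host01:5984/dbname/_local/<replication_id>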

Log here (snipped from host33's couch.log): 
http://www.friendpaste.com/3FLgRFzOEAkkKazLbc7Jgw 
For that log, our replication cron sshs to host33 and then curls it to 
replicate from host01 into the local database (no host specified on the 
target) as a continuous pull replication.
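
For clarity, the curl in that cron is essentially this (hostnames and db 
name are placeholders, not the exact command we run):

    curl -X POST http://localhost:5984/_replicate \
         -H "Content-Type: application/json" \
         -d '{"source": "http://host01:5984/dbname",
              "target": "dbname",
              "continuous": true}'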


We have occasionally seen slow PUTs of documents on that database (and only 
that database), which can take upwards of 10 seconds (via Futon or our app), 
as it is a creaking database with a scarred history of documents containing 
many (thousands of) conflicts.
Could this occasional slow PUT manifest itself as this error in the log?

As a workaround to keep replication flowing, would cancelling the replication 
via curl ("cancel":true) and then starting it again restart this replication 
id?
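
Something along these lines is what I had in mind (placeholder names again; 
I haven't confirmed it actually clears the stuck task on 1.1.1):

    # cancel the existing continuous pull replication
    curl -X POST http://localhost:5984/_replicate \
         -H "Content-Type: application/json" \
         -d '{"source": "http://host01:5984/dbname",
              "target": "dbname",
              "continuous": true,
              "cancel": true}'

    # start it again with the same body, minus "cancel"
    curl -X POST http://localhost:5984/_replicate \
         -H "Content-Type: application/json" \
         -d '{"source": "http://host01:5984/dbname",
              "target": "dbname",
              "continuous": true}'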
