On 4 October 2012 21:56, stephen bartell <[email protected]> wrote:
> +1 I'm curious about this one too.
>
> Currently we have implemented our own replicator. It'd be nice to move to
> couch's own replicator.
>
> Stephen Bartell
>
> "The significant problems we face cannot be solved at the same level of
> thinking we were at when we created them." -Einstein
>
> On Oct 4, 2012, at 7:39 AM, Steve Koppelman wrote:
>
>> Running couchdb 0.10, 1.0.1 and 1.1.1 over the last couple of years, I
>> got very familiar with the shortcomings of continuous replication.
>> Replication would simply stop without warning at some point, but the
>> replication document's status would remain "triggered". Eventually, I
>> just set up a cron job that periodically wiped out the entire
>> _replicator database and repopulated a new one. It was horrible, but
>> was less of a hassle than writing code to iterate through all the
>> replication relationships and compare the nodes' last-updated-document
>> timestamps for applications that could tolerate a few minutes of
>> replication lag.
>>
>> As I build out a new cluster, I'm curious to know whether it's safe to
>> trust a replication job's reported state under 1.2.0, or if there's
>> another recommended way to go. This isn't a banking or airline booking
>> system, so some replication lag is fine as long as there's eventual
>> consistency.
>>
>> Rgds., etc.
>>
>> -sk
Are you able to open a JIRA ticket with what hasn't been working, and
ideally some logs, errors, or other information?

I'm not a heavy replication user myself, but I have heard reports of
leakage in our usage of mochiweb with continuous replication, and
possibly SSL. My recollection might be wrong, though. Benoit, Paul -
ring any bells?

A+
Dave
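For anyone following along: CouchDB writes a `_replication_state` field
("triggered", "error", "completed") into each `_replicator` document, but as
Steve notes, "triggered" alone is no proof of progress. A minimal sketch of
the fallback he describes, checking the reported state and comparing a
timestamp document across nodes, might look like the following. The host
names, database name, and the `heartbeat` document are assumptions for
illustration, not anything from the thread:

```python
import json
import urllib.request


def fetch_json(url):
    """GET a CouchDB resource and decode its JSON body."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def replication_state(rep_doc):
    """Return the _replication_state field CouchDB records in a
    _replicator document, or None if no state has been written yet."""
    return rep_doc.get("_replication_state")


def within_lag(source_ts, target_ts, max_lag_seconds):
    """True if the target's copy of a heartbeat timestamp trails the
    source's by no more than max_lag_seconds (timestamps in seconds)."""
    return (source_ts - target_ts) <= max_lag_seconds


# Illustrative usage (URLs and the 'heartbeat' doc are assumptions):
# state = replication_state(
#     fetch_json("http://node-a:5984/_replicator/a_to_b"))
# src = fetch_json("http://node-a:5984/mydb/heartbeat")["ts"]
# tgt = fetch_json("http://node-b:5984/mydb/heartbeat")["ts"]
# healthy = (state == "triggered") and within_lag(src, tgt, 300)
```

A cron job that writes a fresh timestamp into the source's heartbeat
document and then runs a check like this would flag a replication that still
reads "triggered" but has silently stalled, without wiping and rebuilding
the `_replicator` database.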
