Running CouchDB 0.10, 1.0.1, and 1.1.1 over the last couple of years, I got very familiar with the shortcomings of continuous replication. Replication would simply stop without warning at some point, but the replication document's status would remain "triggered". Eventually I just set up a cron job that periodically wiped out the entire _replicator database and repopulated it. That was horrible, but for applications that could tolerate a few minutes of replication lag it was less of a hassle than writing code to walk every replication relationship and compare the nodes' last-updated-document timestamps.
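For what it's worth, the comparison step I kept avoiding is trivial in isolation; it's the plumbing (fetching each node's newest timestamp) that's the chore. A minimal sketch of the check itself, where the node names, timestamps, and the five-minute threshold are all just placeholders:

```python
from datetime import datetime, timedelta

def nodes_in_sync(last_updated, max_lag=timedelta(minutes=5)):
    """Treat the cluster as converged when the most stale node trails
    the freshest node by no more than max_lag.

    last_updated maps node name -> timestamp of that node's newest document.
    """
    times = sorted(last_updated.values())
    return times[-1] - times[0] <= max_lag

# A node more than five minutes behind the others flags the cluster as stale:
nodes = {
    "db1": datetime(2012, 6, 1, 12, 0, 0),
    "db2": datetime(2012, 6, 1, 12, 1, 30),
    "db3": datetime(2012, 6, 1, 11, 52, 0),
}
print(nodes_in_sync(nodes))  # False -- db3 lags db2 by 9.5 minutes
```

That's the whole "is replication actually keeping up" test, but gathering the per-node timestamps still means a request per node per database, which is exactly the code I didn't want to maintain.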
As I build out a new cluster, I'm curious whether it's safe to trust a replication job's reported state under 1.2.0, or whether there's another recommended way to monitor it. This isn't a banking or airline booking system, so some replication lag is fine as long as there's eventual consistency. Rgds., etc. -sk
