I think you can, though you'll need to evaluate this. We've been working with the latest[1] from 1.2 with reasonable results. There are still issues with occasional hangs on long-running, filtered continuous replications, and I think also with how attachments are streamed. Performance could and will get better. The replicator touches a lot of the API, so I think it's a place where lots of problems surface. For example, using BIFs from the binary module[2] helps ibrowse and couch_http:find_in_binary run a bit faster. When time permits these changes will find their way into CouchDB.
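In the meantime, the timestamp comparison Steve mentions below can be scripted cheaply: have a cron job write a heartbeat document to the source on a schedule, fetch the same document from the target, and compare. A rough Python sketch; the heartbeat document, its field names, and the max_lag threshold are all illustrative assumptions on my part, not anything CouchDB provides:

```python
# Heartbeat-based staleness check (illustrative sketch; not a CouchDB API).
# Assumes a cron job periodically writes {"_id": "heartbeat", "ts": <unix time>}
# to the source database, and continuous replication should carry it to the
# target. Each argument below is that JSON doc as fetched from one node,
# or None if the fetch on the target 404s.

def replication_is_fresh(source_heartbeat, target_heartbeat, max_lag=300):
    """Return True when the target's copy of the heartbeat doc is within
    max_lag seconds of the source's copy."""
    if target_heartbeat is None:  # doc never made it across at all
        return False
    return source_heartbeat["ts"] - target_heartbeat["ts"] <= max_lag


src = {"_id": "heartbeat", "ts": 1349361540}
tgt = {"_id": "heartbeat", "ts": 1349361000}  # 540 s behind the source
print(replication_is_fresh(src, tgt))  # False: treat the job as stalled
```

The point is that a stalled-but-"triggered" job fails this check, so a monitor can restart just that one replication instead of wiping and repopulating the whole _replicator database.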
So I'd say give it a shot. YMMV,

Bob

[1] https://github.com/cloudant/couch_replicator
[2] https://github.com/cloudant/ibrowse/commit/cc1f8e84a669

On Oct 4, 2012, at 10:39 AM, Steve Koppelman <[email protected]> wrote:

> Running couchdb 0.10, 1.0.1 and 1.1.1 over the last couple of years, I
> got very familiar with the shortcomings of continuous replication.
> Replication would simply stop without warning at some point, but the
> replication document's status would remain "triggered". Eventually, I
> just set up a cron job that periodically wiped out the entire
> _replicator database and repopulated a new one. It was horrible, but
> was less of a hassle than writing code to iterate through all the
> replication relationships and compare the nodes' last-updated-document
> timestamps for applications that could deal with a few minutes'
> replication lag.
>
> As I build out a new cluster, I'm curious to know whether it's safe to
> trust a replication job's reported state under 1.2.0, or if there's
> another recommended way to go. This isn't a banking or airline booking
> system, so some replication lag is fine as long as there's eventual
> consistency.
>
> Rgds., etc.
>
> -sk
