Ok, that helps. The most natural way to do this in CouchDB 2.x is to set up a 3-node cluster. CouchDB will automatically store a replica of every document on each of those 3 nodes. If a node fails, the cluster will continue to function in a degraded state.
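As a quick illustration (just a sketch -- the address and admin credentials below are placeholders for your own setup), any node in the cluster will tell you who its peers are via the /_membership endpoint, which is an easy way to confirm you really do have three participating nodes:

    import requests

    # Placeholder values -- point this at any node of your own cluster.
    COUCH_URL = "http://127.0.0.1:5984"
    AUTH = ("admin", "secret")

    # /_membership reports the nodes this node can currently reach
    # (all_nodes) and the nodes that are expected members of the
    # cluster (cluster_nodes).
    resp = requests.get(COUCH_URL + "/_membership", auth=AUTH)
    resp.raise_for_status()
    membership = resp.json()

    print("cluster_nodes:", membership["cluster_nodes"])
    print("all_nodes:    ", membership["all_nodes"])

With the default replica count of n=3 and exactly three nodes, every node ends up holding a full copy of the data, which is what gives you the headroom to lose one.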
The easiest way to restore the cluster at this point is to replace the failed node with a new one that has the same "node name" (e.g., hostname). In that case CouchDB will automatically replicate the data to the new node to restore the original replica level. (A rough sketch of how an outside job could watch for this situation is at the bottom of this message, below the quoted thread.)

In the interest of full disclosure, the functionality I'm describing is currently part of a release candidate undergoing final testing. I do think it's the best option for your use case, though -- the critical bits have been running in production at Cloudant for many years now.

Cheers,

Adam

> On Jul 28, 2016, at 10:22 PM, Ben Adams <b...@ethansolutions.com> wrote:
>
> Hi Adam,
>
> Yeah, looking at HA.
>
> I was thinking of having a cluster of X (3) nodes, each running their own
> CouchDB instance. If node B dies for whatever reason, have an outside alert
> see that there are only 2 functional nodes, then have a job start to create
> a new node and auto-configure CouchDB to sync up and become part of the
> cluster.
>
> Just not sure about data loss?
>
> Or whether people are setting up all nodes to run off the same data set on a
> shared drive vs. each node having their own copy of part of the data set.
>
> Thanks
>
> Ben
>
>
>> On 07/28/2016 10:15 PM, Adam Kocoloski wrote:
>> Hi Ben,
>>
>> I'm not 100% certain I understand what you're looking for. Are you looking
>> to demonstrate the HA capabilities of a CouchDB 2.0 cluster? What sort of
>> "replacement" do you have in mind? One where the data previously hosted on
>> that node was lost and needs to be replicated back into the replacement node?
>>
>> Adam
>>
>>> On Jul 28, 2016, at 8:35 PM, Ben Adams <b...@ethansolutions.com> wrote:
>>>
>>> Hello, I'm looking at implementing CouchDB in a project I'm working on, but
>>> can't find a demo of someone taking down a node and replacing it without
>>> downtime, showing how this works. Thanks for any pointers.
>>>
>>> -- Ben
>>>
>>
>
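P.S. Here's the rough monitoring sketch I mentioned above. It's only an outline of the idea, not a finished tool -- the address, credentials, and example node name are made-up placeholders -- but it shows how an outside job could spot a degraded cluster by comparing the nodes CouchDB expects (cluster_nodes) against the nodes it can currently reach (all_nodes):

    import requests

    COUCH_URL = "http://127.0.0.1:5984"   # any surviving node will do
    AUTH = ("admin", "secret")            # placeholder credentials

    def missing_nodes():
        """Return cluster members that are currently unreachable."""
        m = requests.get(COUCH_URL + "/_membership", auth=AUTH).json()
        return sorted(set(m["cluster_nodes"]) - set(m["all_nodes"]))

    down = missing_nodes()
    if down:
        # Hook your own alerting / provisioning job in here. Once a
        # replacement machine comes up with the same node name (e.g. the
        # same hostname, so it registers as couchdb@db3.example.com again),
        # it drops out of this list and internal replication backfills its
        # copy of the data.
        print("Cluster is degraded; unreachable nodes:", down)
    else:
        print("All cluster nodes are reachable.")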