RE: Question regarding replication

2016-10-17 Thread Simon Keary

Thanks Jan,

I guess we'll start looking at switching over to v2 and using a cluster. Just 
to confirm, is the approach I mentioned for doing the server updates in a 
cluster environment the right way to go (adding new patched servers into the 
cluster and then removing the old unpatched ones, roughly as sketched below)?
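
For reference, a minimal sketch of what that node swap might look like against 
the CouchDB 2.0 node-local management interface on port 5986. The hostnames, 
node names and credentials are placeholders (nothing from this thread), and 
shard placement still has to be sorted out before an old node is dropped:

    import requests

    ADMIN = ("admin", "secret")          # cluster admin credentials (placeholder)
    MGMT = "http://existing-node:5986"   # node-local port on any current member

    def add_node(name):
        """Join a new node, e.g. 'couchdb@new-patched-host', to the cluster."""
        requests.put(f"{MGMT}/_nodes/{name}", json={}, auth=ADMIN).raise_for_status()

    def remove_node(name):
        """Remove an old node once its shard replicas live elsewhere."""
        doc = requests.get(f"{MGMT}/_nodes/{name}", auth=ADMIN).json()
        requests.delete(f"{MGMT}/_nodes/{name}",
                        params={"rev": doc["_rev"]}, auth=ADMIN).raise_for_status()

    add_node("couchdb@new-patched-host")
    # ...wait for shards to be created/synced on the new node, then:
    remove_node("couchdb@old-unpatched-host")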

Thanks,
Simon



-----Original Message-----
From: Jan Lehnardt [mailto:j...@apache.org] 
Sent: Monday, 17 October 2016 6:42 PM
To: user@couchdb.apache.org
Subject: Re: Question regarding replication

Heya,

at this point, we’d recommend a CouchDB 2.0 cluster to do the seamless updates 
on the server side.

There are some details in how replication state is tracked that can trip you 
up in the 1.x scenario, most commonly by forcing a from-scratch sync for all 
clients (which wouldn’t re-submit any data, just check, one by one, whether all 
docs are in the right place, so that’d take a while).
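
To make that concrete: the replicator tracks its progress in _local checkpoint 
documents on both source and target, keyed by a replication ID and measured 
against the database’s update_seq. A rough sketch of how that state could be 
inspected (the database URL and replication ID below are placeholders):

    import requests

    DB = "http://db.example.com:5984/mydb"    # placeholder database URL
    rep_id = "replication-id-from-active-tasks"  # placeholder; see the
                                                 # replication_id in /_active_tasks

    # update_seq is what the changes feed and the checkpoints are measured
    # against; it will generally differ between a server and a replicated copy.
    print(requests.get(DB).json()["update_seq"])

    # The checkpoint doc records the last source sequence confirmed on both
    # sides, plus a session history used to find a common restart point; if no
    # common session is found after a swap, replication restarts from 0.
    ckpt = requests.get(f"{DB}/_local/{rep_id}").json()
    print(ckpt.get("source_last_seq"), len(ckpt.get("history", [])))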

Best
Jan
--



> On 17 Oct 2016, at 12:30, Simon Keary <ske...@immersivetechnologies.com> 
> wrote:
> 
> 
> Hi All,
> 
> I have a question regarding replication that hopefully someone can answer:
> 
> In our scenario we have an Internet-facing CouchDB 1.6 server (e.g. 
> db.example.com). We also have a number of desktop/client machines that have 
> CouchDB installed. They are configured to continuously replicate, in both 
> directions, a database on db.example.com so the database can be used when 
> offline. Occasionally we want to apply OS patches to the Internet-facing 
> CouchDB server. To do this in a safe way we assumed that we could:
> 
> * Create a replacement Internet-facing CouchDB server (B) with the required 
> OS patches and test it.
> * Replicate from the current Internet-facing CouchDB server (A) to B (a rough 
> sketch of the replication calls involved follows this list).
> * Once replication is finished, reassign the public IP for db.example.com 
> from A to B. We're using an AWS elastic IP to do this.
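> 
> For concreteness, a minimal sketch of the replications described above using 
> the standard _replicate endpoint (URLs, database names and ports are 
> placeholders; add credentials as needed):
> 
>     import requests
> 
>     LOCAL = "http://localhost:5984"
>     REMOTE = "http://db.example.com:5984/mydb"   # placeholder remote URL
> 
>     # On each client: continuous replication in both directions with the
>     # Internet-facing server, so the local copy stays usable offline.
>     for src, tgt in (("mydb", REMOTE), (REMOTE, "mydb")):
>         requests.post(f"{LOCAL}/_replicate",
>                       json={"source": src, "target": tgt, "continuous": True})
> 
>     # One-shot replication from the current server A to its patched
>     # replacement B, run before the elastic IP is moved over.
>     requests.post("http://server-a:5984/_replicate",
>                   json={"source": "http://server-a:5984/mydb",
>                         "target": "http://server-b:5984/mydb"})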
> 
> What we've realised is that the last sequence number for the database in B 
> will generally be different from (and lower than) the one for the database in 
> A. This means that after we switch over the DNS, the sequence number for the 
> database at db.example.com will change. What we're not sure of is whether 
> this will break the active replication to the client machines. From what we 
> understand, this will break the changes feed requests (at least temporarily) 
> that the CouchDB replication functionality makes internally. However, it's 
> not clear whether the checkpointing feature will fix this when the next 
> checkpoint occurs, i.e. the checkpoint fails and CouchDB then figures out 
> where to restart replication from by reading the replication history. Is this 
> how it works?
> 
> I think the better way to do what we're trying to do (seamless upgrades) 
> would be to use a CouchDB 2 cluster, put the new upgraded server(s) in the 
> cluster and then remove the old one(s). Is this the case? Ideally, though, 
> we'd like to put off switching to CouchDB 2 for a little while, so we're 
> hoping that the strategy we were planning for 1.6, or some other mechanism, 
> will work. Is there a better way to do this?
> 
> Thanks in advance!
> Simon
> 

-- 
Professional Support for Apache CouchDB:
https://neighbourhood.ie/couchdb-support/


