On Tue, 2016-08-02 at 12:50 +1200, Jim Cheetham wrote:
> On Mon, Aug 1, 2016 at 3:58 PM, Adrian <[email protected]> wrote:
> >
> > A couple of hours downtime in case of failure is acceptable so the
> > switch between them can even be done manually by changing the
> > settings in the firewall, but most likely I will automate the switch
> > and set an alarm or notification for the event.
>
> So you've already committed to an approach where "the" production
> server is completely self-contained, and isn't going to be relying on
> other machines like DB servers and so on. Fair enough.
>
> The "couple of hours downtime" is interesting; now it's beginning to
> look like there isn't much data being *created* on the server that
> needs to be preserved. Can you clarify that? When the production
> server goes down, how old can the state of the standby be, in
> comparison?
>
> If what you have is effectively a publishing platform, rather than an
> updating one, then life becomes much easier - all you have to do is
> to make sure that configuration and data changes are made on both
> internal machines at the same time (with the added benefit that the
> warm spare machine can be designated "test", and you can push changes
> to that first, then test to make sure they work; then switch over
> from one to the other and do the changes on what was the prod box
> before you started).
>
> Because you have multiple VMs, you might need to consider automation
> within each VM being responsible for updating its partner, rather
> than trying to do it from the host OS.
>
> If you can expose any updated state on these machines as a simple
> filesystem change, then you can't get much better than rsync in an
> infinite loop :-) (don't call things like rsync from cron, unless you
> also include locking checks to make sure you aren't running multiple
> simultaneous copies of the command when things go wrong).
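Taking the rsync-in-an-infinite-loop point first: that is roughly the shape I had pictured. Just to check I've understood the locking part, here is a rough sketch in Python rather than shell; the /srv/data path, the "mirror" ssh host alias and the lock file location are all placeholders, not anything I've settled on.

#!/usr/bin/env python3
"""Keep pushing /srv/data to the warm spare with rsync, one copy at a time.

Assumptions (placeholders): the data lives under /srv/data, the standby is
reachable as the ssh host alias 'mirror', and a lock file under /run stops
overlapping runs if the loop is ever started twice.
"""
import fcntl
import subprocess
import sys
import time

LOCK_FILE = "/run/mirror-sync.lock"   # hypothetical path
SOURCE = "/srv/data/"                 # trailing slash: sync directory contents
DEST = "mirror:/srv/data/"            # ssh host alias for the standby
INTERVAL = 60                         # seconds between passes

def main():
    # Hold an exclusive lock for the life of the process, so a second
    # copy of the script exits instead of running rsync in parallel.
    lock = open(LOCK_FILE, "w")
    try:
        fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit("another sync loop already holds the lock, exiting")

    while True:
        # --delete keeps the spare an exact mirror; drop it if files that
        # disappear from the primary should be kept on the standby.
        result = subprocess.run(
            ["rsync", "-az", "--delete", SOURCE, DEST],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            # Log and keep looping; a transient network error shouldn't
            # kill the mirroring process.
            print(result.stderr.strip(), file=sys.stderr)
        time.sleep(INTERVAL)

if __name__ == "__main__":
    main()

The flock means an accidental second copy of the loop simply exits rather than racing the first one, which I take to be your point about not firing rsync blindly from cron.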
For now, yes, this particular server is self-contained. If either the front end or the back end fails, one is no good without the other, and separating them would mean doubling resources. I also don't want to touch the existing systems - not yet, at least.

For the next few months, business continuity is not badly affected if this particular server goes down and the mirror doesn't take over immediately; the current way of doing business will take care of that. Ideally all the data created before the crash will have been replicated to the mirror, so after the switch work can resume without loss of data. If things go my way, the adoption of this server will be gradual, and as more systems get migrated/ported/replaced, that "couple of hours" will shrink to something more critical that may even involve replacing said server. But only if things go the way I've planned for now - I have to keep an open mind here while also balancing some budgets. I have already organised a test and a development environment on separate hardware.

Yes, with multiple VMs that is the first thing that springs to mind: replication in pairs, with each VM responsible for its mirror. I guess in the end it will be the most cost-effective option, provided I can reduce everything to filesystem changes - which I'm not sure about yet at this stage. But if I can, looping a script along those lines will be interesting. I do suspect it will be more than one script per VM, though. And considering all the other applications and system support scripts, like data migration and processing, if I go down this mirroring route I'm looking for a way to organise the various tasks and processes that is self-explanatory to a newcomer, or to myself after a holiday, so I don't have to rely purely on self-discipline. If you or anyone else knows of such a solution, I'm open to suggestions.

Thanks for all the help,
Adrian

_______________________________________________
Linux-users mailing list
[email protected]
http://lists.canterbury.ac.nz/mailman/listinfo/linux-users
