On Tue, 26 Feb 2013, James Harper <[email protected]> wrote:
> I think drbd is going to be the best way to go, especially as it's part of
> Linux these days. My plan is:

Last time I tried DRBD it was killing my systems.  It seems that the default 
DRBD configuration reboots a node when certain failure conditions occur - 
conditions which can be triggered by mere network problems.  I never managed 
to get it to stop doing that.

In its default mode of operation DRBD writes everything synchronously to the 
secondary system.  If the link between the systems is slow and your primary 
system is doing synchronous writes (as a database server or mail server does) 
then performance is going to suck.  DRBD also supports an asynchronous mode, 
but due to bugs it was slower than the synchronous mode in my tests on a local 
GigE network; maybe its asynchronous support wouldn't look as bad over a slow 
link.
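For reference, the synchronous/asynchronous choice is the "protocol" setting 
in the resource definition.  A minimal sketch (the hostnames, devices, and 
addresses here are made up for illustration):

```
resource r0 {
    # Protocol C = fully synchronous: a write completes only once the
    # secondary has it on disk.  Protocol A = asynchronous: a write
    # completes once it's in the local TCP send buffer.  Protocol B is
    # in between (completes when the secondary has it in memory).
    protocol C;

    on alpha {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on bravo {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
```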

http://etbe.coker.com.au/2012/02/08/more-drbd-performance-tests/

Also, in my tests DRBD with the secondary disconnected gave performance 
suspiciously similar to that of a non-DRBD system with Ext4 mounted with the 
barrier=0 option.  Presumably this means that on a power failure, data on a 
DRBD system is at the same risk as with barrier=0.
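For anyone wanting to reproduce that comparison, the barrier=0 mode in 
question is just an Ext4 mount option (device and mount point here are made 
up):

```shell
# Mount Ext4 with write barriers disabled - faster, but a power failure
# can lose or corrupt recently "committed" data.
mount -o barrier=0 /dev/sdb1 /mnt/test
```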

On Tue, 26 Feb 2013, "Trent W. Buck" <[email protected]> wrote:
> What I probably WOULD try is to lvm snapshot a day before, and sync
> that.  It will be incomplete and incoherent, but you don't care because
> on the day you rsync --only-write-batch against the snapshot and then
> upload only the diff and apply it.  Since only the changed blocks from
> the last 24h have changed, that ought to reduce the downtime.

You mean running rsync on a block device?  Is that even possible?

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/
_______________________________________________
luv-main mailing list
[email protected]
http://lists.luv.asn.au/listinfo/luv-main
