On 2013-07-08T10:13:50, Digimer <li...@alteeve.ca> wrote:

> > While in general I agree, the above failure case is not likely with
> > DRBD.
>
> It was one example.
Yes, but the use case here happened to be drbd, and thus replicated (not
shared) storage.

> You are right though, the "good" node would disconnect, so the result
> would be a split-brain.

Not necessarily, if an automatic recovery policy is configured.

> Still a poor outcome easily avoided with fencing.

True - but it is also true that there are scenarios where fencing (in the
traditional sense; effectively, the fact that each DRBD copy is independent
does provide some form of IO isolation) isn't an option, and where possibly
rolling back a transaction (worst case for drbd, I'd wager) is not
considered critical.

Regards,
    Lars

-- 
Architect Storage/HA
SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer,
HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
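[Editor's note: the "automatic recovery policy" mentioned above refers to DRBD's split-brain recovery settings, configured via the after-sb-* options in the net section of a resource. A minimal sketch for a hypothetical resource "r0"; the specific policy choices here are illustrative, not a recommendation:]

```
resource r0 {
  net {
    # No node is Primary after split-brain: keep the copy that changed
    after-sb-0pri discard-zero-changes;
    # One node is Primary: discard the Secondary's changes
    after-sb-1pri discard-secondary;
    # Both nodes are Primary: no automatic recovery, stay disconnected
    after-sb-2pri disconnect;
  }
}
```

With such a policy, the nodes reconnect and resynchronize automatically after the split-brain is detected, at the cost of discarding one side's writes (the transaction rollback scenario described above).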