On 2010-01-18 11:41, Colin wrote:
> Hi All,
> 
> we are currently looking at nearly the same issue; in fact, I was
> about to start a similarly titled thread when I stumbled over these
> messages…
> 
> The setup we are evaluating is actually a 2*N-node cluster, i.e. two
> slightly separated sites with N nodes each. The main difference from
> an N-node cluster is that the failure of one of the two groups of
> nodes must be considered a single failure event [which the cluster
> must protect against, e.g. loss of power at one site].

Colin,

the current approach is to deploy two Pacemaker clusters, each highly
available in its own right, and to fail over between them manually, as
described here:

http://www.drbd.org/users-guide/s-pacemaker-floating-peers.html#s-pacemaker-floating-peers-site-fail-over

This may, of course, be combined with DRBD resource stacking.
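
For illustration, a minimal drbd.conf sketch of the floating-peers
setup that guide describes (all device names and addresses below are
hypothetical, and protocol A is just one sensible choice for a
long-distance link):

  resource site-to-site {
    protocol  A;                 # asynchronous, suits inter-site links
    device    /dev/drbd0;
    disk      /dev/sda1;         # hypothetical backing device
    meta-disk internal;
    # "floating" makes DRBD identify its peer by IP address rather
    # than by hostname, so each endpoint can follow a cluster IP
    # that Pacemaker moves between the nodes of one site
    floating 10.9.9.100:7788;    # site A cluster IP (hypothetical)
    floating 10.9.10.101:7788;   # site B cluster IP (hypothetical)
  }

With resource stacking, a resource like this one would sit on top of a
protocol C resource that replicates synchronously between the nodes
within each site.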

Given that most organizations currently have a non-automatic site
failover policy (as in, "must be authorized by J. Random Vice
President"), this is a sane approach that works for most. Automatic
failover is a different matter, not just with regard to clustering
(neither Corosync, Pacemaker, nor Heartbeat currently supports any
concept of "sites"), but also in terms of IP address failover, dynamic
routing, etc.
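
The manual failover itself then boils down to promoting DRBD and
starting the services on the surviving site. A rough sketch, not the
guide's verbatim procedure; the resource names are hypothetical, and
this assumes DRBD is promoted by hand rather than by Pacemaker:

  # on the surviving site:
  crm resource start p-ip-site-b    # bring up the local cluster IP, so
                                    # DRBD can bind its floating address
                                    # (hypothetical resource name)
  drbdadm primary site-to-site      # promote the DRBD resource
  crm resource start g-services     # start the service group
                                    # (hypothetical resource name)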

Cheers,
Florian
