On Tue, Oct 26, 2010 at 12:59 PM, David Quenzler <[email protected]> wrote:
> Other than single point of control, there is no reason multiple
> clusters could not be used.
> What would the best practice be?

For me, simpler is better, and in this case that means 2 clusters with a
simple configuration. As for the single point of control, as far as I know
the GUI can connect to a remote cluster and control it. BTW, it would be
good if the CRM shell could do the same, but I doubt that's possible.

> On 10/26/10, Serge Dubrouski <[email protected]> wrote:
>> On Tue, Oct 26, 2010 at 12:24 PM, David Quenzler <[email protected]> wrote:
>>> How about something like...
>>>
>>> Cluster with 4 nodes: node1 node2 node3 node4
>>>
>>> ResourceA runs only on node1 and node2, never on node3 or node4
>>> ResourceB runs only on node3 and node4, never on node1 or node2
>>
>> Just curious, what is the point of building a 4-node cluster in this
>> case instead of 2 clusters of 2 nodes each?
>>
>>> On 10/26/10, Pavlos Parissis <[email protected]> wrote:
>>>> On 25 October 2010 19:50, David Quenzler <[email protected]> wrote:
>>>>
>>>>> Is there a way to limit failover behavior to a subset of cluster nodes
>>>>> or pin a resource to a node?
>>>>>
>>>> Yes, there is a way.
>>>>
>>>> Make sure you have an asymmetric cluster by setting symmetric-cluster
>>>> to false, and then configure your location constraints accordingly to
>>>> get the failover domains you want.
>>>>
>>>> Here is an example from my cluster, where I have 3 nodes and 2 resource
>>>> groups. Each resource group has a unique primary node, but both of them
>>>> share a secondary node.
>>>>
>>>> location PrimaryNode-pbx_service_01 pbx_service_01 200: node-01
>>>> location PrimaryNode-pbx_service_02 pbx_service_02 200: node-02
>>>>
>>>> location SecondaryNode-pbx_service_01 pbx_service_01 10: node-03
>>>> location SecondaryNode-pbx_service_02 pbx_service_02 10: node-03
>>>>
>>>> Cheers,
>>>> Pavlos

--
Serge Dubrouski.

_______________________________________________
Pacemaker mailing list: [email protected]
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
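For David's 4-node scenario, a minimal crm shell sketch of the opt-in approach
Pavlos describes could look like the following. The resource and node names
(ResourceA, ResourceB, node1..node4) are taken from his message; the constraint
names and score values are illustrative assumptions, not anything confirmed in
the thread.

    # opt-in cluster: with symmetric-cluster=false a resource runs nowhere
    # unless a location constraint with a positive score allows it
    crm configure property symmetric-cluster=false

    # ResourceA is allowed only on node1 (preferred) and node2
    crm configure location ResourceA-node1 ResourceA 200: node1
    crm configure location ResourceA-node2 ResourceA 100: node2

    # ResourceB is allowed only on node3 (preferred) and node4
    crm configure location ResourceB-node3 ResourceB 200: node3
    crm configure location ResourceB-node4 ResourceB 100: node4

Because node1 and node2 are never named in a constraint for ResourceB (and vice
versa), each resource can fail over only within its own pair of nodes. Note that
in an asymmetric cluster every resource needs at least one positive location
constraint, otherwise it will not run anywhere.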
