The configuration you are trying to build (2 cluster nodes with 1 vote each, plus a quorum disk with 1 vote, giving expected_votes = 3) must remain up if you lose one of the members, as long as the remaining node can still access the quorum disk: there are still 2 active votes (1 surviving node + 1 quorum disk), and 2 > expected_votes/2.
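As a rough sketch of the arithmetic described above (the function names are illustrative, not part of cman), the quorum rule is a strict majority of expected_votes:

```python
def quorum(expected_votes: int) -> int:
    """Minimum votes needed to be quorate: strictly more than half of expected_votes."""
    return expected_votes // 2 + 1

def is_quorate(active_votes: int, expected_votes: int) -> bool:
    """True when the active votes reach the strict-majority threshold."""
    return active_votes >= quorum(expected_votes)

# 2 nodes (1 vote each) + qdisk (1 vote): expected_votes = 3, quorum = 2
assert quorum(3) == 2
# One node down, survivor still sees the qdisk: 1 + 1 = 2 votes -> quorate
assert is_quorate(2, 3)
# A single node whose vote is not yet joined by the qdisk: 1 < 2 -> not quorate
assert not is_quorate(1, 3)
```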
The quorum (majority) must be strictly greater than expected_votes/2 (i.e. 51% or more) for service to continue.

2010/9/27 Bennie R Thomas <[email protected]>:
> Try setting your expected votes to 2 or 1.
>
> Your cluster is hanging with one node because it wants 3 votes.
>
> From: Brem Belguebli <[email protected]>
> To: linux clustering <[email protected]>
> Date: 09/25/2010 10:30 AM
> Subject: Re: [Linux-cluster] problem with quorum at cluster boot
> Sent by: [email protected]
>
> On Fri, 2010-09-24 at 12:52 -0400, [email protected] wrote:
> > I think you still need two_node="1" in your conf file if you want a
> > single node to become quorate.
>
> two_node="1" is only valid if you do not have a quorum disk.
>
> > [email protected] wrote on 09/24/2010 12:38:17 PM:
> > > hello,
> > >
> > > I have a 2 node cluster with a qdisk quorum partition;
> > > each node has 1 vote and the qdisk has 1 vote too; in cluster.conf
> > > I have this explicit declaration:
> > >
> > > <cman expected_votes="3" two_node="0"/>
> > >
> > > when I have both nodes active, cman_tool status tells me this:
> > >
> > > Version: 6.1.0
> > > Nodes: 2
> > > Expected votes: 3
> > > Quorum device votes: 1
> > > Total votes: 3
> > > Node votes: 1
> > > Quorum: 2
> > >
> > > then, if I power off a node, these values, as expected, change this way:
> > >
> > > Nodes: 1
> > > Total votes: 2
> > >
> > > and the cluster is still quorate and functional.
> > >
> > > the problem is if I power off both nodes and then power on only one
> > > of them: in this case the single node does not become quorate and the
> > > cluster does not start: I have to power on both nodes to have the
> > > cluster (and the services on the cluster) working.
> > > I'd like the cluster to be able to work (and boot) even with a single
> > > node (i.e., if one of the nodes has a hardware failure and is down, I
> > > still want to be able to reboot the working node and have it boot the
> > > cluster correctly)
> > >
> > > any hints? (thanks for reading all this)
> > >
> > > --
> > > bye,
> > > emilio
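For reference, a minimal cluster.conf matching the setup emilio describes might look like the sketch below (the cluster name, node names, and qdisk label are placeholders, not taken from the thread):

```xml
<cluster name="example" config_version="1">
  <!-- 2 node votes + 1 qdisk vote = 3 expected votes; quorum = 2 -->
  <cman expected_votes="3" two_node="0"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1" votes="1"/>
    <clusternode name="node2" nodeid="2" votes="1"/>
  </clusternodes>
  <!-- quorum disk contributes 1 vote once qdiskd registers it -->
  <quorumd label="myqdisk" votes="1"/>
</cluster>
```

With this layout, a single booting node only reaches quorum (2 votes) after qdiskd has started and registered the quorum disk's vote with cman.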
-- Linux-cluster mailing list [email protected] https://www.redhat.com/mailman/listinfo/linux-cluster
