As per the message at http://oss.oracle.com/pipermail/ocfs2-users/2009-November/004019.html, whenever there is a split-brain scenario the node with the lowest number survives. I am sold on that and have no argument against it, but when Node0 crashes, Node1 also takes a nose dive. May I know why?

Also, the FAQ says:

<quote>
QUORUM AND FENCING

What is a quorum?

A quorum is a designation given to a group of nodes in a cluster which are still allowed to operate on shared storage. It comes up when there is a failure in the cluster which breaks the nodes up into groups which can communicate in their groups and with the shared storage but not between groups.

How does OCFS2's cluster services define a quorum?

The quorum decision is made by a single node based on the number of other nodes that are considered alive by heartbeating and the number of other nodes that are reachable via the network. A node has quorum when:

it sees an odd number of heartbeating nodes and has network connectivity to more than half of them,

OR, it sees an even number of heartbeating nodes and has network connectivity to at least half of them *and* has connectivity to the heartbeating node with the lowest node number.
</quote>

What is "the node with the lowest number"? Does it have to be Node0, or does it mean connectivity to the lowest surviving node?

I set up a test scenario with 4 nodes, 2 nodes mounting the filesystems and 2 other nodes just participating as network members:

Node0 and Node1 have network connectivity and mount the filesystems.
Node3 and Node4 are alive and on the network.

During my test (taking Node0 down cold turkey), Node1 hung pretty badly. Is this something expected?

thanks,
enrique sanchez.

--
Enrique Sanchez Vela
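To make the quoted quorum rule easier to reason about, here is a minimal sketch in C that simply restates the two conditions from the FAQ text. It is not the actual o2quo quorum code in the kernel; the arrays, the MAX_NODES limit, and the question of whether the local node counts itself among the heartbeating nodes are all illustrative assumptions.

#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES 255   /* illustrative limit, not taken from the FAQ */

/*
 * Restatement of the quoted FAQ rule only (not the real o2quo logic):
 *  - odd number of heartbeating nodes: need network connectivity to
 *    more than half of them;
 *  - even number of heartbeating nodes: need connectivity to at least
 *    half of them AND to the lowest-numbered heartbeating node.
 */
static bool has_quorum(const bool heartbeating[MAX_NODES],
                       const bool connected[MAX_NODES])
{
    int hb = 0, conn = 0, lowest = -1;

    for (int i = 0; i < MAX_NODES; i++) {
        if (!heartbeating[i])
            continue;
        hb++;
        if (lowest < 0)
            lowest = i;            /* lowest heartbeating node number */
        if (connected[i])
            conn++;                /* connectivity counted only toward heartbeating nodes */
    }

    if (hb == 0)
        return false;

    if (hb % 2)                    /* odd group size */
        return conn > hb / 2;

    return conn >= hb / 2 && connected[lowest];   /* even group size */
}

int main(void)
{
    bool hb[MAX_NODES] = { false }, net[MAX_NODES] = { false };

    /* Hypothetical example: nodes 0 and 1 heartbeating (even group),
     * and the evaluating node can still reach both over the network. */
    hb[0] = hb[1] = true;
    net[0] = net[1] = true;
    printf("quorum: %s\n", has_quorum(hb, net) ? "yes" : "no");

    /* Same group, but the link to node 0 (the lowest node number) is
     * gone: under the quoted rule the even-group test now fails. */
    net[0] = false;
    printf("quorum: %s\n", has_quorum(hb, net) ? "yes" : "no");

    return 0;
}

Again, this only restates the quoted conditions; how the real code counts the local node itself, and how it orders heartbeat timeouts against network timeouts, may differ, and that is part of what the question above is asking.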