Re: [Linux-cluster] Quorum disk
I had the same issue and solved it: just increase the quorum disk check interval. Two seconds is too short to inform cman about the quorum status. I had to increase it to 7 seconds, but remember that this also influences the cman timeout, which must be verified.

Best regards,
Tomek

On Feb 17, 2009, at 9:12 PM, Hunt, Gary wrote:

> Having an issue with my 2-node cluster. I think it is related to the quorum disk.
> It is a 2-node RHEL 5.3 cluster with a quorum disk, with virtual servers running
> on each node. Whenever node1 takes over the master role in qdisk, it loses quorum
> and restarts all the virtual servers. It does regain quorum a few seconds later.
> If node1 is already the master and I fail node2, things work as expected. Node2
> doesn't seem to have a problem taking over the master role; it is only when node1
> needs to take over the master role that the cluster loses quorum.
>
> Here is my cluster.conf. Any suggestions on what may be causing this?
>
> <?xml version="1.0"?>
> <cluster alias="xencluster" config_version="13" name="xencluster">
>   <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
>   <clusternodes>
>     <clusternode name="ricci2b.gallup.com" nodeid="2" votes="1">
>       <fence>
>         <method name="1">
>           <device name="ricci2b"/>
>         </method>
>       </fence>
>     </clusternode>
>     <clusternode name="ricci1b.gallup.com" nodeid="1" votes="1">
>       <fence>
>         <method name="1">
>           <device name="ricci1b"/>
>         </method>
>       </fence>
>     </clusternode>
>   </clusternodes>
>   <cman expected_votes="3" two_node="0"/>
>   <fencedevices>
>     <fencedevice agent="fence_ipmilan" ipaddr="172.30.3.110" login="" name="ricci1b" passwd="xx"/>
>     <fencedevice agent="fence_ipmilan" ipaddr="172.30.3.140" login="" name="ricci2b" passwd="xx"/>
>   </fencedevices>
>   <rm>
>     <failoverdomains/>
>     <resources/>
>     <vm autostart="1" exclusive="0" name="rhel_full" path="/xenconfigs" recovery="restart"/>
>     <vm autostart="1" exclusive="0" name="rhel_para" path="/xenconfigs" recovery="restart"/>
>   </rm>
>   <quorumd interval="2" label="quorum_disk_from_ricci1" min_score="1" tko="3" votes="1"/>
>   <totem consensus="4800" join="60" token="12000" token_retransmits_before_loss_const="20"/>
> </cluster>
>
> Thanks
> Gary

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
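For reference, Tomek's suggestion amounts to changing the quorumd and totem lines in Gary's cluster.conf. The sketch below makes an assumption about the token value: a common rule of thumb for RHEL 5 clusters is that the CMAN membership timeout (the totem token) should comfortably exceed the qdiskd timeout (interval × tko), so interval="7" with tko="3" (a 21-second qdisk timeout) would suggest raising the 12000 ms token as well. Verify the exact values against your own environment.

```xml
<!-- Sketch only: interval raised from 2 to 7 as Tomek suggests.
     token="45000" is an assumed value, chosen so the totem token
     timeout (45 s) is more than twice the qdisk timeout
     (7 s x 3 TKOs = 21 s); it is not from the original thread. -->
<quorumd interval="7" label="quorum_disk_from_ricci1" min_score="1" tko="3" votes="1"/>
<totem consensus="4800" join="60" token="45000" token_retransmits_before_loss_const="20"/>
```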
Re: [Linux-cluster] SCSI Reservations Red Hat Cluster Suite
Hello,

Ryan O'Hara wrote:
> 4 - Limitations
> ...
> - Multipath devices are not currently supported.

What is the reason? It is strongly recommended to use at least two HBAs in a SAN network, which becomes useless when using SCSI reservations.

Regards,
Tomasz Sucharzewski

On Fri, 28 Mar 2008 09:20:53 -0500 Ryan O'Hara [EMAIL PROTECTED] wrote:

> 4 - Limitations
>
> In addition to these requirements, fencing by way of SCSI persistent
> reservations also has some limitations.
>
> - Multipath devices are not currently supported.

--
Tomasz Sucharzewski
[EMAIL PROTECTED]

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
Re: [Linux-cluster] Quorum in xen guests cluster
Hi Agnieszka,

Please attach the cluster.conf from domU. It should be a matter of properly defined voting (expected_votes).

Regards,
Tomek Sucharzewski

Agnieszka Kukałowicz wrote:

> Hi,
>
> I have a question about quorum in a xen guests cluster. I have a two-node
> cluster running virtual services, cluster1, with two nodes, d1 and d2. On
> each node I've configured three xen guests as virtual services (vm1, vm2,
> vm3 on d1 and vm4, vm5, vm6 on d2). All of the guests are part of another
> cluster, cluster2.
>
> My problem is that when one of the physical machines is down (for example
> d2), cluster2 says that it is not quorate because only 3 of the 6 xen
> guests are running. But for me that is fine, because I still have the vm1,
> vm2, vm3 virtual services running on node d1, and I'd like cluster2 to
> have quorum even if only 3 of the 6 guests are available.
>
> Cheers,
> Agnieszka Kukałowicz

--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
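To illustrate the voting arithmetic behind Tomek's hint: with six one-vote nodes, cman requires floor(6/2)+1 = 4 votes for quorum, so the three guests surviving on one host can never be quorate on their own. One approach (an assumption here, not something the thread spells out) is to add a quorum disk to cluster2 whose votes cover the deficit: with votes="3" the total becomes 9, quorum becomes 5, and three guests plus the disk (3+3 = 6) stay quorate.

```xml
<!-- Hypothetical cluster2 (domU) fragment; the qdisk label and
     votes="3" are illustrative values, not from the original thread.
     Six 1-vote nodes + a 3-vote qdisk = 9 expected votes, quorum = 5,
     reachable by 3 surviving guests plus the disk. -->
<cman expected_votes="9"/>
<quorumd interval="2" label="cluster2_qdisk" min_score="1" tko="10" votes="3"/>
```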