On 2012-07-18 15:57, Ulrich Leodolter wrote:
> hi,
>
> after adding a second ring to corosync.conf
> the problem seems to be gone.
>
> after killing corosync the node is fenced by
> the other node. after reboot the cluster is
> fully operational.
>
> is this essential to have at least 2 rings?
>
> maybe there is a network timing problem (but can't see
> error messages)
> the interface on ring 0 (192.168.20.171) is a bridge.
> the interface on ring 1 (10.10.10.171) is normal ethernet interface.
I've seen such things with bonding devices under Debian 6.0.
Try something like this in /etc/network/interfaces:

  auto bond0
  iface bond0 inet static
      ...
      bond-mode active-backup
      bond-miimon 100
      bridge_fd 0
      bridge_maxwait 0
Another workaround is a "sleep 10" or similar at the beginning
of the pacemaker init script to give bond0 time to come up.
We always go with 2 rings, even when using NIC bonding.
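
For reference, a minimal sketch of the totem section for a two-ring
setup in corosync.conf, assuming the networks from your mail
(192.168.20.x on ring 0, 10.10.10.x on ring 1); the multicast
addresses and ports below are just placeholders, adjust to taste:

  totem {
      version: 2
      # passive redundant ring protocol; required for a second ring
      rrp_mode: passive
      interface {
          ringnumber: 0
          bindnetaddr: 192.168.20.0
          mcastaddr: 226.94.1.1
          mcastport: 5405
      }
      interface {
          ringnumber: 1
          bindnetaddr: 10.10.10.0
          mcastaddr: 226.94.1.2
          mcastport: 5407
      }
  }

Note that each interface block needs a distinct ringnumber, and
rrp_mode must be set to something other than "none" (passive or
active) for the second ring to be used.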
Cheers,
Raoul
--
____________________________________________________________________
DI (FH) Raoul Bhatia M.Sc. email. [email protected]
Technischer Leiter
IPAX - Aloy Bhatia Hava OG web. http://www.ipax.at
Barawitzkagasse 10/2/2/11 email. [email protected]
1190 Wien tel. +43 1 3670030
FN 277995t HG Wien fax. +43 1 3670030 15
____________________________________________________________________
_______________________________________________
Pacemaker mailing list: [email protected]
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org