On 11/7/2014 08:44, Daniel Dehennin wrote:
Jan Friesse <[email protected]> writes:

If you want to stay with multicast, there are plenty of problems you may
run into.

- Default ttl for multicast is 1, so if you are using routing, you
should increase it (the totem.interface.ttl option in corosync.conf; a
config sketch follows below). For debugging, you can try omping, because
omping has the ttl set to 64.
The routing is local to the VM running corosync, to make sure packets
are output via eth1.
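
Regarding the ttl option mentioned above, a minimal corosync.conf sketch
(bindnetaddr is a placeholder here; the mcastaddr is the group that
appears later in this thread):

totem {
    version: 2
    interface {
        ringnumber: 0
        # placeholder: use the network of the NIC corosync should bind to (eth1)
        bindnetaddr: 10.100.0.0
        # 239.0.10.74 is the multicast group cited further down
        mcastaddr: 239.0.10.74
        mcastport: 5405
        # default is 1; raise it if the packets have to cross a router
        ttl: 64
    }
}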

- Most routers/clever switches block multicast or don't allow multicast
routing. I don't know the exact vSwitch behavior.
Multicast packets work with Open vSwitch: the VM with only one network
card is part of the corosync ring.
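
To verify multicast connectivity between all members, omping can be
started on every node at roughly the same time. A sketch using the
hostnames from the quorumtool output below:

omping -c 10 nebula1.eole.lan nebula2.eole.lan nebula3.eole.lan quorum.eole.lan one-frontend.eole.lan

Each node should report both unicast and multicast responses from every
peer; if the multicast lines are missing for one host, the network path
to that host is the problem, not corosync.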

I do not understand why, on a VM with two network cards where corosync
uses the second card (eth1) and the default route uses the first one
(eth0), corosync cannot join the ring.
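
One quick check: ask the kernel which interface it would pick for the
multicast group (239.0.10.74 per the fix quoted below; substitute your
mcastaddr):

ip route get 239.0.10.74

If that shows dev eth0 instead of eth1, the traffic is following the
default route, which matches the symptom described here.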

On one working host:

     root@nebula3:~# corosync-quorumtool
     Quorum information
     ------------------
     Date:             Fri Nov  7 14:42:43 2014
     Quorum provider:  corosync_votequorum
     Nodes:            4
     Node ID:          1084811080
     Ring ID:          20508
     Quorate:          Yes

     Votequorum information
     ----------------------
     Expected votes:   4
     Highest expected: 4
     Total votes:      4
     Quorum:           3
     Flags:            Quorate WaitForAll LastManStanding

     Membership information
     ----------------------
         Nodeid      Votes Name
         1084811078          1 nebula1.eole.lan
         1084811079          1 nebula2.eole.lan
         1084811080          1 nebula3.eole.lan (local)
         1084811118          1 quorum.eole.lan


On the failing node:

     root@one-frontend:~# corosync-quorumtool
     Quorum information
     ------------------
     Date:             Fri Nov  7 14:42:37 2014
     Quorum provider:  corosync_votequorum
     Nodes:            1
     Node ID:          1084811119
     Ring ID:          20264
     Quorate:          No

     Votequorum information
     ----------------------
     Expected votes:   3
     Highest expected: 3
     Total votes:      1
     Quorum:           2 Activity blocked
     Flags:            WaitForAll LastManStanding

     Membership information
     ----------------------
         Nodeid      Votes Name
         1084811119          1 one-frontend.eole.lan (local)

The “Ring ID” is not the same.
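
A differing Ring ID means the node formed its own single-member ring
instead of merging with the existing membership. The ring status can
also be checked per node with:

corosync-cfgtool -s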

Regards.


I'm not familiar with your VM setup, but I've had a similar issue on a machine where I wanted to send a multicast stream out a second NIC.

The default route on the first NIC caused it to be used for egress of the multicast stream.

To fix it, I had to add a route covering the multicast destination (239.0.10.74) on the desired NIC.

224.0.0.0/4 covers ALL multicast; you may want to be more specific (a host-route sketch follows below), or you may break anything else using multicast on NIC 1.

Something like:

ip route add 224.0.0.0/4 dev eth3

will cover all multicast and make it go out eth3.
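
If you only want the corosync group on the second NIC (the "more
specific" option mentioned above), a host route for just that group
should work as well; an untested sketch using the group/interface from
this thread:

ip route add 239.0.10.74/32 dev eth1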

[root@localhost ~]# ip ad ls dev eth3 ; ip r ls dev eth3
2: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:02:a5:4f:5a:d1 brd ff:ff:ff:ff:ff:ff
    inet 10.100.0.16/24 brd 10.100.0.255 scope global eth3
    inet6 fe80::202:a5ff:fe4f:5ad1/64 scope link
       valid_lft forever preferred_lft forever
10.100.0.0/24 proto kernel scope link src 10.100.0.16
224.0.0.0/4 scope link
[root@localhost ~]#
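
With the route installed, you can confirm traffic actually leaves the
right interface, for example:

tcpdump -ni eth3 net 224.0.0.0/4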