Hi!
I have a problem with corosync as of SLES11 SP2 (current updates):
Given this routing table:
# netstat -rn
Kernel IP routing table
Destination   Gateway       Genmask           Flags   MSS Window  irtt Iface
0.0.0.0       172.20.3.62   0.0.0.0           UG        0 0          0 eth0
127.0.0.0     0.0.0.0       255.0.0.0         U         0 0          0 lo
172.20.3.0    0.0.0.0       255.255.255.192   U         0 0          0 eth0
172.20.76.0   0.0.0.0       255.255.255.0     U         0 0          0 eth2
172.20.77.0   0.0.0.0       255.255.255.0     U         0 0          0 eth3
192.168.0.0   0.0.0.0       255.255.255.0     U         0 0          0 eth1
and this ring configuration:
interface {
        bindnetaddr: 172.20.0.0
        mcastaddr: 239.192.3.9
        mcastport: 5405
        ringnumber: 0
}
interface {
        mcastaddr: 239.192.3.109
        mcastport: 5405
        bindnetaddr: 192.168.0.0
        ringnumber: 1
}
I get:
# corosync-cfgtool -s
Printing ring status.
Local node ID 16777343
RING ID 0
id = 127.0.0.1
status = ring 0 active with no faults
RING ID 1
id = 192.168.0.61
status = ring 1 active with no faults
So why isn't corosync using eth0 as the interface for ring 0? In this
configuration, nodes cannot join the cluster. I couldn't find any error or
warning during initial startup regarding the choice of the network
address/interface. I guess there's a bug in it.
corosync-1.4.1-0.13.1
pacemaker-1.1.6-1.29.1
Regards,
Ulrich
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems