Hi
I have had similar problems.
I have two NICs:
eth0: direct connection, no switch between the two nodes (10.0.0.x),
hostnames: node1-direct, node2-direct
eth1: through a switch, 192.168.x.x (hostnames: node1, node2)
Using the node[12]-direct hostnames in a simple cluster.conf, I ran into the
same problem as ITec.
After replacing the hostnames with node[12], it works. Adding an altname
to the corresponding clusternode also works:
<altname name="node1-direct" port="5406" mcast="239.192.122.46" />
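For context, here is a minimal cluster.conf sketch of where that altname line goes; the cluster name, node IDs, and fencing setup are placeholders, not from my actual config:
<cluster name="testcluster" config_version="1">
  <clusternodes>
    <!-- primary name resolves via the switched network (eth1) -->
    <clusternode name="node1" nodeid="1">
      <!-- altname adds the direct link (eth0) as a second ring -->
      <altname name="node1-direct" port="5406" mcast="239.192.122.46" />
    </clusternode>
    <clusternode name="node2" nodeid="2">
      <altname name="node2-direct" port="5406" mcast="239.192.122.46" />
    </clusternode>
  </clusternodes>
</cluster>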
Running corosync-cfgtool -s shows the two rings:
Printing ring status.
Local node ID 2
RING ID 0
id = 192.168.25.52
status = ring 0 active with no faults
RING ID 1
id = 10.0.0.52
status = ring 1 active with no faults
Maybe this helps.
KLor
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/956383
Title:
ccs_config_validate exits with 191 and broken link for cluster.rng
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/redhat-cluster/+bug/956383/+subscriptions