On 29.12.2014 18:30, Ondrej Zajicek wrote:
On Sat, Dec 27, 2014 at 03:50:07AM +0200, Andrew wrote:
Hi all.

I tried to run multiple OSPF instances in BIRD for VRF; the other clients
are Quagga machines that listen on both VLANs and see each other.

BIRD sends packets from xx.xx.132.7 (which has mask /32 and is set on lo) to
vlan3 and sees the neighbors, but the neighbors don't see BIRD (because it
sends packets from the wrong source IP).

What's wrong with BIRD/the config?
Config seems OK (although having a 'networks' block in OSPF areas is
useless for OSPF protocols with only one area). Even BIRD state seems OK
(show ospf interface returns proper IP address entry).
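For a single-area setup the interface selection alone is enough; a minimal sketch (the interface name and cost here are illustrative, not taken from the poster's config):

    protocol ospf {
            area 0 {
                    interface "vlan3" {
                            cost 10;
                    };
            };
    }

The 'networks' block inside an area defines area IP ranges used when summarizing routes between areas, so it only has an effect on area border routers, i.e. with more than one area.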

What BIRD version are you using, on which OS?

Is it the Linux/uclibc you discussed before?
Yes, it is (it's the LEAF distro, 5.1.x branch). After forcing the primary address in the config, it starts to work.
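For the record, the knob in question is the 'primary' option of the device protocol, which overrides BIRD's automatic choice of an interface's primary address; a sketch (the interface pattern and prefix below are an assumption for illustration, not the poster's actual values):

    protocol device {
            scan time 10;
            # Prefer an address from this prefix as primary on matching interfaces
            primary "lo" 10.255.0.0/16;
    }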

What is returned by 'birdc show interfaces', 'ip addr list' and 'ip route list'?

'birdc show ifaces':
bird> show interfaces
lo up (index=1)
    MultiAccess AdminUp LinkUp Loopback Ignored MTU=65536
    10.255.0.12/32 (Primary, scope site)
    x.x.132.7/32 (Unselected, scope univ)
    127.0.0.1/8 (Unselected, scope host)
    x.x.x.x/32 (Unselected, scope univ)
dummy0 DOWN (index=2)
    MultiAccess Broadcast Multicast AdminDown LinkDown MTU=1500
eth0 DOWN (index=3)
    MultiAccess Broadcast Multicast AdminUp LinkUp MTU=1500
eth1 DOWN (index=4)
    MultiAccess Broadcast Multicast AdminUp LinkUp MTU=1500
bond0 DOWN (index=5)
    MultiAccess Broadcast Multicast AdminUp LinkUp MTU=1500
bond1 DOWN (index=6)
    MultiAccess Broadcast Multicast AdminUp LinkUp MTU=1500
ifb0 DOWN (index=7)
    MultiAccess Broadcast Multicast AdminDown LinkDown MTU=1500
ifb1 DOWN (index=8)
    MultiAccess Broadcast Multicast AdminDown LinkDown MTU=1500
eth2 DOWN (index=9)
    MultiAccess Broadcast Multicast AdminUp LinkUp MTU=1500
eth3 DOWN (index=10)
    MultiAccess Broadcast Multicast AdminUp LinkUp MTU=1500
vlan2 up (index=11)
    MultiAccess Broadcast Multicast AdminUp LinkUp MTU=1500
    192.168.255.190/27 (Primary, scope site)
vlanxxxx up (index=12)
    MultiAccess Broadcast Multicast AdminUp LinkUp MTU=1500
    x.x.x.x/30 (Primary, opposite x.x.x.x, scope univ)
vlan2662 up (index=13)
    MultiAccess Broadcast Multicast AdminUp LinkUp MTU=1500
    x.x.x.x/30 (Primary, opposite x.x.x.x, scope univ)
vlan3 up (index=14)
    MultiAccess Broadcast Multicast AdminUp LinkUp MTU=1500
    10.255.192.202/24 (Primary, scope site)


# ip addr list:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet x.x.132.7/32 scope global lo:nat
       valid_lft forever preferred_lft forever
    inet 10.255.0.12/32 scope global lo:ns2
       valid_lft forever preferred_lft forever
    inet x.x.x.x/32 scope global lo:pub
       valid_lft forever preferred_lft forever
2: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 76:b1:a4:9d:b1:22 brd ff:ff:ff:ff:ff:ff
3: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:1b:21:51:2f:2c brd ff:ff:ff:ff:ff:ff
4: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 00:1b:21:51:2f:2c brd ff:ff:ff:ff:ff:ff
5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 00:1b:21:51:2f:2c brd ff:ff:ff:ff:ff:ff
6: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 00:16:31:fb:8d:a3 brd ff:ff:ff:ff:ff:ff
7: ifb0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 32
    link/ether 5a:f6:3a:e5:4c:dc brd ff:ff:ff:ff:ff:ff
8: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default qlen 32
    link/ether fe:05:50:ad:d1:06 brd ff:ff:ff:ff:ff:ff
9: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 00:16:31:fb:8d:a3 brd ff:ff:ff:ff:ff:ff
10: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether 00:16:31:fb:8d:a3 brd ff:ff:ff:ff:ff:ff
11: vlan2@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 00:1b:21:51:2f:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.255.190/27 brd 192.168.255.191 scope global vlan2
       valid_lft forever preferred_lft forever
12: vlanxxxx@bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 00:16:31:fb:8d:a3 brd ff:ff:ff:ff:ff:ff
    inet x.x.x.x/30 brd 88.81.242.131 scope global vlanxxxx
       valid_lft forever preferred_lft forever
13: vlanxxxx@bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 00:16:31:fb:8d:a3 brd ff:ff:ff:ff:ff:ff
    inet x.x.x.x/30 brd 80.91.186.203 scope global vlanxxxx
       valid_lft forever preferred_lft forever
14: vlan3@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 00:1b:21:51:2f:2d brd ff:ff:ff:ff:ff:ff
    inet 10.255.192.202/24 brd 10.255.192.255 scope global vlan3
       valid_lft forever preferred_lft forever

#  ip r|grep -v "bird\|nexthop"
10.255.192.0/24 dev vlan3  proto kernel  scope link  src 10.255.192.202
x.x.x.x/30 dev vlanxxxx  proto kernel  scope link  src x.x.x.x
x.x.x.x/30 dev vlanxxxx  proto kernel  scope link  src x.x.x.x
192.168.255.160/27 dev vlan2  proto kernel  scope link  src 192.168.255.190

The vlanX devices are 802.1q devices on ethX?
Almost - they are 802.1q devices on top of bonds of the eth interfaces.

Does this problem happen even if there is just ospf_world in the configuration?
This is a production server, so experiments aren't a good idea. But I can try to run the same software in a virtual environment (without neighbours, of course) if you can't reproduce it on a test machine.

Last time I saw such a problem, it was caused by NAT (netfilter/iptables), which
was active even after the appropriate iptables rules were removed (as it already
had entries in the ip_conntrack kernel table). Could you exclude such a possibility?

Yes, I checked this - the ICMP packets have the right source address.
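For completeness, one way to inspect and clear stale conntrack state (the first two commands assume the conntrack-tools package is installed; older kernels expose the table in /proc instead):

    # conntrack -L                  (list current connection-tracking entries)
    # conntrack -F                  (flush the whole table)
    # cat /proc/net/ip_conntrack    (older kernels without conntrack-tools)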
