Hello,

My setup: CS 4.1 (snapshot 2013/04/11)
Advanced networking, 3 physical networks (public, guest, mgmt/storage)

The guest network interfaces are connected to an L2 switch, with a trunk enabled for tagged VLANs 200-300 (and VLAN 199 for untagged packets).
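
A quick check I can do from both hosts to confirm the trunk really carries a guest VLAN between the two nodes (the VLAN id 300 and the 10.99.99.x addresses are only placeholders for the test, 10.99.99.1 on Node 1 and 10.99.99.2 on Node 2; bond2 is the guest NIC on both nodes):

# ip link add link bond2 name bond2.300 type vlan id 300
# ip link set bond2.300 up
# ip addr add 10.99.99.1/24 dev bond2.300
# ping -c 5 10.99.99.2
# ip link del bond2.300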

Works fine:

Node 1: VM + Virtual Router
Node 2: nothing

I can ping the VM from the VR (on the private network),
I can ping the VR from the VM (on the private network),
I can ping the VM on its public IP.


Works, but with >80% ping loss:

Node 1: Virtual Router
Node 2: VM

I can ping the VM from the VR (on the private network) with a lot of lost packets,
I can ping the VR from the VM (on the private network) with a lot of lost packets,
I can ping the VM on its public IP with a lot of lost packets.

On the VM, when I run "arp -n" repeatedly while pinging the VR, I see that the hardware (MAC) address of the VR in the ARP table changes!
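
To see which device actually answers the ARP requests, I can capture ARP traffic on the guest bridge of Node 1 (brbond2-234, the bridge carrying this guest VLAN) while the VM pings the VR:

# tcpdump -e -n -i brbond2-234 arp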

MAC address 1 matches p5p1 / p5p2 / bond2 / guestbr2 (on Node 1)
(note: I use bonding, but p5p2 currently has no cable connected)

MAC address 2 matches vnet4 (on Node 1)
(note: this second address is the MAC address of the VR's eth0)
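
To see on which bridge port each of these MAC addresses is learned on Node 1, I can dump the forwarding table of the guest bridge:

# brctl showmacs brbond2-234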


Can you help me? Is this a bug or a misconfiguration?
Do I need to add some ARP-related kernel parameters on the hosts?
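
(I was thinking of the usual ARP sysctls below, but I am not sure they apply here since brbond2-234 itself carries no IPv4 address; please correct me if these are not the right ones.)

# sysctl -w net.ipv4.conf.all.arp_ignore=1
# sysctl -w net.ipv4.conf.all.arp_announce=2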

Thanks.


Milamber



======

# brctl show
bridge name     bridge id           STP enabled   interfaces
brbond2-234     8000.6805ca1182b2   no            bond2.234
                                                  vnet4
cloud0          8000.fe00a9fe0333   no            vnet0
                                                  vnet5
cloudbr0        8000.d4ae52e84796   no            bond0
                                                  vnet2
                                                  vnet6
cloudbr1        8000.6805ca118352   no            bond1
                                                  vnet1
                                                  vnet3
guestbr2        8000.6805ca1182b2   no            bond2
virbr0          8000.5254008c282f   yes           virbr0-nic


=====

# ip a | grep -B1 82
8: p5p1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2 state UP qlen 1000
    link/ether 68:05:ca:11:82:b2 brd ff:ff:ff:ff:ff:ff
9: p5p2: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond2 state DOWN qlen 1000
    link/ether 68:05:ca:11:82:b3 brd ff:ff:ff:ff:ff:ff
--
18: bond2: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 68:05:ca:11:82:b2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6a05:caff:fe11:82b2/64 scope link
--
19: guestbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 68:05:ca:11:82:b2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.7.13/24 brd 172.17.7.255 scope global guestbr2
    inet6 fe80::6a05:caff:fe11:82b2/64 scope link
--
28: bond2.234@bond2: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 68:05:ca:11:82:b2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6a05:caff:fe11:82b2/64 scope link
--
29: brbond2-234: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 68:05:ca:11:82:b2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::6a05:caff:fe11:82b2/64 scope link

====
# ip a | grep -B1 02
30: vnet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:00:56:47:00:02 brd ff:ff:ff:ff:ff:ff


====

