Hi Dan,
Some info about my network setup:

- My bond is used only for VM networking. ovirtmgmt has a dedicated ethernet card.
- I haven't set any ethtool opts.
- The NICs in the bond have these specs:
04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
        Subsystem: ASUSTeK Computer Inc. Motherboard
        Flags: bus master, fast devsel, latency 0, IRQ 16
        Memory at df200000 (32-bit, non-prefetchable) [size=128K]
        I/O ports at e000 [size=32]
        Memory at df220000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [c8] Power Management version 2
        Capabilities: [d0] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [e0] Express Endpoint, MSI 00
        Capabilities: [a0] MSI-X: Enable+ Count=5 Masked-
        Capabilities: [100] Advanced Error Reporting
        Kernel driver in use: e1000e
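(Regarding ethtool opts: had any been set, they would show up as an ETHTOOL_OPTS line in the relevant ifcfg file, something like the hypothetical fragment below. My files have no such line.)

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp4s0 -- hypothetical example only;
# my actual ifcfg files contain no ETHTOOL_OPTS line
ETHTOOL_OPTS="-K enp4s0 lro off"
```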

[root@ovirt01 ~]# ifconfig
DMZ: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        txqueuelen 0  (Ethernet)
        RX packets 43546  bytes 2758816 (2.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

LAN_HAW: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        txqueuelen 0  (Ethernet)
        RX packets 2090262  bytes 201078292 (191.7 MiB)
        RX errors 0  dropped 86  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        txqueuelen 0  (Ethernet)
        RX packets 2408059  bytes 456371629 (435.2 MiB)
        RX errors 0  dropped 185  overruns 0  frame 0
        TX packets 118966  bytes 14862549 (14.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

bond0.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        txqueuelen 0  (Ethernet)
        RX packets 2160985  bytes 210157656 (200.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

bond0.3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        txqueuelen 0  (Ethernet)
        RX packets 151195  bytes 185253584 (176.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 118663  bytes 13857950 (13.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp4s0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        txqueuelen 1000  (Ethernet)
        RX packets 708141  bytes 95034564 (90.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16714  bytes 5193108 (4.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xdf200000-df220000

enp5s0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        txqueuelen 1000  (Ethernet)
        RX packets 1699934  bytes 361339105 (344.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 102252  bytes 9669441 (9.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 17  memory 0xdf100000-df120000

enp6s1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        txqueuelen 1000  (Ethernet)
        RX packets 2525232  bytes 362345893 (345.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 388452  bytes 208145492 (198.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 0  (Local Loopback)
        RX packets 116465661  bytes 1515059255942 (1.3 TiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 116465661  bytes 1515059255942 (1.3 TiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.50  netmask 255.255.255.0  broadcast 192.168.1.255
        txqueuelen 0  (Ethernet)
        RX packets 3784298  bytes 555536509 (529.8 MiB)
        RX errors 0  dropped 86  overruns 0  frame 0
        TX packets 1737669  bytes 1401650369 (1.3 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        txqueuelen 500  (Ethernet)
        RX packets 558574  bytes 107521742 (102.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1316892  bytes 487764500 (465.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        txqueuelen 500  (Ethernet)
        RX packets 42282  bytes 7373007 (7.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 40498  bytes 17598215 (16.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        txqueuelen 500  (Ethernet)
        RX packets 79388  bytes 16807917 (16.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 164596  bytes 183858757 (175.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



[root@ovirt01 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (xor)
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp4s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Slave queue ID: 0

Slave Interface: enp5s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Slave queue ID: 0

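For what it's worth, with balance-xor and the layer2 transmit hash policy shown above, my understanding is that the kernel selects the slave roughly as (source MAC XOR destination MAC) mod slave count, so all traffic between a given pair of MACs always rides the same slave. A minimal sketch of that selection (the MAC octets below are made up for illustration):

```shell
# Layer2 transmit hash sketch: slave = (src ^ dst) % n_slaves
# (the kernel XORs the last octets of the MAC addresses)
src=0x1e      # hypothetical last octet of the source MAC
dst=0x2c      # hypothetical last octet of the destination MAC
slaves=2      # enp4s0 and enp5s0
echo "slave index: $(( (src ^ dst) % slaves ))"
```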

On 2015-12-30 10:44, Dan Kenigsberg wrote:

Hi Jon and Stefano,

We've been testing bond mode 4 with (an earlier)
kernel-3.10.0-327.el7.x86_64 and experienced no such behaviour.

However, to better identify the suspected kernel bug, could you provide
more information regarding your network connectivity?

What is the make of your NICs? Which driver do you use?

Do you set special ethtool opts? (LRO with a bridge was broken in the 7.2.0
kernel, if I am not mistaken.)

You have the ovirtmgmt bridge on top of your bond, right?

Can you share your ifcfg*?

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
