Hi,

I discovered an issue with my LACP configuration and I am having trouble 
figuring it out.  I am running two Dell PowerEdge R610s with four Broadcom 
NICs each.  I am trying to bond the NICs together, however only one of them 
goes active no matter how much traffic I push across the links.

I have spoken to my network admin, who says that the switch ports are 
configured for LACP, but the switch can only see one active link.

Thanks
Bryan

-- /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: f4:8e:38:c5:fc:a8
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 1
        Actor Key: 9
        Partner Key: 20
        Partner Mac Address: a4:6c:2a:e5:30:00

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:8e:38:c5:fc:a8
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: f4:8e:38:c5:fc:a8
    port key: 9
    port priority: 255
    port number: 1
    port state: 61
details partner lacp pdu:
    system priority: 8192
    system mac address: a4:6c:2a:e5:30:00
    oper key: 20
    port priority: 32768
    port number: 25
    port state: 61

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:8e:38:c5:fc:a9
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: f4:8e:38:c5:fc:a8
    port key: 9
    port priority: 255
    port number: 2
    port state: 5
details partner lacp pdu:
    system priority: 32768
    system mac address: a4:6c:2a:e5:30:00
    oper key: 20
    port priority: 32768
    port number: 73
    port state: 5

Slave Interface: em3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:8e:38:c5:fc:aa
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: f4:8e:38:c5:fc:a8
    port key: 9
    port priority: 255
    port number: 3
    port state: 5
details partner lacp pdu:
    system priority: 32768
    system mac address: a4:6c:2a:e5:30:00
    oper key: 20
    port priority: 32768
    port number: 26
    port state: 5

Slave Interface: em4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: f4:8e:38:c5:fc:ab
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: f4:8e:38:c5:fc:a8
    port key: 9
    port priority: 255
    port number: 4
    port state: 5
details partner lacp pdu:
    system priority: 32768
    system mac address: a4:6c:2a:e5:30:00
    oper key: 20
    port priority: 32768
    port number: 74
    port state: 5

-- /var/log/messages
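One way to read the status above: the "port state" fields are the 802.3ad actor/partner state bitmaps. A short sketch decoding them (bit names per IEEE 802.1AX; the values 61 and 5 are taken from the output above) shows that em1 is the only port that reached synchronization/collecting/distributing, while em2-em4 never got past "aggregatable":

```python
# Decode the "port state" byte shown in /proc/net/bonding/<bond>.
# Bit assignments follow the IEEE 802.1AX actor/partner state field.
LACP_STATE_BITS = [
    (0x01, "activity"),         # running LACP in active mode
    (0x02, "timeout"),          # short (fast) timeout requested
    (0x04, "aggregation"),      # link is aggregatable
    (0x08, "synchronization"),  # in sync with the partner's aggregator
    (0x10, "collecting"),       # allowed to receive traffic
    (0x20, "distributing"),     # allowed to transmit traffic
    (0x40, "defaulted"),        # using defaulted info, no LACPDUs from partner
    (0x80, "expired"),          # partner information has expired
]

def decode_port_state(state):
    """Return the names of the LACP state bits set in `state`."""
    return [name for bit, name in LACP_STATE_BITS if state & bit]

print(decode_port_state(61))  # em1
print(decode_port_state(5))   # em2-em4
```

So em1 (state 61) is the only fully operational aggregator member; the others (state 5) advertise activity and aggregation but were never synchronized into the active aggregator, which matches the churned state and the separate Aggregator ID 2.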
Apr 20 10:51:04 vm-host-colo-2 systemd: Stopping LSB: Bring up/down 
networking...
Apr 20 10:51:04 vm-host-colo-2 kernel: DMZ: port 1(bond0.10) entered disabled 
state
Apr 20 10:51:04 vm-host-colo-2 network: Shutting down interface DMZ:  [  OK  ]
Apr 20 10:51:04 vm-host-colo-2 kernel: Internal-Dev: port 1(bond0.30) entered 
disabled state
Apr 20 10:51:04 vm-host-colo-2 network: Shutting down interface Internal-Dev:  
[  OK  ]
Apr 20 10:51:05 vm-host-colo-2 kernel: Lab: port 1(bond0.40) entered disabled 
state
Apr 20 10:51:05 vm-host-colo-2 network: Shutting down interface Lab:  [  OK  ]
Apr 20 10:51:05 vm-host-colo-2 kernel: Server-Net: port 1(bond0.20) entered 
disabled state
Apr 20 10:51:05 vm-host-colo-2 network: Shutting down interface Server-Net:  [  
OK  ]
Apr 20 10:51:05 vm-host-colo-2 kernel: Workstation: port 1(bond0.50) entered 
disabled state
Apr 20 10:51:05 vm-host-colo-2 network: Shutting down interface Workstation:  [ 
 OK  ]
Apr 20 10:51:05 vm-host-colo-2 kernel: ovirtmgmt: port 1(bond0) entered 
disabled state
Apr 20 10:51:05 vm-host-colo-2 network: Shutting down interface ovirtmgmt:  [  
OK  ]
Apr 20 10:51:06 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0.10: 
link becomes ready
Apr 20 10:51:06 vm-host-colo-2 kernel: device bond0.10 left promiscuous mode
Apr 20 10:51:06 vm-host-colo-2 kernel: DMZ: port 1(bond0.10) entered disabled 
state
Apr 20 10:51:06 vm-host-colo-2 network: Shutting down interface bond0.10:  [  
OK  ]
Apr 20 10:51:06 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 
bond0.102: link becomes ready
Apr 20 10:51:06 vm-host-colo-2 network: Shutting down interface bond0.102:  [  
OK  ]
Apr 20 10:51:06 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0.20: 
link becomes ready
Apr 20 10:51:06 vm-host-colo-2 kernel: device bond0.20 left promiscuous mode
Apr 20 10:51:06 vm-host-colo-2 kernel: Server-Net: port 1(bond0.20) entered 
disabled state
Apr 20 10:51:06 vm-host-colo-2 network: Shutting down interface bond0.20:  [  
OK  ]
Apr 20 10:51:06 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0.30: 
link becomes ready
Apr 20 10:51:06 vm-host-colo-2 kernel: device bond0.30 left promiscuous mode
Apr 20 10:51:06 vm-host-colo-2 kernel: Internal-Dev: port 1(bond0.30) entered 
disabled state
Apr 20 10:51:06 vm-host-colo-2 network: Shutting down interface bond0.30:  [  
OK  ]
Apr 20 10:51:07 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0.40: 
link becomes ready
Apr 20 10:51:07 vm-host-colo-2 kernel: device bond0.40 left promiscuous mode
Apr 20 10:51:07 vm-host-colo-2 kernel: Lab: port 1(bond0.40) entered disabled 
state
Apr 20 10:51:07 vm-host-colo-2 network: Shutting down interface bond0.40:  [  
OK  ]
Apr 20 10:51:07 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0.50: 
link becomes ready
Apr 20 10:51:07 vm-host-colo-2 kernel: device bond0.50 left promiscuous mode
Apr 20 10:51:07 vm-host-colo-2 kernel: Workstation: port 1(bond0.50) entered 
disabled state
Apr 20 10:51:07 vm-host-colo-2 network: Shutting down interface bond0.50:  [  
OK  ]
Apr 20 10:51:07 vm-host-colo-2 kernel: bond0: Removing slave em1
Apr 20 10:51:07 vm-host-colo-2 kernel: bond0: Removing an active aggregator
Apr 20 10:51:07 vm-host-colo-2 kernel: bond0: Releasing active interface em1
Apr 20 10:51:07 vm-host-colo-2 kernel: bond0: the permanent HWaddr of em1 - 
f4:8e:38:c5:fc:a8 - is still in use by bond0 - set the HWaddr of em1 to a 
different address to avoid conflicts
Apr 20 10:51:07 vm-host-colo-2 kernel: bond0: first active interface up!
Apr 20 10:51:07 vm-host-colo-2 kernel: device em1 left promiscuous mode
Apr 20 10:51:08 vm-host-colo-2 kernel: bond0: Removing slave em2
Apr 20 10:51:08 vm-host-colo-2 kernel: bond0: Releasing backup interface em2
Apr 20 10:51:08 vm-host-colo-2 kernel: bond0: first active interface up!
Apr 20 10:51:08 vm-host-colo-2 kernel: device em2 left promiscuous mode
Apr 20 10:51:08 vm-host-colo-2 kernel: bond0: Removing slave em3
Apr 20 10:51:08 vm-host-colo-2 kernel: bond0: Releasing backup interface em3
Apr 20 10:51:08 vm-host-colo-2 kernel: bond0: first active interface up!
Apr 20 10:51:08 vm-host-colo-2 kernel: device em3 left promiscuous mode
Apr 20 10:51:08 vm-host-colo-2 kernel: bond0: Removing slave em4
Apr 20 10:51:08 vm-host-colo-2 kernel: bond0: Removing an active aggregator
Apr 20 10:51:08 vm-host-colo-2 kernel: bond0: Releasing backup interface em4
Apr 20 10:51:08 vm-host-colo-2 kernel: device em4 left promiscuous mode
Apr 20 10:51:08 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: 
link becomes ready
Apr 20 10:51:08 vm-host-colo-2 kernel: device bond0 left promiscuous mode
Apr 20 10:51:08 vm-host-colo-2 kernel: ovirtmgmt: port 1(bond0) entered 
disabled state
Apr 20 10:51:08 vm-host-colo-2 network: Shutting down interface bond0:  [  OK  ]
Apr 20 10:51:09 vm-host-colo-2 network: Shutting down loopback interface:  [  
OK  ]
Apr 20 10:51:09 vm-host-colo-2 systemd: Starting LSB: Bring up/down 
networking...
Apr 20 10:51:09 vm-host-colo-2 network: Bringing up loopback interface:  [  OK  
]
Apr 20 10:51:09 vm-host-colo-2 kernel: bond0: Setting MII monitoring interval 
to 100
Apr 20 10:51:09 vm-host-colo-2 kernel: bond0: Setting xmit hash policy to 
layer2+3 (2)
Apr 20 10:51:09 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_UP): bond0: link 
is not ready
Apr 20 10:51:09 vm-host-colo-2 kernel: 8021q: adding VLAN 0 to HW filter on 
device bond0
Apr 20 10:51:10 vm-host-colo-2 kernel: bond0: Setting MII monitoring interval 
to 100
Apr 20 10:51:10 vm-host-colo-2 kernel: bond0: Setting xmit hash policy to 
layer2+3 (2)
Apr 20 10:51:10 vm-host-colo-2 kernel: bond0: Adding slave em1
Apr 20 10:51:10 vm-host-colo-2 kernel: bond0: Enslaving em1 as a backup 
interface with a down link
Apr 20 10:51:10 vm-host-colo-2 kernel: bond0: Adding slave em2
Apr 20 10:51:10 vm-host-colo-2 kernel: bond0: Enslaving em2 as a backup 
interface with a down link
Apr 20 10:51:10 vm-host-colo-2 kernel: bond0: Adding slave em3
Apr 20 10:51:10 vm-host-colo-2 kernel: bond0: Enslaving em3 as a backup 
interface with a down link
Apr 20 10:51:10 vm-host-colo-2 kernel: bond0: Adding slave em4
Apr 20 10:51:10 vm-host-colo-2 kernel: bond0: Enslaving em4 as a backup 
interface with a down link
Apr 20 10:51:10 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_UP): bond0: link 
is not ready
Apr 20 10:51:10 vm-host-colo-2 kernel: 8021q: adding VLAN 0 to HW filter on 
device bond0
Apr 20 10:51:10 vm-host-colo-2 kernel: device bond0 entered promiscuous mode
Apr 20 10:51:10 vm-host-colo-2 kernel: device em1 entered promiscuous mode
Apr 20 10:51:10 vm-host-colo-2 kernel: device em2 entered promiscuous mode
Apr 20 10:51:10 vm-host-colo-2 kernel: device em3 entered promiscuous mode
Apr 20 10:51:10 vm-host-colo-2 kernel: device em4 entered promiscuous mode
Apr 20 10:51:10 vm-host-colo-2 network: Bringing up interface bond0:  [  OK  ]
Apr 20 10:51:11 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_UP): bond0.10: 
link is not ready
Apr 20 10:51:11 vm-host-colo-2 kernel: device bond0.10 entered promiscuous mode
Apr 20 10:51:11 vm-host-colo-2 network: Bringing up interface bond0.10:  [  OK  
]
Apr 20 10:51:11 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_UP): bond0.102: 
link is not ready
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:01:00.0 em1: Link is up at 1000 
Mbps, full duplex
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:01:00.0 em1: Flow control is on 
for TX and on for RX
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:01:00.0 em1: EEE is disabled
Apr 20 10:51:13 vm-host-colo-2 kernel: bond0: link status definitely up for 
interface em1, 1000 Mbps full duplex
Apr 20 10:51:13 vm-host-colo-2 kernel: bond0: Warning: No 802.3ad response from 
the link partner for any adapters in the bond
Apr 20 10:51:13 vm-host-colo-2 kernel: bond0: first active interface up!
Apr 20 10:51:13 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: 
link becomes ready
Apr 20 10:51:13 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0.10: 
link becomes ready
Apr 20 10:51:13 vm-host-colo-2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 
bond0.102: link becomes ready
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:01:00.1 em2: Link is up at 1000 
Mbps, full duplex
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:01:00.1 em2: Flow control is on 
for TX and on for RX
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:01:00.1 em2: EEE is disabled
Apr 20 10:51:13 vm-host-colo-2 kernel: bond0: link status definitely up for 
interface em2, 1000 Mbps full duplex
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:02:00.0 em3: Link is up at 1000 
Mbps, full duplex
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:02:00.0 em3: Flow control is on 
for TX and on for RX
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:02:00.0 em3: EEE is disabled
Apr 20 10:51:13 vm-host-colo-2 kernel: bond0: link status definitely up for 
interface em3, 1000 Mbps full duplex
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:02:00.1 em4: Link is up at 1000 
Mbps, full duplex
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:02:00.1 em4: Flow control is on 
for TX and on for RX
Apr 20 10:51:13 vm-host-colo-2 kernel: tg3 0000:02:00.1 em4: EEE is disabled
Apr 20 10:51:13 vm-host-colo-2 kernel: bond0: link status definitely up for 
interface em4, 1000 Mbps full duplex
Apr 20 10:51:15 vm-host-colo-2 network: Bringing up interface bond0.102:  [  OK 
 ]
Apr 20 10:51:15 vm-host-colo-2 kernel: device bond0.20 entered promiscuous mode
Apr 20 10:51:15 vm-host-colo-2 network: Bringing up interface bond0.20:  [  OK  
]
Apr 20 10:51:15 vm-host-colo-2 kernel: device bond0.30 entered promiscuous mode
Apr 20 10:51:15 vm-host-colo-2 network: Bringing up interface bond0.30:  [  OK  
]
Apr 20 10:51:15 vm-host-colo-2 kernel: device bond0.40 entered promiscuous mode
Apr 20 10:51:15 vm-host-colo-2 network: Bringing up interface bond0.40:  [  OK  
]
Apr 20 10:51:15 vm-host-colo-2 kernel: device bond0.50 entered promiscuous mode
Apr 20 10:51:15 vm-host-colo-2 network: Bringing up interface bond0.50:  [  OK  
]
Apr 20 10:51:15 vm-host-colo-2 kernel: DMZ: port 1(bond0.10) entered forwarding 
state
Apr 20 10:51:15 vm-host-colo-2 kernel: DMZ: port 1(bond0.10) entered forwarding 
state
Apr 20 10:51:16 vm-host-colo-2 network: Bringing up interface DMZ:  [  OK  ]
Apr 20 10:51:16 vm-host-colo-2 kernel: Internal-Dev: port 1(bond0.30) entered 
forwarding state
Apr 20 10:51:16 vm-host-colo-2 kernel: Internal-Dev: port 1(bond0.30) entered 
forwarding state
Apr 20 10:51:16 vm-host-colo-2 network: Bringing up interface Internal-Dev:  [  
OK  ]
Apr 20 10:51:16 vm-host-colo-2 kernel: Lab: port 1(bond0.40) entered forwarding 
state
Apr 20 10:51:16 vm-host-colo-2 kernel: Lab: port 1(bond0.40) entered forwarding 
state
Apr 20 10:51:16 vm-host-colo-2 network: Bringing up interface Lab:  [  OK  ]
Apr 20 10:51:16 vm-host-colo-2 kernel: Server-Net: port 1(bond0.20) entered 
forwarding state
Apr 20 10:51:16 vm-host-colo-2 kernel: Server-Net: port 1(bond0.20) entered 
forwarding state
Apr 20 10:51:16 vm-host-colo-2 network: Bringing up interface Server-Net:  [  
OK  ]
Apr 20 10:51:16 vm-host-colo-2 kernel: Workstation: port 1(bond0.50) entered 
forwarding state
Apr 20 10:51:16 vm-host-colo-2 kernel: Workstation: port 1(bond0.50) entered 
forwarding state
Apr 20 10:51:16 vm-host-colo-2 network: Bringing up interface Workstation:  [  
OK  ]
Apr 20 10:51:17 vm-host-colo-2 kernel: ovirtmgmt: port 1(bond0) entered 
forwarding state
Apr 20 10:51:17 vm-host-colo-2 kernel: ovirtmgmt: port 1(bond0) entered 
forwarding state
Apr 20 10:51:21 vm-host-colo-2 network: Bringing up interface ovirtmgmt:  [  OK 
 ]
Apr 20 10:51:21 vm-host-colo-2 systemd: Started LSB: Bring up/down networking.
-- ifcfg-bond0
# Generated by VDSM version 4.19.10.1-1.el7.centos
DEVICE=bond0
BONDING_OPTS='mode=4 miimon=100 xmit_hash_policy=2'
BRIDGE=ovirtmgmt
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no
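
For reference, the numeric options in the VDSM-generated BONDING_OPTS line map to the names shown in the driver status at the top: mode=4 is 802.3ad and xmit_hash_policy=2 is layer2+3. An equivalent, more explicit form (a sketch only; lacp_rate=slow just spells out the default, it is not a change) would be:

```
BONDING_OPTS='mode=802.3ad miimon=100 xmit_hash_policy=layer2+3 lacp_rate=slow'
```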


-- ifcfg-em1
# Generated by VDSM version 4.19.10.1-1.el7.centos
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no


-- ifcfg-em2
# Generated by VDSM version 4.19.10.1-1.el7.centos
DEVICE=em2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no

-- ifcfg-em3
# Generated by VDSM version 4.19.10.1-1.el7.centos
DEVICE=em3
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no

-- ifcfg-em4
# Generated by VDSM version 4.19.10.1-1.el7.centos
DEVICE=em4
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
