Hi all, I need to aggregate TCP traffic between 4 hosts connected to the same Cisco switch over 1 GbE ports (2 ports per server).
My problem is that I can't get "inbound" traffic balanced across both eth0 and eth1.

On all servers I'm using a bonded interface with 2x 1 GbE slaves, and I want a 2 Gb/s aggregated link for both inbound and outbound traffic. I also have a VLAN interface on top of the bond ("vlan-raw-device bond0"). On the switch (a Cisco 2960) I have a port-channel configured in active mode. The Cisco configuration looks like this:

vlan 20

interface GigabitEthernet0/19
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active

interface GigabitEthernet0/20
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active

interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk

On the Linux hosts I'm using bond-mode 802.3ad, bond-lacp-rate 1 and bond-xmit-hash-policy encap3+4 (I also tried bond-xmit-hash-policy layer3+4).

I'm running iperf in server mode on one host (iperf -s -p 5001 and iperf -s -p 5002), and in client mode on the 3 other hosts, with multiple threads (-P) and connections to both TCP ports (5001 and 5002), to make sure the layer3+4/encap3+4 load balancing has distinct flows to hash on.

My problem is that with ifstat I see:
a) on the iperf "server", only one eth interface is used (and saturated as a result);
b) on the iperf "clients", traffic leaves over both interfaces (balanced).

I'm wondering what's missing here: why does inbound traffic arrive over only one interface instead of both? Below are my configuration and test results for 2 of the hosts (ceph0 and nebula0).
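To sanity-check my assumption that two TCP ports should map to two slaves: the kernel's bonding documentation (Documentation/networking/bonding.txt) gives the layer3+4 formula as ((source port XOR dest port) XOR ((source IP XOR dest IP) AND 0xffff)), taken modulo the slave count. A small shell sketch, using the vlan20 addresses from my configs below and an assumed client ephemeral port of 40000:

```shell
#!/bin/sh
# layer3+4 hash per Documentation/networking/bonding.txt:
#   hash  = (source port XOR dest port) XOR ((source IP XOR dest IP) AND 0xffff)
#   slave = hash modulo slave count
# 10.0.10.70 = 0x0A000A46 and 10.0.10.60 = 0x0A000A3C are the vlan20
# addresses from my configs; source port 40000 is an assumed ephemeral port.
SRC_PORT=40000
IP_XOR=$(( (0x0A000A46 ^ 0x0A000A3C) & 0xffff ))

for DST_PORT in 5001 5002; do
    HASH=$(( (SRC_PORT ^ DST_PORT) ^ IP_XOR ))
    echo "dst port $DST_PORT -> slave $(( HASH % 2 ))"
done
```

With these values the two destination ports do land on different slaves (5001 -> slave 1, 5002 -> slave 0), which matches the balanced outbound traffic I see. As the name "Transmit Hash Policy" suggests, though, this policy only governs which slave each end transmits on.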
root@ceph0:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: encap3+4 (4)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): bandwidth
Active Aggregator Info:
        Aggregator ID: 15
        Number of ports: 2
        Actor Key: 17
        Partner Key: 4
        Partner Mac Address: 5c:fc:66:d4:6d:80

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 78:e3:b5:13:28:ac
Aggregator ID: 15
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 78:e3:b5:13:28:ae
Aggregator ID: 15
Slave queue ID: 0

root@ceph0:~# cat /etc/network/interfaces
auto lo bond0 vlan20 eth0 eth1

iface lo inet loopback

iface bond0 inet static
        address 10.10.10.70
        netmask 255.255.255.0
        slaves eth0 eth1
        bond-mode 802.3ad
        bond-min-links 1
        bond-miimon 100
        bond-lacp-rate 1
        bond-xmit-hash-policy encap3+4

iface vlan20 inet static
        address 10.0.10.70
        netmask 255.255.255.0
        vlan-raw-device bond0

While the clients send, ifstat on the iperf "server" (nebula0) shows all inbound traffic arriving on eth0:

root@ceph0:~# ifstat -i bond0 -i eth0 -i eth1
root@nebula0:~# ifstat -i bond0 -i eth0 -i eth1
       bond0                  eth0                  eth1
 KB/s in  KB/s out    KB/s in  KB/s out    KB/s in  KB/s out
120293.0    860.53   120291.5    333.78       1.58    526.75
120292.5    883.70   120292.3    352.81       0.13    530.89
120292.1    856.83   120292.0    326.76       0.15    530.07
120295.4    862.24   120294.0    354.49       1.37    507.75
120291.4    859.61   120291.4    317.20       0.07    542.41
120293.2    856.05   120293.0    309.97       0.21    546.07
120293.8    854.75   120293.3    330.57       0.50    524.18
120293.4    855.36   120293.3    308.17       0.14    547.19
120784.1    857.80   120783.9    313.48       0.19    544.32

root@nebula0:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: encap3+4 (4)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): bandwidth
System priority: 65535
System MAC address: 2c:59:e5:42:5e:24
Active Aggregator Info:
        Aggregator ID: 8
        Number of ports: 2
        Actor Key: 9
        Partner Key: 2
        Partner Mac Address: 5c:fc:66:d4:6d:80

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 2c:59:e5:42:5e:24
Slave queue ID: 0
Aggregator ID: 8
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 2c:59:e5:42:5e:24
    port key: 9
    port priority: 255
    port number: 1
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: 5c:fc:66:d4:6d:80
    oper key: 2
    port priority: 32768
    port number: 262
    port state: 60

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 2c:59:e5:42:5e:26
Slave queue ID: 0
Aggregator ID: 8
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 2c:59:e5:42:5e:24
    port key: 9
    port priority: 255
    port number: 2
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: 5c:fc:66:d4:6d:80
    oper key: 2
    port priority: 32768
    port number: 263
    port state: 60

cat /etc/network/interfaces
auto lo bond0 vlan20

iface lo inet loopback

iface bond0 inet manual
        slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-min-links 1
        bond-lacp-rate 1
        bond-xmit-hash-policy encap3+4

iface vlan20 inet static
        address 10.0.10.60
        netmask 255.255.255.0
        vlan-raw-device bond0

When I run two iperf -c instances from a single host, sending to two different hosts, I do see 2 Gb/s outbound:

root@ceph0:~# ifstat -i bond0 -i eth0 -i eth1
       bond0                  eth0                  eth1
 KB/s in  KB/s out    KB/s in  KB/s out    KB/s in  KB/s out
 3291.83  241066.0       1.17  120287.4    3290.66  120778.7
 3267.00  240571.6      22.71  120286.5    3244.29  120285.1
 3261.41  240571.4       1.15  120284.7    3260.25  120286.7
 3247.47  240573.1       2.81  120286.3    3244.66  120286.8
 3281.79  240572.5       2.61  120286.3    3279.18  120286.2
 3280.15  241064.7       2.51  120778.8    3277.64  120286.0
 3237.91  240570.1       3.09  120284.9    3234.81  120285.2

Thank you all for your help!

--
With regards,
Evgeniy
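P.S. For completeness, the shape of the test I described above is roughly the following (the -P 4 thread count and -t 30 duration are illustrative values; I varied them, and nebula0's vlan20 address 10.0.10.60 is the one from my config):

```shell
# On the iperf "server" (nebula0): one listener per TCP port.
iperf -s -p 5001 &
iperf -s -p 5002 &

# On each client (e.g. ceph0): multiple threads, split across both
# destination ports so a layer3+4 / encap3+4 hash sees distinct flows.
iperf -c 10.0.10.60 -p 5001 -P 4 -t 30 &
iperf -c 10.0.10.60 -p 5002 -P 4 -t 30
```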