Well, I finally got it to work -- it turns out that I needed to disable the
Spanning Tree Protocol feature of the switch, as the "ARP magic trick" of alb
does not work well with STP.
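(For anyone repeating this: my switch's menu is vendor-specific, but on a managed switch the usual fix is either disabling STP or enabling a portfast-style option on the server-facing ports. On a Linux software bridge the equivalent knob would look something like the sketch below -- br0 is a placeholder name, not from my setup:)

```shell
# turn spanning tree off on a Linux software bridge (iproute2)
ip link set dev br0 type bridge stp_state 0
# legacy bridge-utils equivalent
brctl stp br0 off
```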
--- mike t.
From: fooler mail <[email protected]>
To: Michael Tinsay <[email protected]>; Philippine Linux Users' Group (PLUG)
Technical Discussion List <[email protected]>
Sent: Sunday, 29 March 2015, 9:59
Subject: Re: [plug] balance-alb question
hi mike,
160 MBps x 8 bits/byte is 1.28 Gbps, which is more than your 1 Gbps network
card can carry... remember, your speed is only as fast as the slowest link..
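to spell the arithmetic out (plain shell arithmetic, nothing environment-specific):

```shell
# two clients at ~80 MBps each would be 160 MBps aggregate at the server;
# in bits that is more than one gigabit link can carry:
echo "$((160 * 8)) Mbps needed"      # 1280 Mbps > 1000 Mbps
# ceiling of a single 1 Gbps link in megabytes per second:
echo "$((1000 / 8)) MBps per link"   # 125 MBps wire-rate ceiling
```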
i duplicated your network setup on my laptop running windows 8 enterprise
(i7, 32gb ram, ssd) as the host OS, with virtualbox as the
virtualization software and redhat as the guest OS... here is the setup:
network segment: 192.168.255.128/25
server aka serverbond5 (192.168.255.254/25):
[root@serverbond5 ~]# ifconfig bond0 | grep -B 2 "inet addr"
bond0 Link encap:Ethernet HWaddr 08:00:27:09:98:FB
inet addr:192.168.255.254 Bcast:192.168.255.255 Mask:255.255.255.128
[root@serverbond5 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:09:98:fb
Slave queue ID: 0
Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:83:44:19
Slave queue ID: 0
client1 aka clientbond6 (192.168.255.129/25):
[root@clientbond6 ~]# ifconfig eth1 | grep -B 2 "inet addr"
eth1 Link encap:Ethernet HWaddr 08:00:27:88:37:C8
inet addr:192.168.255.129 Bcast:192.168.255.255 Mask:255.255.255.128
client2 aka clientbond7 (192.168.255.130/25):
[root@clientbond7 ~]# ifconfig eth1 | grep -B 2 "inet addr"
eth1 Link encap:Ethernet HWaddr 08:00:27:FE:6A:31
inet addr:192.168.255.130 Bcast:192.168.255.255 Mask:255.255.255.128
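(for completeness, the bond itself was brought up along these lines -- a sketch using iproute2/ifenslave with the interface names above; on redhat this would normally live in ifcfg files instead:)

```shell
# load the bonding driver in adaptive load balancing mode (mode 6)
modprobe bonding mode=balance-alb
ip link set eth1 down
ip link set eth2 down
ifenslave bond0 eth1 eth2                  # enslave both NICs
ip addr add 192.168.255.254/25 dev bond0
ip link set bond0 up
```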
now let us see how bond mode 6 or alb works for *receive* load balancing...
pinging from client1 to server...
client1:
[root@clientbond6 ~]# ping -c 1 192.168.255.254
PING 192.168.255.254 (192.168.255.254) 56(84) bytes of data.
64 bytes from 192.168.255.254: icmp_seq=1 ttl=64 time=1.50 ms
server listening at eth1:
[root@serverbond5 ~]# tcpdump -i eth1 -nn icmp
tcpdump: WARNING: eth1: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
20:45:04.055800 IP 192.168.255.129 > 192.168.255.254: ICMP echo
request, id 30724, seq 1, length 64
20:45:04.055822 IP 192.168.255.254 > 192.168.255.129: ICMP echo reply,
id 30724, seq 1, length 64
server listening at eth2:
no icmp packets captured
*********************
pinging from client2 to server:
client2:
[root@clientbond7 ~]# ping -c 1 192.168.255.254
PING 192.168.255.254 (192.168.255.254) 56(84) bytes of data.
64 bytes from 192.168.255.254: icmp_seq=1 ttl=64 time=40.1 ms
server listening at eth1:
no icmp packets captured
server listening at eth2:
[root@serverbond5 ~]# tcpdump -i eth2 -nn icmp
tcpdump: WARNING: eth2: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
20:52:36.973335 IP 192.168.255.130 > 192.168.255.254: ICMP echo
request, id 52227, seq 1, length 64
20:52:36.973356 IP 192.168.255.254 > 192.168.255.130: ICMP echo reply,
id 52227, seq 1, length 64
at this point you can see that client1's packets go to eth1 and client2's
packets go to eth2...
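(this is the "ARP magic" mike mentioned: the server answers each client's ARP request for 192.168.255.254 with a different slave MAC, so each client transmits toward a different physical port. a sketch of how to confirm it from the clients' ARP caches -- the expected MACs are the slaves' permanent addresses shown above, not captured output:)

```shell
# on client1: the server's IP should resolve to eth1's permanent MAC
ip neigh show 192.168.255.254   # expect lladdr 08:00:27:09:98:fb
# on client2: the same IP should resolve to eth2's permanent MAC
ip neigh show 192.168.255.254   # expect lladdr 08:00:27:83:44:19
```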
now let us repeat the process and see if the results are still the same..
pinging from client1 to server:
[root@clientbond6 ~]# ping -c 1 192.168.255.254
PING 192.168.255.254 (192.168.255.254) 56(84) bytes of data.
64 bytes from 192.168.255.254: icmp_seq=1 ttl=64 time=0.278 ms
from server listening at eth1:
[root@serverbond5 ~]# tcpdump -i eth1 -nn icmp
tcpdump: WARNING: eth1: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
21:04:03.722654 IP 192.168.255.129 > 192.168.255.254: ICMP echo
request, id 62723, seq 1, length 64
from server listening at eth2:
[root@serverbond5 ~]# tcpdump -i eth2 -nn icmp
tcpdump: WARNING: eth2: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
21:04:03.722681 IP 192.168.255.254 > 192.168.255.129: ICMP echo reply,
id 62723, seq 1, length 64
as you can see there.. icmp echo request goes to eth1 and icmp echo
reply goes to eth2...
****************
pinging from client2 to server:
[root@clientbond7 ~]# ping -c 1 192.168.255.254
PING 192.168.255.254 (192.168.255.254) 56(84) bytes of data.
64 bytes from 192.168.255.254: icmp_seq=1 ttl=64 time=0.436 ms
from server listening at eth1:
[root@serverbond5 ~]# tcpdump -i eth1 -nn icmp
tcpdump: WARNING: eth1: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
21:30:34.090810 IP 192.168.255.254 > 192.168.255.130: ICMP echo reply,
id 61443, seq 1, length 64
from server listening at eth2:
[root@serverbond5 ~]# tcpdump -i eth2 -nn icmp
tcpdump: WARNING: eth2: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
21:30:34.090787 IP 192.168.255.130 > 192.168.255.254: ICMP echo
request, id 61443, seq 1, length 64
as you can see there.. icmp echo request goes to eth2 and icmp echo
reply goes to eth1
so this means... receive load balancing is balancing perfectly...
i did a test on the transmit load balancing of mode 6 and it is load
balancing perfectly as well.... i was wrong about the hashing of the
client address, but i was right that both slaves are
active...
to summarize... the fastest speed you can get is that of the slowest
link between the two end points...
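mike, if you want to take the disks out of the picture when you retest, iperf3 from both clients at once is a quick check (a sketch; any recent iperf3 should do):

```shell
# on the server:
iperf3 -s
# on client1 and client2, started at the same time:
iperf3 -c 192.168.255.254 -t 30
# with receive load balancing working, each client should report close to
# ~940 Mbps instead of the two of them sharing a single 1 Gbps link
```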
fooler.
On Fri, Mar 27, 2015 at 10:49 PM, Michael Tinsay <[email protected]> wrote:
> Thanks Holden and Fooler.
>
> The crux of the issue is that I cannot seem to get the supposed "receive
> load balancing" feature of balance-alb working as I expect -- I'm still
> only getting 100MBps when simultaneous transfers from both clients to the
> server are happening; I was expecting to see at least 160MBps if each
> client is transmitting to a different NIC in the server's bonded
> interface.
>
>
> --- mike t.
>
>
>
> ________________________________
> From: Holden Hao <[email protected]>
> To: Philippine Linux Users' Group (PLUG) Technical Discussion List
> <[email protected]>
> Cc: Michael Tinsay <[email protected]>
> Sent: Friday, 27 March 2015, 11:18
> Subject: Re: [plug] balance-alb question
>
> Mike,
>
> The following links might help you. The first one is a detailed overview of
> link aggregation. It uses round robin mode in the examples but the
> explanations and tools might help. The second one is about copying large
> data between server and host using various FOSS tools.
>
> Speed Up Your Home Network With Link Aggregation in Linux Mint 17 and
> Xubuntu 14.04
> https://delightlylinux.wordpress.com/2014/07/12/speed-up-your-home-network-with-link-aggregation-in-linux-mint-17-and-xubuntu-14-04/
>
> How to transfer large amounts of data via network
> http://moo.nac.uci.edu/~hjm/HOWTO_move_data.html
>
> HTH,
>
>
> Holden
>
>
>
> On Fri, Mar 27, 2015 at 8:21 AM, fooler mail <[email protected]> wrote:
>
> don't forget to also set your switch mtu to 9000, as path mtu discovery
> (PMTUD) will choose the lowest mtu along the path between the two hosts...
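> a sketch of the linux side (the switch side is vendor-specific):
>
> ```shell
> ip link set dev eth1 mtu 9000
> ip link set dev eth2 mtu 9000
> ip link set dev bond0 mtu 9000
> # verify end to end: 8972 = 9000 - 20 (IP) - 8 (ICMP); -M do forbids fragmentation
> ping -c 1 -M do -s 8972 192.168.255.254
> ```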
>
> fooler.
>
> On Thu, Mar 26, 2015 at 6:53 AM, fooler mail <[email protected]> wrote:
>> if i recall correctly.. alb slaves are both active and the one showing
>> is just the current active one... alb uses hashing to determine which
>> slave interface should be used for a given client.. the hash value is based
>> on the client's address.. if both clients' hash values land on the same
>> index.. then they will be using the same slave... but read the bond alb
>> source code to know the details...
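>> as a toy sketch of that idea (not the real bond_alb.c code; the last
>> MAC octets below are your two clients' addresses from this thread):
>>
>> ```shell
>> # reduce the client's address to a slave index; with 2 slaves, two
>> # clients whose values collide end up sharing one slave
>> num_slaves=2
>> echo $(( 0x31 % num_slaves ))   # MAC ending in :31 -> slave index 1
>> echo $(( 0xC8 % num_slaves ))   # MAC ending in :C8 -> slave index 0
>> ```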
>>
>> in the meantime.. check whether your mtu is set to jumbo frames (eg. mtu
>> 9000), as you can't saturate your gigabit card if you are only using
>> mtu 1500...
>>
>> fooler.
>>
>> On Tue, Mar 24, 2015 at 4:34 AM, Michael Tinsay <[email protected]>
>> wrote:
>>> Hello PLUGgers!
>>>
>>>
>>> I have a small setup here in the office to test NIC bonding performance,
>>> specifically balance-alb or mode 6. So I have the following machines:
>>>
>>> 1) SERVER: a desktop PC with two LAN ports (the mobo-built-in one plus a
>>> PCI-card one, both are Realtek chipset using the r8169 driver)
>>>
>>> 2) CLIENT-A: a desktop PC with just 1 LAN port - the built-in one.
>>>
>>> 3) CLIENT-B: a laptop with a built-in LAN port.
>>>
>>> 4) An 8-port unmanaged Cisco/Linksys switch.
>>>
>>> All LAN ports in all machines are capable of 1GigE.
>>>
>>>
>>> My goal is to see if the following will hold true:
>>>
>>> a) Simultaneous transfer of a 32GB-sized file from both CLIENT-A and
>>> CLIENT-B to SERVER and getting a sustained transfer rate of 100MBps or so
>>> per client.
>>>
>>> b) Simultaneous transfer of a 32GB-sized file from SERVER to both clients
>>> and getting a sustained transfer rate of 100MBps or so per client.
>>>
>>>
>>> However, I can't seem to achieve either of these goals.
>>>
>>> After a lot of googling around, "cat /proc/net/bonding/bond0" shows that
>>> while both ethernet ports are slaved, only one is active.
>>>
>>> How do I make both slaves active? My Google Fu is failing me on this.
>>>
>>>
>>> --- mike t.
>>>
>>>
>>>
>>>
>>>
>>> _________________________________________________
>>> Philippine Linux Users' Group (PLUG) Mailing List
>>> http://lists.linux.org.ph/mailman/listinfo/plug
>>> Searchable Archives: http://archives.free.net.ph