Re: [patch][lagg] - Set a better granularity and distribution on roundrobin protocol.

2014-07-28 Thread Marcelo Araujo


 


Hey np@



 Are you saying lagg's roundrobin implementation is already spraying
 packets for the same flow across interfaces?


Yes, it does. If you check the SACK counters you can see that it reorders
packets by itself, with or without the patch. All this patch does is send
more packets through one interface before switching to the next one, so we
end up with fewer SACKs and better throughput, and we gain a knob for fine
tuning.
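
A minimal userland sketch of the selection logic being described (purely
illustrative: the function, the stride argument, and the demo numbers are
not from the actual patch, which exposes the count as the
net.link.lagg.rr_packets sysctl):

#include <stdio.h>

/*
 * Classic round robin advances to the next port on every packet; the
 * patched behaviour keeps the same port for "stride" consecutive
 * packets before advancing.
 */
static unsigned
rr_pick_port(unsigned seq, unsigned stride, unsigned nports)
{
        return ((seq / stride) % nports);
}

int
main(void)
{
        unsigned seq;

        for (seq = 0; seq < 8; seq++)
                printf("packet %u -> port %u (stride 1) / port %u (stride 4)\n",
                    seq, rr_pick_port(seq, 1, 2), rr_pick_port(seq, 4, 2));
        return (0);
}

With stride 1 the ports alternate 0,1,0,1,...; with stride 4 the first four
packets go out port 0 and the next four out port 1, which is the burst
behaviour that keeps consecutive TCP segments mostly in order on the wire.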


 That would make it
 unsuitable for anything TCP.


This is something everybody knows: round robin hurts TCP by itself,
meaning performance will drop.


 But then your patch isn't making it any
 worse so I don't have any objection to it any more.


Thank you so much, and sorry for my late reply; I got busy testing other
things.



 Looks like loadbalance does the right thing for flows.


Yes, loadbalance has no such issue; the problem is mainly in round robin.

Best Regards,


-- 
Marcelo Araujo  ara...@freebsd.org
http://www.FreeBSD.org - Power To Server.


Re: [patch][lagg] - Set a better granularity and distribution on roundrobin protocol.

2014-07-18 Thread Marcelo Araujo
Hello guys,

I made a few changes to the lagg(4) patch. I also ran tests using igb(4),
ixgbe(4) and em(4); everything seems to work pretty well.

I'm wondering if anyone else could review it, and what I need to do to get
this patch committed.

Best Regards,




-- 
Marcelo Araujo  ara...@freebsd.org
http://www.FreeBSD.org - Power To Server.


if_lagg-rr.patch
Description: Binary data

Re: [patch][lagg] - Set a better granularity and distribution on roundrobin protocol.

2014-07-18 Thread Adrian Chadd
Hi,

I strongly object to having a round-robin method like this. Yes, we won't
get more than one link's worth of bandwidth out of a single stream, but
you're showing that you can't even get that much. There's still something
else weird going on.

I'm sorry, but introducing more out-of-order possibilities is being a bad
network citizen.



-a




Re: [patch][lagg] - Set a better granularity and distribution on roundrobin protocol.

2014-07-18 Thread Navdeep Parhar
On 07/18/14 00:49, Marcelo Araujo wrote:
 Hello guys,

 I made a few changes to the lagg(4) patch. I also ran tests using igb(4),
 ixgbe(4) and em(4); everything seems to work pretty well.

 I'm wondering if anyone else could review it, and what I need to do to
 get this patch committed.

Deliberately putting out-of-order packets on the wire is never a good
idea.  This would count as a serious regression in lagg(4) imho.

Regards,
Navdeep


 


Re: [patch][lagg] - Set a better granularity and distribution on roundrobin protocol.

2014-07-18 Thread Marcelo Araujo
2014-07-19 2:18 GMT+08:00 Navdeep Parhar npar...@gmail.com:


 Deliberately putting out-of-order packets on the wire is never a good
 idea.  This would count as a serious regression in lagg(4) imho.

 Regards,
 Navdeep



I'm wondering if anyone has actually tested the patch, because, as I
explained in another email, the number of SACKs is much lower with it. I
have put some pcap files here: http://people.freebsd.org/~araujo/lagg/

Also, as far as I know, the current roundrobin implementation has no
mechanism at all to control the order of the packets that go onto the
wire. All this patch does is, instead of sending a single packet through
one interface and then switching to the next, send X packets (where X is
the number of packets defined via sysctl) and then switch to the next
interface.

So, could you show me where this patch deliberately puts out-of-order
packets? Did I miss anything?


Best Regards,
-- 
Marcelo Araujo  ara...@freebsd.org
http://www.FreeBSD.org - Power To Server.


Re: [patch][lagg] - Set a better granularity and distribution on roundrobin protocol.

2014-07-18 Thread Navdeep Parhar
On 07/18/14 19:06, Marcelo Araujo wrote:
 
 
 
 I'm wondering if anyone has actually tested the patch, because, as I
 explained in another email, the number of SACKs is much lower with it. I
 have put some pcap files here: http://people.freebsd.org/~araujo/lagg/

 Also, as far as I know, the current roundrobin implementation has no
 mechanism at all to control the order of the packets that go onto the
 wire. All this patch does is, instead of sending a single packet through
 one interface and then switching to the next, send X packets (where X is
 the number of packets defined via sysctl) and then switch to the next
 interface.

 So, could you show me where this patch deliberately puts out-of-order
 packets? Did I miss anything?

Are you saying lagg's roundrobin implementation is already spraying
packets for the same flow across interfaces?  That would make it
unsuitable for anything TCP.  But then your patch isn't making it any
worse so I don't have any objection to it any more.

Looks like loadbalance does the right thing for flows.

Regards,
Navdeep


Re: [patch][lagg] - Set a better granularity and distribution on roundrobin protocol.

2014-07-18 Thread Adrian Chadd
On 18 July 2014 19:06, Marcelo Araujo araujobsdp...@gmail.com wrote:



 So, could you show me where this patch deliberately puts out-of-order
 packets? Did I miss anything?

It doesn't introduce it, but it still allows potentially out-of-order
behaviour depending upon CPU load and NIC scheduling.

If you're seeing reduced ACKs / retransmits by doing this then there's got
to be some other underlying factor causing it. That's what I think needs
to be fixed, rather than papering over it with more round-robin hacks. :-P




-a


Re: [patch][lagg] - Set a better granularity and distribution on roundrobin protocol.

2014-06-23 Thread Marcelo Araujo
Hello Adrian,


2014-06-23 12:16 GMT+08:00 Adrian Chadd adr...@freebsd.org:

 ...

 It's an interesting idea, but doing round robin like that may
 introduce out of order packets.


Actually, the round robin implementation as it is already causes
out-of-order packets, but almost all the time SACK can recover from it.

In my tests using iperf, when we send a bigger number of packets through
the same interface before switching to the next one, I can see that we get
fewer SACK requests, and I believe that is why I can reach better
throughput.

The test is very simple: iperf -s on one side and iperf -c <ip> -i 1 -t 10
on the other.

As an example:
1) Without changing the number of packets:
43 SACK recovery episodes
187 segment rexmits in SACK recovery episodes
270776 byte rexmits in SACK recovery episodes
172688 SACK options (SACK blocks) received
0 SACK options (SACK blocks) sent
0 SACK scoreboard overflow
0 input SACK chunks
0 output SACKs

2) Set 50 packets per interface:
6 SACK recovery episodes
16 segment rexmits in SACK recovery episodes
23168 byte rexmits in SACK recovery episodes
111626 SACK options (SACK blocks) received
0 SACK options (SACK blocks) sent
0 SACK scoreboard overflow
0 input SACK chunks
0 output SACKs




 What's the actual problem you're seeing? Are the transmit queues
 filling up? Is the distribution with flowid/curcpu not good enough?


I have imported Scott's patch; I believe you are talking about r260070. I
didn't pay attention to the flowid/curcpu distribution and I can't tell
you whether it is the root cause or not, but in my case it didn't solve
the bad performance of round robin. With all the other lagg(4) protocols,
the throughput reaches the limit of the NIC.

It may well be that the transmit queue isn't being filled up, or hangs for
some reason; that is something I need to check.

My suspicion is in how ixgbe(4) triggers TSO: it seems the transmit queue
is not completely filled, which might delay the transmission or lose
packets, or perhaps lose the entire queue. Any tips on how to debug TSO
would be very welcome.



 Scott saw this happen at Netflix. He added a lagg twiddle to set which
 set of bits to care about in the flowid when picking an interface. The
 ixgbe hashing was being done on the low x bits, where x is related to
 how many CPUs you have (2 CPUs? 1 bit. 8 CPUs? 3 bits. etc.) lagg was
 doing the same thing on the same low-order set of bits. He modified lagg
 so you could pick some new starting point a few bits up in the flowid to
 pick a lagg interface with. That fixed the distribution issue and also
 kept the in-orderness of it all.


I thought Scott's patch was more focused on LACP; I didn't realize it
would help the other aggregation protocols too. Anyway, for round robin,
with or without r260070 things don't change much, at least in my
environment.

Best Regards,




Re: [patch][lagg] - Set a better granularity and distribution on roundrobin protocol.

2014-06-23 Thread Adrian Chadd
Hi,

No, don't introduce out of order behaviour. Ever. You may not think
it's a problem for TCP, but UDP things and VPN things will start
getting very angry. There are VPN configurations out there that will
drop the VPN if frames are out of order.

The ixgbe driver is setting the flowid to the MSI-X queue ID, rather
than a 32-bit hash value that is unique to the flow. That makes it hard
to do traffic distribution where the flowid is available.
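
A toy illustration of why that hurts (the queue and port counts are made
up; none of this is ixgbe code): if the flowid is a small queue ID rather
than a flow hash, a flowid-based port pick degenerates into a fixed
queue-to-port mapping.

#include <stdio.h>

/*
 * Assume a driver with 4 MSI-X queues that sets the flowid to the
 * queue ID (0..3).  A "flowid % nports" pick then maps queues to
 * ports statically, regardless of the actual flows.
 */
int
main(void)
{
        unsigned nports = 3, nqueues = 4, qid;

        for (qid = 0; qid < nqueues; qid++)
                printf("queue-ID flowid %u -> lagg port %u\n",
                    qid, qid % nports);
        /*
         * Port 0 carries queues 0 and 3; the split is fixed per
         * queue, not per flow, so the load across ports can stay
         * badly skewed.
         */
        return (0);
}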

There's a lagg option to re-hash the mbuf rather than rely on the
flowid for outbound port choice - have you looked at using that? Did
that make any difference?



-a


Re: [patch][lagg] - Set a better granularity and distribution on roundrobin protocol.

2014-06-23 Thread Marcelo Araujo
2014-06-24 6:54 GMT+08:00 Adrian Chadd adr...@freebsd.org:

 Hi,

 No, don't introduce out of order behaviour. Ever.


Yes, it has out-of-order behavior, though with my patch much less. I
uploaded two pcap files so you can see for yourself, if you don't believe
what I'm telling you.

Test done using: iperf -s and iperf -c <ip> -i 1 -t 10.

1) Without changing the number of packets (default round robin behavior):
http://people.freebsd.org/~araujo/lagg/lagg-nop.cap
8 out of order packets.
Several SACKs.

2) With the number of packets set to 50:
http://people.freebsd.org/~araujo/lagg/lagg.cap
0 out of order packets.
Less SACKs.


 You may not think
 it's a problem for TCP, but UDP things and VPN things will start
 getting very angry. There are VPN configurations out there that will
 drop the VPN if frames are out of order.


I don't think it will be a problem for TCP, but in some way it already
is: less throughput, as I showed before, and fewer SACKs with the patch.
About the VPNs, please tell me which software, and let me know where I can
get a sample so I can build a testbed.

However, to be very honest, I don't believe anyone here who changes
something in the network protocols runs such an extensive testbed. It is
almost impossible to predict which software will work and which won't, and
I don't believe anyone here has all of that stuff at hand.



 The ixgbe driver is setting the flowid to the msix queue ID, rather
 than a 32 bit unique flow id hash value for the flow. That makes it
 hard to do traffic distribution where the flowid is available.


Thanks for the explanation.



 There's a lagg option to re-hash the mbuf rather than rely on the
 flowid for outbound port choice - have you looked at using that? Did
 that make any difference?


Yes, I set net.link.lagg.0.use_flowid to 0; it makes a little difference
compared to the default round robin implementation, but I still can't
reach more than 5 Gbit/s. With my patch and the number of packets set to
50, it improved a bit too.

So, thank you so much for the review. I don't know whether you have the
time and a testbed to run a real test as I'm doing, but I would be happy
if you or more people could test the patch. Also, I only have ixgbe(4) to
test with; I would appreciate it if this patch could be tested with other
NICs too.

Best Regards,

-- 
Marcelo Araujo  ara...@freebsd.org
http://www.FreeBSD.org - Power To Server.


Re: [patch][lagg] - Set a better granularity and distribution on roundrobin protocol.

2014-06-22 Thread Adrian Chadd
...

It's an interesting idea, but doing round robin like that may
introduce out of order packets.

What's the actual problem you're seeing? Are the transmit queues
filling up? Is the distribution with flowid/curcpu not good enough?

Scott saw this happen at Netflix. He added a lagg twiddle to set which
set of bits to care about in the flowid when picking an interface. The
ixgbe hashing was being done on the low x bits, where x is related to
how many CPUs you have (2 CPUs? 1 bit. 8 CPUs? 3 bits. etc.) lagg was
doing the same thing on the same low-order set of bits. He modified lagg
so you could pick some new starting point a few bits up in the flowid to
pick a lagg interface with. That fixed the distribution issue and also
kept the in-orderness of it all.
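
A sketch of that bit-selection idea (illustrative only: the shift argument
stands in for Scott's tunable - presumably the r260070 change mentioned
elsewhere in this thread - and the sample flowids are made up):

#include <stdio.h>
#include <stdint.h>

/*
 * Pick the lagg port from flowid bits starting "shift" bits up,
 * instead of from the low-order bits the NIC already consumed for
 * its own queue choice.  All packets of one flow share one flowid,
 * so a flow still sticks to a single port.
 */
static unsigned
port_from_flowid(uint32_t flowid, unsigned shift, unsigned nports)
{
        return ((flowid >> shift) % nports);
}

int
main(void)
{
        /* Two made-up flowids that agree in their low bits. */
        uint32_t a = 0x10, b = 0x20;

        printf("shift 0: a -> port %u, b -> port %u (collide)\n",
            port_from_flowid(a, 0, 2), port_from_flowid(b, 0, 2));
        printf("shift 4: a -> port %u, b -> port %u (spread)\n",
            port_from_flowid(a, 4, 2), port_from_flowid(b, 4, 2));
        return (0);
}

With shift 0, flows whose flowids differ only in their upper bits all land
on the same port; moving the starting bit up decorrelates lagg's pick from
the NIC's queue choice while keeping each flow on one port.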

2c,


-a

On 22 June 2014 19:27, Marcelo Araujo araujobsdp...@gmail.com wrote:
 Hello guys,

 I made some changes to the roundrobin protocol so that, via sysctl(8),
 you can now set a better packet distribution among the interfaces that
 are part of the lagg(4) group.

 My motivation for this change was interfaces that use TSO, for example
 ixgbe(4), where the performance is terrible: as we can't fill the TSO
 buffer at once, the throughput drops dramatically and we get many more
 SACKs between hosts.

 So, with this patch we can set the number of packets that will be sent
 before switching to the next interface.

 In my testbed using ixgbe(4), I got very good performance, as you can
 see below:

 1) Without patch:
 
 Client connecting to 192.168.1.2, TCP port 5001
 TCP window size: 32.5 KByte (default)
 
 [  3] local 192.168.1.1 port 32808 connected with 192.168.1.2 port 5001
 [ ID] Interval   Transfer Bandwidth
 [  3]  0.0- 1.0 sec   406 MBytes  3.40 Gbits/sec
 [  3]  1.0- 2.0 sec   391 MBytes  3.28 Gbits/sec
 [  3]  2.0- 3.0 sec   406 MBytes  3.41 Gbits/sec
 [  3]  3.0- 4.0 sec   585 MBytes  4.91 Gbits/sec
 [  3]  4.0- 5.0 sec   477 MBytes  4.00 Gbits/sec
 [  3]  5.0- 6.0 sec   429 MBytes  3.60 Gbits/sec
 [  3]  6.0- 7.0 sec   520 MBytes  4.36 Gbits/sec
 [  3]  7.0- 8.0 sec   385 MBytes  3.23 Gbits/sec
 [  3]  8.0- 9.0 sec   414 MBytes  3.48 Gbits/sec
 [  3]  9.0-10.0 sec   515 MBytes  4.32 Gbits/sec
 [  3]  0.0-10.0 sec  4.42 GBytes  3.80 Gbits/sec

 2) With patch:
 
 Client connecting to 192.168.1.2, TCP port 5001
 TCP window size: 32.5 KByte (default)
 
 [  3] local 192.168.1.1 port 10526 connected with 192.168.1.2 port 5001
 [ ID] Interval   Transfer Bandwidth
 [  3]  0.0- 1.0 sec   694 MBytes  5.83 Gbits/sec
 [  3]  1.0- 2.0 sec   999 MBytes  8.38 Gbits/sec
 [  3]  2.0- 3.0 sec  1.17 GBytes  10.1 Gbits/sec
 [  3]  3.0- 4.0 sec  1.34 GBytes  11.5 Gbits/sec
 [  3]  4.0- 5.0 sec  1.15 GBytes  9.91 Gbits/sec
 [  3]  5.0- 6.0 sec  1.19 GBytes  10.2 Gbits/sec
 [  3]  6.0- 7.0 sec  1.08 GBytes  9.23 Gbits/sec
 [  3]  7.0- 8.0 sec  1.10 GBytes  9.45 Gbits/sec
 [  3]  8.0- 9.0 sec  1.27 GBytes  10.9 Gbits/sec
 [  3]  9.0-10.0 sec  1.39 GBytes  12.0 Gbits/sec
 [  3]  0.0-10.0 sec  11.3 GBytes  9.74 Gbits/sec

 So, basically we have a sysctl(8) called net.link.lagg.rr_packets where
 we can set the number of packets that will be sent before roundrobin
 moves to the next interface.

 Any comment and review are very appreciated.

 Best Regards,

 -- 
 Marcelo Araujo  ara...@freebsd.org
 http://www.FreeBSD.org - Power To Server.
