Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-07 Thread Florin Coras
Yes, the whole host stack uses shared memory segments and fifos that the session layer manages. For a brief description of the session layer see [1, 2]. Apart from that, unfortunately, we don’t have any other dev documentation. src/vnet/session/segment_manager.[ch] has some good examples of how
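For orientation, here is a rough conceptual sketch in C of how a session, its segment and its fifo pair relate; all names are hypothetical stand-ins, not the actual structures in src/vnet/session/ or src/svm/.

/* Conceptual sketch only: hypothetical stand-ins for the real structures
 * in src/vnet/session/ and src/svm/. */
#include <stdint.h>

typedef struct demo_fifo demo_fifo_t;	/* stands in for svm_fifo_t */

typedef struct
{
  void *shm_base;		/* shared memory segment mapped by both vpp and the app */
  uint64_t size;		/* fifos for many sessions are carved out of this space */
} demo_segment_t;

typedef struct
{
  demo_segment_t *segment;	/* segment this session's fifos were allocated from */
  demo_fifo_t *rx_fifo;		/* app reads what vpp's TCP stack enqueued */
  demo_fifo_t *tx_fifo;		/* app writes what vpp's TCP stack should send */
} demo_session_t;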

Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-07 Thread Luca Muscariello (lumuscar)
Florin, So the TCP stack does not connect to VPP using memif. I’ll check the shared memory you mentioned. For our transport stack we’re using memif, though that has nothing to do with TCP. From iperf3 to VPP there must be copies anyway, and presumably some batching with timing while doing those copies.

Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-07 Thread Florin Coras
Hi Luca, I guess, as you did, that it’s vectorization. VPP is really good at pushing packets, whereas Linux is good at using all the hw optimizations. The stack uses its own shared memory mechanisms (check svm_fifo_t), but given that you did the testing with iperf3, I suspect the edge is not there.
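To make the svm_fifo_t reference concrete, below is a minimal single-producer/single-consumer ring buffer in the same spirit. It is only an illustration of the shared-memory fifo idea, not VPP's actual implementation (the real code lives in src/svm/svm_fifo.[ch]).

/* Minimal SPSC ring buffer -- an illustration of the shared-memory fifo
 * idea only; VPP's real fifo is in src/svm/svm_fifo.[ch]. */
#include <stdint.h>
#include <stdio.h>

#define FIFO_SIZE 4096		/* must be a power of two */

typedef struct
{
  volatile uint32_t head;	/* advanced by the consumer */
  volatile uint32_t tail;	/* advanced by the producer */
  uint8_t data[FIFO_SIZE];
} demo_fifo_t;

static uint32_t
demo_fifo_enqueue (demo_fifo_t * f, const uint8_t * buf, uint32_t len)
{
  uint32_t used = f->tail - f->head;
  uint32_t free_space = FIFO_SIZE - used;
  uint32_t n = len < free_space ? len : free_space;
  for (uint32_t i = 0; i < n; i++)
    f->data[(f->tail + i) & (FIFO_SIZE - 1)] = buf[i];
  f->tail += n;			/* publish after the copy (a real fifo needs memory barriers) */
  return n;
}

static uint32_t
demo_fifo_dequeue (demo_fifo_t * f, uint8_t * buf, uint32_t len)
{
  uint32_t used = f->tail - f->head;
  uint32_t n = len < used ? len : used;
  for (uint32_t i = 0; i < n; i++)
    buf[i] = f->data[(f->head + i) & (FIFO_SIZE - 1)];
  f->head += n;			/* release the space after the copy */
  return n;
}

int
main (void)
{
  static demo_fifo_t f;		/* in VPP this would live in a shared memory segment */
  uint8_t out[16];
  demo_fifo_enqueue (&f, (const uint8_t *) "hello", 5);
  uint32_t n = demo_fifo_dequeue (&f, out, sizeof (out));
  printf ("dequeued %u bytes: %.*s\n", (unsigned) n, (int) n, out);
  return 0;
}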

Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-07 Thread Luca Muscariello (lumuscar)
Hi Florin, thanks for the info. So, how do you explain that the VPP TCP stack beats the Linux implementation by doubling the goodput? Does it come from vectorization? Any special memif optimization underneath? Luca On 7 May 2018, at 18:17, Florin Coras <fcoras.li...@gmail.com> wrote: Hi Luca, We do

Re: [vpp-dev] NOTIFICATION: FD.io Maintenance

2018-05-07 Thread Vanessa Valderrama
*Reminder of upcoming maintenance* On 04/26/2018 10:02 AM, Vanessa Valderrama wrote: > Please let me know if the following maintenance schedule conflicts with your projects. *Nexus 2018-05-08 - 1700 UTC* The Nexus maintenance will require a restart of Nexus to bump up the JVM and

Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-07 Thread Florin Coras
Hi Luca, We don’t yet support TSO because it requires support within all of vpp (think tunnels). Still, it’s on our list. As for crypto offload, we do have support for IPSec offload with QAT cards and we’re now working with Ping and Ray from Intel on accelerating the TLS OpenSSL engine also

[vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-07 Thread Luca Muscariello
Hi, A few questions about the TCP stack and HW offloading. Below is the experiment under test: [flattened ASCII diagram: two iperf3 TCP endpoints connected over a DPDK 10GE link]

[vpp-dev] nat

2018-05-07 Thread Gulakh
Hi, I have used Linux iptables for NAT, but now I want to use VPP NAT. Could you please give me some examples, corresponding to the iptables ones, on how to use VPP NAT? e.g. iptables rule: , VPP rule: yy. Thanks

Re: [vpp-dev] How to add plugin's statistics into stats thread

2018-05-07 Thread Ni, Hongjun
Hi Jerome, We would like to add LB plugin statistics, including per-VIP connections and per-AS connections for each VIP. The frequency is configurable; 1 second would be better. The volume of data depends on the number of VIPs and ASs. Please refer to the patch below for details: https://gerrit.fd.io/r/#/c/12146/
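For context, a minimal sketch of how a plugin usually keeps a per-object counter that the stats infrastructure can then export. It assumes the standard vlib simple-counter API; the lb_* names here are hypothetical, and the real LB changes are in the Gerrit patch above.

/* Sketch: a per-VIP connection counter using vlib's simple counter API.
 * The lb_* names are hypothetical; see the referenced Gerrit change for
 * the actual LB plugin code. */
#include <vlib/vlib.h>

static vlib_simple_counter_main_t lb_vip_counters = {
  .name = "lb-vip-connections",
};

/* Called when a VIP with index vip_index is added. */
void
lb_vip_counter_init (u32 vip_index)
{
  vlib_validate_simple_counter (&lb_vip_counters, vip_index);
  vlib_zero_simple_counter (&lb_vip_counters, vip_index);
}

/* Called from the data path when a new connection is assigned to a VIP. */
void
lb_vip_counter_inc (vlib_main_t * vm, u32 vip_index)
{
  vlib_increment_simple_counter (&lb_vip_counters, vm->thread_index,
				 vip_index, 1);
}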

Re: [vpp-dev] How to add plugin's statistics into stats thread

2018-05-07 Thread Jerome Tollet
Hi Hongjun, Could you elaborate a bit on the kind of statistics you’d like to create? Frequency and volume of data may be interesting inputs. Jerome From: on behalf of "Ni, Hongjun" Date: Monday, 7 May 2018 at 07:43 To: vpp-dev Cc: "Mori, Naoyuki", Yusuke Tatsumi Subject: [vpp-dev] How to add plug