Yes, the whole host stack uses shared memory segments and fifos that the
session layer manages. For a brief description of the session layer see [1, 2].
Apart from that, unfortunately, we don’t have any other dev documentation.
src/vnet/session/segment_manager.[ch] has some good examples of how
Florin,
So the TCP stack does not connect to VPP using memif.
I’ll check the shared memory you mentioned.
Luca
For our transport stack we’re using memif. Nothing to
do with TCP though.
From iperf3 to VPP there must be copies anyway, though presumably with some
batching over time while doing those copies.
Hi Luca,
I guess, as you did, that it’s vectorization. VPP is really good at pushing
packets, whereas Linux is good at using all the hardware optimizations.
The stack uses its own shared memory mechanisms (check svm_fifo_t), but given
that you did the testing with iperf3, I suspect the edge is not there.
Hi Florin
Thanks for the info.
So, how do you explain that the VPP TCP stack beats the Linux
implementation by doubling the goodput?
Does it come from vectorization?
Any special memif optimization underneath?
Luca
On 7 May 2018, at 18:17, Florin Coras <fcoras.li...@gmail.com> wrote:
Hi Luca,
We do
*Reminder of upcoming maintenance*
On 04/26/2018 10:02 AM, Vanessa Valderrama wrote:
>
> Please let me know if the following maintenance schedule conflicts
> with your projects.
>
> *Nexus 2018-05-08 - 1700 UTC*
>
> The Nexus maintenance will require a restart of Nexus to bump up the
> JVM and
Hi Luca,
We don’t yet support TSO because it requires support throughout VPP (think
tunnels). Still, it’s on our list.
As for crypto offload, we do have support for IPsec offload with QAT cards, and
we’re now working with Ping and Ray from Intel on accelerating the TLS OpenSSL
engine as well.
Hi,
A few questions about the TCP stack and HW offloading.
Below is the experiment under test.
[ASCII diagram, garbled in the archive: an iperf3 endpoint on a TCP stack at
each end, connected over a DPDK 10GE link]
Hi,
I have used Linux iptables for NAT, but now I want to use VPP NAT.
Could you please give me some examples, corresponding to the iptables ones, of
how to use VPP NAT?
e.g. iptables rule : ,vpp rule: yy
thanks
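Not an answer from the thread itself, but as an illustration of the kind of mapping being asked for: the VPP NAT44 plugin CLI has rough equivalents for common iptables rules. The interface names below are examples, and the exact CLI syntax varies between VPP releases, so check the `nat44` CLI help on your build.

```
# Linux: masquerade traffic leaving eth1
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

# VPP NAT44, rough equivalent (interface names are examples):
set interface nat44 in GigabitEthernet0/8/0 out GigabitEthernet0/9/0
nat44 add interface address GigabitEthernet0/9/0

# Linux: DNAT port 80 on 1.2.3.4 to an internal host
iptables -t nat -A PREROUTING -d 1.2.3.4 -p tcp --dport 80 \
  -j DNAT --to-destination 10.0.0.3:80

# VPP NAT44 static mapping (syntax may vary by release):
nat44 add static mapping tcp local 10.0.0.3 80 external 1.2.3.4 80
```

The structural difference is that iptables hooks chains per table, while NAT44 marks whole interfaces as inside/outside and then applies address pools and static mappings.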
Hi Jerome,
We would like to add LB plugin statistics, including per-VIP connections and
per-AS connections for each VIP.
The frequency is configurable; 1 second would be better.
The volume of data depends on the number of VIPs and ASes.
Please refer to below patch for details:
https://gerrit.fd.io/r/#/c/12146/
Hi Hongjun,
Could you elaborate a bit on the kind of statistics you’d like to create?
Frequency and volume of data may be interesting inputs.
Jerome
From: on behalf of "Ni, Hongjun"
Date: Monday, 7 May 2018 at 07:43
To: vpp-dev
Cc: "Mori, Naoyuki", Yusuke Tatsumi
Subject: [vpp-dev] How to add plug