Hi there,
We are trying to use the VPP cut-through feature (version 20.09). What confuses me is why a separate fifo segment is used for each connection. In our case we establish a lot of cut-through connections, and we find that VPP sends a lot of file descriptors to VCL
Hi Hyong,
It ultimately depends on what the “function” does and how you plan to solve worker
contention for whatever resources need to be shared. Assuming that 1) the
“function” must be executed asynchronously and 2) the node forwards the packets
after it copies the interesting data out of them, some
Hi,
Say we will have Plugin A that needs to:
1. Receive packets at an interface
2. Update some state about received packets
3. Send the updated state to another "function" for further processing
4. Forward received packets to the next node
What would be the best practice for the design of Step
Thanks Neale. I will try it out.
From: Neale Ranns
Sent: Thursday, February 25, 2021 3:16 AM
To: Govindarajan Mohandoss; vpp-dev
Cc: nd
Subject: Re: [vpp-dev] IPSec ESP Tunnel mode config
Hi Govind,
Please see:
https://wiki.fd.io/view/VPP/IPSec
/neale
From: Govindarajan Mohandoss
Hi List,
We have been using policer_add_del and classify_add_del_session in single-threaded
VPP (i.e. one main thread only) and both APIs were giving decent
performance, but after switching to multi-threaded VPP the performance seems to be
drastically lower.
To test this out a small test program was
Hi Elias,
Thank you! Actually I built from binaries. Could you please provide me the
link to the 20.05 repository?
About the line "Note that this can help with one specific kind of packet drops
in VPP NAT called "congestion drops"", would you mind giving me instructions on how
to troubleshoot VPP
From: Govindarajan Mohandoss
Date: Wednesday, 24 February 2021 at 20:34
To: Govindarajan Mohandoss, Neale Ranns, vpp-dev
Cc: nd, nd
Subject: RE: [vpp-dev] IPSec ESP Tunnel mode config
Hi Neale,
I was wrong. I did a
Hi Marcos,
If you are building VPP 20.05 from source, then the easiest way is to
simply change the value at "#define NAT_FQ_NELTS 64"
in src/plugins/nat/nat.h from 64 to something larger; we have been
using 512, which seems to work fine in our case.
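For building from source, the edit above can be scripted before compiling. This is a hypothetical patch step (the path and macro name come from this thread; the NAT_H variable is illustrative):

```shell
# Patch NAT_FQ_NELTS from 64 to 512 in an assumed VPP 20.05 source tree.
NAT_H=${NAT_H:-src/plugins/nat/nat.h}
sed -i 's/#define NAT_FQ_NELTS 64/#define NAT_FQ_NELTS 512/' "$NAT_H"
# Confirm the change took effect before rebuilding.
grep 'NAT_FQ_NELTS' "$NAT_H"
```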
Note that this can help with one specific kind