Re: [vpp-dev] VRF-aware bypass nodes

2020-03-02 Thread Nick Zavaritsky
names, etc. Regards, John From: vpp-dev@lists.fd.io On Behalf Of Nick Zavaritsky Sent: Wednesday, February 26, 2020 12:23 PM To: vpp-dev@lists.fd.io Subject: [vpp-dev] VRF-aware bypass nodes

[vpp-dev] gtpu: breaking changes proposal: support different source and destination TEIDs

2020-02-26 Thread Nick Zavaritsky
Hi, According to 3GPP TS 29.281, the TEIDs used in a GTPU tunnel can differ between the "up" and "down" directions. At EMnify we depend on this feature. We developed a patch, but it changes the API in incompatible ways; excerpt: @@ -38,8 +39,9 @@ define gtpu_add_del_tunnel vl_api_interface_index_t
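
A sketch of the shape such a change could take (hypothetical layout, not necessarily the merged message): keep teid as the local/rx id and add a separate tteid for the remote/tx id. Types like vl_api_address_t are assumed to come from vpp's .api-generated headers; the other fields follow the gtpu_add_del_tunnel message of that era.

    /* Hypothetical sketch of a gtpu_add_del_tunnel message with split
     * rx/tx TEIDs. Not the merged API; "tteid" is one possible name. */
    typedef struct
    {
      u32 client_index;
      u32 context;
      bool is_add;
      vl_api_address_t src_address;
      vl_api_address_t dst_address;
      vl_api_interface_index_t mcast_sw_if_index;
      u32 encap_vrf_id;
      u32 decap_next_index;
      u32 teid;			/* rx: TEID expected in incoming packets */
      u32 tteid;		/* tx: TEID stamped on outgoing packets */
    } vl_api_gtpu_add_del_tunnel_t;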

[vpp-dev] VRF-aware bypass nodes

2020-02-26 Thread Nick Zavaritsky
Hi, There are multiple kinds of bypass nodes in VPP. Bypass nodes intercept packets matching certain criteria and pass them directly to the protocol handler node. I am going to use GTPU as the illustrative example. A bypass node SHOULD intercept packets with a destination IP matching a local
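
A minimal sketch of the VRF-aware variant of that check (not the in-tree bypass code; helper names are from vpp's FIB API, and b/ip4 are the buffer and parsed IPv4 header):

    #include <vnet/fib/fib_table.h>
    #include <vnet/fib/fib_entry.h>
    #include <vnet/fib/ip4_fib.h>

    /* Sketch: test that the destination IP is local in the FIB of the
     * interface the packet arrived on, rather than with an unscoped
     * (table-0) address check. */
    static int
    dip_is_local_in_rx_vrf (vlib_buffer_t * b, ip4_header_t * ip4)
    {
      u32 fib_index = fib_table_get_index_for_sw_if_index
	(FIB_PROTOCOL_IP4, vnet_buffer (b)->sw_if_index[VLIB_RX]);
      fib_node_index_t fei = ip4_fib_table_lookup
	(ip4_fib_get (fib_index), &ip4->dst_address, 32);
      return (fei != FIB_NODE_INDEX_INVALID
	      && (fib_entry_get_flags (fei) & FIB_ENTRY_FLAG_LOCAL));
    }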

[vpp-dev] Reporting stale tunnel via LINK DOWN event — ok or not?

2020-03-10 Thread Nick Zavaritsky
Hi, at EMnify we are working to extend GTPU support with features required in mobile networks. According to the GTPU specification, if a received packet doesn't match an existing tunnel (unknown DIP+TEID), an error indication is sent back. When an error indication is received, we have to tell the
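
One way to surface this (a sketch, assuming the tunnel is modelled as a hw interface the way gtpu tunnels are) is to clear the link-up flag when an error indication marks the tunnel stale, so routing and control-plane agents that listen for link events react:

    /* Sketch: signal "tunnel stale" by taking the tunnel's link down;
     * t->hw_if_index is the tunnel's hw interface index. */
    vnet_main_t *vnm = vnet_get_main ();
    vnet_hw_interface_set_flags (vnm, t->hw_if_index,
				 0 /* clears VNET_HW_INTERFACE_FLAG_LINK_UP */);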

[vpp-dev] DPO leak in various tunnel types (gtpu, geneve, vxlan, ...)

2020-04-21 Thread Nick Zavaritsky
Dear VPP hackers, We are spawning and destroying GTPU tunnels at a high rate. Only 10K tunnels ever exist simultaneously in our test. With default settings, we observe an out-of-memory error in load_balance_create after approximately 0.5M tunnel create commands. Apparently, load balancers are
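
A sketch of the pattern at fault (assuming a tunnel struct holding a dpo_id_t for its forwarding, as the gtpu/vxlan tunnels do): every dpo_set()/dpo_stack() takes a lock on the target, so tunnel deletion must drop it.

    /* Sketch: without this, the load-balance referenced by the DPO
     * keeps a non-zero lock count forever, and load_balance_create()
     * eventually exhausts its pool. my_tunnel_t is a hypothetical
     * stand-in for the per-tunnel struct. */
    static void
    tunnel_del_cleanup (my_tunnel_t * t)
    {
      dpo_reset (&t->next_dpo);	/* unlock + invalidate our reference */
    }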

Re: [vpp-dev] DPO leak in various tunnel types (gtpu, geneve, vxlan, ...)

2020-05-06 Thread Nick Zavaritsky
before I merge. /neale From: vpp-dev@lists.fd.io on behalf of Nick Zavaritsky <nick.zavarit...@emnify.com> Date: Tuesday 21 April 2020 at 14:57 To: vpp-dev@lists.fd.io Subject: [vpp-dev

[vpp-dev] Q about VPP NAT

2020-09-10 Thread Nick Zavaritsky
Dear VPP hackers, I need your advice concerning configuring and possibly extending the NAT in VPP. We are currently using nat44 in endpoint-dependent mode. We are witnessing TCP sessions piling up even though clients close connections gracefully. These lingering sessions are
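
For reference, a sketch of the lifecycle we expected (illustrative only, not the nat plugin's code): once FINs have been seen in both directions, the session should be aged on the short transitory timeout rather than the established one.

    /* Illustrative state/timeout selection, not nat plugin code. */
    typedef enum { SESS_ESTABLISHED, SESS_TRANSITORY } sess_state_t;

    static inline u32
    sess_timeout (sess_state_t s, u32 tcp_established, u32 tcp_transitory)
    {
      /* a graceful close (FIN both ways) should move the session to
       * SESS_TRANSITORY, after which it expires quickly */
      return (s == SESS_ESTABLISHED) ? tcp_established : tcp_transitory;
    }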

[vpp-dev] How to monitor memory usage in VPP?

2020-11-23 Thread Nick Zavaritsky
Dear VPP hackers, As far as I am aware, the only way to get memory usage is via the `show memory` CLI. Ligato does exactly that. The stats segment exposes the memory usage of the stats segment itself. This is implemented by asking the mspace about the memory usage periodically. Internally, mspace_mallinfo
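
A sketch of what that periodic sampling boils down to (assuming vpp's dlmalloc build, where a heap is an mspace; `heap` here is whichever mspace you are inspecting):

    #include <vppinfra/dlmalloc.h>

    /* Sketch: mspace_mallinfo() is the primitive `show memory`
     * ultimately reports from; uordblks/fordblks are the bytes held
     * in allocated vs. free chunks. */
    struct dlmallinfo mi = mspace_mallinfo (heap);
    u64 used_bytes = mi.uordblks;
    u64 free_bytes = mi.fordblks;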

Re: [vpp-dev] Core Load Calculation

2020-12-15 Thread Nick Zavaritsky
We’ve been using flamegraphs quite successfully to see how much leeway there is. Ex: https://gist.githack.com/mejedi/d5d094df63faba66d413a677fcef26e3/raw/95294d36c4b180ba6741d793bf345041b00af48e/g.svg On 15 Dec 2020, at 19:53, Ramkumar B via lists.fd.io wrote: Hello All, I'm trying to

Re: [vpp-dev] Bihash is considered thread-safe but probably shouldn't

2021-11-04 Thread Nick Zavaritsky
t it’s likely to make the pain go away. Bucket-level reader-locks would involve adding Avogadro’s number of atomic ops to the predominant case. I’m pretty sure that’s a non-starter. FWIW... Dave From: vpp-dev@lists.fd.io On

[vpp-dev] Bihash is considered thread-safe but probably shouldn't

2021-11-01 Thread Nick Zavaritsky
Hello bihash experts! There's an old thread claiming that bihash lookup can produce a value=-1 under intense concurrent add/delete activity: https://lists.fd.io/g/vpp-dev/message/15606 We had a seemingly related crash recently when a lookup in snat_main.flow_hash yielded a value=-1 which was
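
The reader-side workaround this implies is to treat the empty-slot sentinel as a miss (a sketch using an 8_8 bihash for illustration; snat_main.flow_hash uses a wider key, but the pattern is the same, with `h` being the table pointer):

    #include <vppinfra/bihash_8_8.h>

    /* Sketch: under concurrent add/delete a search can "succeed" yet
     * hand back the empty-slot sentinel (~0), so re-check the value. */
    clib_bihash_kv_8_8_t kv, result;
    kv.key = key;
    if (clib_bihash_search_8_8 (h, &kv, &result) == 0
	&& result.value != ~0ULL)
      {
	/* genuine hit: safe to use result.value */
      }
    else
      {
	/* treat as a miss */
      }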