names, etc.
Regards,
John
From: vpp-dev@lists.fd.io On Behalf Of Nick Zavaritsky
Sent: Wednesday, February 26, 2020 12:23 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VRF-aware bypass nodes
Hi,
According to 3GPP 29.281, the TEIDs used in a GTPU tunnel can differ in the
"up" and "down" directions. At EMnify we depend on this feature.
We developed a patch but it changes API in incompatible ways, excerpt:
@@ -38,8 +39,9 @@ define gtpu_add_del_tunnel
vl_api_interface_index_t
Hi,
There are multiple kinds of bypass nodes in vpp. Bypass nodes intercept packets
matching certain criteria and pass them directly to the protocol handler node.
I am going to use GTPU as the illustrative example.
Bypass node SHOULD intercept packets with destination IP matching a local
Hi,
at EMnify we are working to extend GTPU support with features required
in mobile networks. According to the GTPU specification, if a received packet
doesn't match an existing tunnel (unknown DIP+TEID), an error indication is
sent back. When an error indication is received, we have to tell the
Dear VPP hackers,
We are spawning and destroying GTPU tunnels at a high rate. Only 10K tunnels
ever exist simultaneously in our test.
With default settings, we observe an out-of-memory error in load_balance_create
after approximately 0.5M tunnel create commands. Apparently, load balancers are
before I merge.
/neale
From: vpp-dev@lists.fd.io on behalf of Nick Zavaritsky <nick.zavarit...@emnify.com>
Date: Tuesday 21 April 2020 at 14:57
To: vpp-dev@lists.fd.io
Subject: [vpp-dev
Dear VPP hackers,
I need your advice concerning configuring and possibly extending the NAT
in VPP.
We are currently using nat44 in endpoint-dependent mode. We are witnessing TCP
sessions piling up even though clients close connections gracefully. These
lingering sessions are
Dear VPP hackers,
As far as I am aware, the only way to get memory usage is via the `show memory`
CLI. Ligato does exactly that.
The stats segment exposes memory usage, including that of the stats segment
itself. This is implemented by periodically asking the mspace about its memory
usage. Internally, mspace_mallinfo
We’ve been using flamegraphs quite successfully to see how much leeway there is.
Ex:
https://gist.githack.com/mejedi/d5d094df63faba66d413a677fcef26e3/raw/95294d36c4b180ba6741d793bf345041b00af48e/g.svg
On 15 Dec 2020, at 19:53, Ramkumar B via lists.fd.io
wrote:
Hello All,
I'm trying to
t it’s likely to make the pain
go away.
Bucket-level reader-locks would involve adding Avogadro’s number of atomic ops
to the predominant case. I’m pretty sure that’s a non-starter.
FWIW... Dave
From: vpp-dev@lists.fd.io On
Hello bihash experts!
There's an old thread claiming that bihash lookup can produce a value=-1 under
intense concurrent add/delete activity:
https://lists.fd.io/g/vpp-dev/message/15606
We had a seemingly related crash recently when a lookup in snat_main.flow_hash
yielded a value=-1 which was