mjoffre,

Do you get any errors (show errors)?
What does the binding table look like?
Tried tracing packets through the graph?
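
For example, from the VPP CLI (the NAT64 show commands below are as I recall them from the nat plugin; adjust the trace input node if you're not using DPDK):

  show errors
  show nat64 bib all
  clear trace
  trace add dpdk-input 50
  show trace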

Cheers,
Ole

> On 3 Dec 2025, at 07:05, mjoffre via lists.fd.io 
> <[email protected]> wrote:
> 
> 
> Hey everyone,
> 
> I'm currently using the NAT64 plugin in VPP and am experiencing severe 
> latency issues when connecting to IPv6→IPv4 destinations. As shown below, 
> throughput is extremely low and packet retransmissions are high:
> 
> iperf3 -c spd-uswb.hostkey.com -p 5205
> Connecting to host spd-uswb.hostkey.com, port 5205
> [  5] local 2604:2dc0:400:c001:0:2:0:1c port 40092 connected to 
> 64:ff9b::8b3c:a058 port 5205
> [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
> [  5]   0.00-1.00   sec  58.6 KBytes   480 Kbits/sec   24   2.79 KBytes
> [  5]   1.00-2.00   sec  68.3 KBytes   560 Kbits/sec    9   6.97 KBytes
> [  5]   2.00-3.00   sec  41.8 KBytes   343 Kbits/sec   12   4.18 KBytes
> [  5]   3.00-4.00   sec  85.1 KBytes   697 Kbits/sec    8   4.18 KBytes
> [  5]   4.00-5.00   sec  40.4 KBytes   331 Kbits/sec    8   4.18 KBytes
> [  5]   5.00-6.00   sec  41.8 KBytes   343 Kbits/sec    7   5.58 KBytes
> [  5]   6.00-7.00   sec  41.8 KBytes   343 Kbits/sec    8   5.58 KBytes
> [  5]   7.00-8.00   sec  40.4 KBytes   331 Kbits/sec    8   5.58 KBytes
> [  5]   8.00-9.00   sec  83.7 KBytes   686 Kbits/sec    9   1.39 KBytes
> [  5]   9.00-10.00  sec  41.8 KBytes   343 Kbits/sec    7   1.39 KBytes
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [  5]   0.00-10.00  sec   544 KBytes   446 Kbits/sec  100             sender
> [  5]   0.00-10.04  sec   483 KBytes   394 Kbits/sec                  receiver
> When running IPv6→IPv6 traffic, I can easily achieve >1 Gbps throughput, so 
> the issue appears specific to NAT64 processing. I'm not sure what might be 
> causing the degradation and would appreciate any pointers to help diagnose or 
> resolve this.
> 
> For reference, here is my VPP configuration (v25.10):
> 
> unix {
>   nosyslog
>   nodaemon
>   log /var/log/vpp/vpp.log
>   full-coredump
>   cli-listen /run/vpp/cli.sock
>   startup-config /etc/vpp/setup.gate
> }
> 
> vhost-user {
>   dont-dump-memory
> }
> 
> api-trace {
>   on
> }
> 
> statseg {
>   size 1024M
>   per-node-counters on
> }
> 
> socksvr {
>   default
>   socket-name /run/vpp/api.sock
> }
> 
> cpu {
>   main-core 4
>   corelist-workers 5-6,68-71
> }
> 
> buffers {
>   buffers-per-numa 256000
>   page-size 1G
> }
> 
> dpdk {
>   log-level notice
> 
>   dev 0000:81:00.0
> }
> 
> memory {
>   main-heap-size 16G
>   main-heap-page-size 1G
>   default-hugepage-size 1G
> }
> 
> ip6 {
>   heap-size 4G
>   hash-buckets 2097152
> }
> 
> punt {
>   socket /run/vpp/punt.sock
> }
> 
> plugins {
>   plugin default { enable }
> 
>   plugin dpdk_plugin.so { enable }
>   plugin nat_plugin.so { enable }
>   plugin acl_plugin.so { enable }
>   plugin memif_plugin.so { enable }
> 
>   plugin cdp_plugin.so { disable }
>   plugin gtpu_plugin.so { disable }
>   plugin l2e_plugin.so { disable }
>   plugin igmp_plugin.so { disable }
>   plugin stn_plugin.so { disable }
> }
> 
> logging {
>   default-log-level info
>   default-syslog-log-level notice
> 
>   class dpdk { rate-limit 100 level notice }
>   class interface { rate-limit 1000 level info }
>   class nat { rate-limit 100 level info }
> }
> Thanks in advance for any guidance!
> 
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#26607): https://lists.fd.io/g/vpp-dev/message/26607
-=-=-=-=-=-=-=-=-=-=-=-