Sounds like the issue fixed in [0] (so after 21.06).

Vratko.

[0] https://gerrit.fd.io/r/c/vpp/+/33018

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of g.good...@gmail.com
Sent: Thursday, 2021-August-19 14:58
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp gets stuck after bridge and loop interfaces are created 
and snat is configured #nat44


vpp version: 21.06
The vpp main core gets stuck after bridge and loop interfaces are created and 
snat is configured. Here is my topology:

/--------------\            /--------------\            /--------------\
|              |            |              |            |              |
|    client  enp0s8 ---- GE0/2/0   vpp  GE0/5/0 ---- enp0s10  server   |
|              |            |              |            |              |
\--------------/            \--------------/            \--------------/
               192.0.2.0/24                192.168.3.0/24
And here is my configuration:
nat44 enable
nat44 forwarding enable
nat44 add int address GigabitEthernet5/0/0
set int nat44 in GigabitEthernet2/0/0 out GigabitEthernet5/0/0 output-feature
create tap id 0
set interface state tap0 up
set int l2 bridge GigabitEthernet2/0/0 1
set int l2 bridge tap0 1
create loopback interface
set int l2 bridge loop0 1 bvi
set int ip addr loop0 192.0.2.11/24
set int state loop0 up

vpp gets stuck after a few pings from the client to the server. Here is the 
backtrace from gdb:
#0  0x00007f980557f0d1 in internal_mallinfo (m=0x7f97bb18b040) at 
/usr/src/debug/vpp-0.1/src/vppinfra/dlmalloc.c:2099
#1  0x00007f98055707d7 in mspace_mallinfo (msp=<optimized out>) at 
/usr/src/debug/vpp-0.1/src/vppinfra/dlmalloc.c:4803
#2  clib_mem_get_heap_usage (heap=<optimized out>, 
usage=usage@entry=0x7f97bb05df40) at 
/usr/src/debug/vpp-0.1/src/vppinfra/mem_dlmalloc.c:475
#3  0x000055c1903304fa in do_stat_segment_updates (sm=0x55c1903c7ac0 
<stat_segment_main>) at /usr/src/debug/vpp-0.1/src/vpp/stats/stat_segment.c:661
#4  stat_segment_collector_process (vm=0x7f98056b2680 <vlib_global_main>, 
rt=<optimized out>, f=<optimized out>) at 
/usr/src/debug/vpp-0.1/src/vpp/stats/stat_segment.c:761
#5  0x00007f9805648897 in vlib_process_bootstrap (_a=<optimized out>) at 
/usr/src/debug/vpp-0.1/src/vlib/main.c:1477
#6  0x00007f9805587d80 in clib_calljmp () from /lib64/libvppinfra.so.0.1
#7  0x00007f97bd38add0 in ?? ()

After debugging I found the reason: vpp counts the packets going through snat 
and stores the result in nm->counters.fastpath.in2out.icmp, which is a vector. 
The size of the vector is based on the interface index; with my configuration 
above the size is 3, but the loop and bridge interfaces I created afterwards 
both got an index bigger than 3. When packets pass through snat, vpp thinks the 
packet was received on the loop interface and then writes past the end of the 
vector.
And my questions are:
1. Based on my configuration above, is it correct that packet counts are 
attributed to the loop interface?
2. Besides avoiding the misconfiguration, how can this be fixed?
Thanks a lot.
