Re: [vpp-dev] VPP LCP Route Not Reflecting After Interface State Change

2023-03-26 Thread Christopher Adigun
I was not able to use linux_cp/linux_nl (vpp 23.02-release) because, with
the same config, the interfaces were not even coming up; that is why I
switched back to lcpng_if_plugin and lcpng_nl_plugin.
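For reference, a minimal startup.conf sketch for switching between the two
plugin sets (the .so names follow the plugin names mentioned above; adjust
paths to your install):

    plugins {
        plugin lcpng_if_plugin.so { disable }
        plugin lcpng_nl_plugin.so { disable }
        plugin linux_cp_plugin.so { enable }
        plugin linux_nl_plugin.so { enable }
    }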

On Sun, Mar 26, 2023 at 11:38 AM Pim van Pelt via lists.fd.io  wrote:

> [Quoted message trimmed; Pim's reply and the original report appear in full below.]

Re: [vpp-dev] VPP LCP Route Not Reflecting After Interface State Change

2023-03-26 Thread Pim van Pelt via lists.fd.io
Hoi,

Does the same behavior happen with linux_cp_plugin and linux_nl_plugin
enabled instead? I saw a change in linux_cp that will walk the fib and
remove routes associated with down interfaces; and I think this is because
FRR does not remove them while Bird does. Can you try with
linux_cp/linux_nl please?

Relevant reading:
https://gerrit.fd.io/r/c/vpp/+/35529
https://gerrit.fd.io/r/c/vpp/+/35530
https://gerrit.fd.io/r/c/vpp/+/35531

groet,
Pim
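For reference, one way to compare the kernel view with VPP's own FIB after
the link change is the following (a sketch; output format varies by release):

    vppctl show lcp
    vppctl show ip fib 10.0.9.0/24
    ip route show 10.0.9.0/24

If the changes referenced above are in effect, the VPP FIB should drop the
path via the down interface even when FRR leaves the kernel route in place.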

On Sun, Mar 26, 2023 at 5:33 PM Christopher Adigun wrote:

> [Quoted original message trimmed; it appears in full below.]

[vpp-dev] VPP LCP Route Not Reflecting After Interface State Change

2023-03-26 Thread Christopher Adigun
Hi,

I am facing an issue with the LCP route update when a particular interface
changes state (i.e. when I manually shut it down and when I bring it up).

Before shutting the interface down, below is the state in both the FRR and
Linux tables:

FRR table:

ingress-node-vpp-58dcb69b5f-g9rzp:/# vtysh

Hello, this is FRRouting (version 8.5_git).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

ingress-node-vpp-58dcb69b5f-g9rzp# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
   O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
   T - Table, v - VNC, V - VNC-Direct, A - Babel, F - PBR,
   f - OpenFabric,
   > - selected route, * - FIB route, q - queued, r - rejected, b -
backup
   t - trapped, o - offload failure

K>* 0.0.0.0/0 [0/0] via 169.254.1.1, eth0, 00:29:12
C>* 10.0.0.142/32 is directly connected, eth0, 00:29:12
O   10.0.5.0/24 [110/10] is directly connected, dpdk0, weight 1, 00:29:12
C>* 10.0.5.0/24 is directly connected, dpdk0, 00:29:12

O>* 10.0.9.0/24 [110/40] via 192.168.100.2, gre0, weight 1, 00:24:19
  *                      via 192.168.100.6, gre1, weight 1, 00:24:19
K>* 169.254.1.1/32 [0/0] is directly connected, eth0, 00:29:12
O   192.168.100.0/30 [110/10] is directly connected, gre0, weight 1,
00:24:31
C>* 192.168.100.0/30 is directly connected, gre0, 00:24:31
O   192.168.100.4/30 [110/10] is directly connected, gre1, weight 1,
00:24:33
C>* 192.168.100.4/30 is directly connected, gre1, 00:24:33
O>* 192.168.100.8/30 [110/20] via 192.168.100.2, gre0, weight 1, 00:24:19
O>* 192.168.100.12/30 [110/20] via 192.168.100.6, gre1, weight 1, 00:24:19
O>* 192.168.100.16/30 [110/30] via 192.168.100.2, gre0, weight 1, 00:24:19
O>* 192.168.100.20/30 [110/30] via 192.168.100.6, gre1, weight 1, 00:24:19

Linux table:
ingress-node-vpp-58dcb69b5f-g9rzp:/# ip r
default via 169.254.1.1 dev eth0
10.0.5.0/24 dev dpdk0 proto kernel scope link src 10.0.5.16


10.0.9.0/24 nhid 22 proto ospf metric 20
        nexthop via 192.168.100.2 dev gre0 weight 1
        nexthop via 192.168.100.6 dev gre1 weight 1
169.254.1.1 dev eth0 scope link
192.168.100.0/30 dev gre0 proto kernel scope link src 192.168.100.1
192.168.100.4/30 dev gre1 proto kernel scope link src 192.168.100.5
192.168.100.8/30 nhid 18 via 192.168.100.2 dev gre0 proto ospf metric 20
192.168.100.12/30 nhid 23 via 192.168.100.6 dev gre1 proto ospf metric 20
192.168.100.16/30 nhid 18 via 192.168.100.2 dev gre0 proto ospf metric 20
192.168.100.20/30 nhid 23 via 192.168.100.6 dev gre1 proto ospf metric 20

When I manually shut gre0 down, it is removed from the nexthop:

ingress-node-vpp-58dcb69b5f-g9rzp:/# vtysh

Hello, this is FRRouting (version 8.5_git).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

ingress-node-vpp-58dcb69b5f-g9rzp#
ingress-node-vpp-58dcb69b5f-g9rzp#
ingress-node-vpp-58dcb69b5f-g9rzp# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
   O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
   T - Table, v - VNC, V - VNC-Direct, A - Babel, F - PBR,
   f - OpenFabric,
   > - selected route, * - FIB route, q - queued, r - rejected, b -
backup
   t - trapped, o - offload failure

K>* 0.0.0.0/0 [0/0] via 169.254.1.1, eth0, 00:37:00
C>* 10.0.0.142/32 is directly connected, eth0, 00:37:00
O   10.0.5.0/24 [110/10] is directly connected, dpdk0, weight 1, 00:37:00
C>* 10.0.5.0/24 is directly connected, dpdk0, 00:37:00
O>* 10.0.9.0/24 [110/40] via 192.168.100.6, gre1, weight 1, 00:00:06
K>* 169.254.1.1/32 [0/0] is directly connected, eth0, 00:37:00
O>* 192.168.100.0/30 [110/60] via 192.168.100.6, gre1, weight 1, 00:00:06
O   192.168.100.4/30 [110/10] is directly connected, gre1, weight 1,
00:32:21
C>* 192.168.100.4/30 is directly connected, gre1, 00:32:21
O>* 192.168.100.8/30 [110/50] via 192.168.100.6, gre1, weight 1, 00:00:06
O>* 192.168.100.12/30 [110/20] via 192.168.100.6, gre1, weight 1, 00:32:07
O>* 192.168.100.16/30 [110/40] via 192.168.100.6, gre1, weight 1, 00:00:06
O>* 192.168.100.20/30 [110/30] via 192.168.100.6, gre1, weight 1, 00:32:07

ingress-node-vpp-58dcb69b5f-g9rzp:/# ip link set dev gre0 down
ingress-node-vpp-58dcb69b5f-g9rzp:/#
ingress-node-vpp-58dcb69b5f-g9rzp:/#
ingress-node-vpp-58dcb69b5f-g9rzp:/#
ingress-node-vpp-58dcb69b5f-g9rzp:/# ip r
default via 169.254.1.1 dev eth0
10.0.5.0/24 dev dpdk0 proto kernel scope link src 10.0.5.16
10.0.9.0/24 nhid 23 via 192.168.100.6 dev gre1 proto ospf metric 20
169.254.1.1 dev eth0 scope link
192.168.100.0/30 nhid 23 via 192.168.100.6 dev gre1 proto ospf metric 20
192.168.100.4/30 dev gre1 proto kernel scope link src 192.168.100.5
192.168.100.8/30 nhid 23 via 192.168.100.6 dev gre1 proto ospf metric 20
192.168.100.12/30 nhid 23 via 192.168.100.6 dev gre1 proto ospf metric 20
192.168.100.16/30 nhid 23 via 192.168.100.6 dev gre1 proto ospf metric 20
192.168.100.20/30 nhid 23 via 192.168.100.6 dev gre1 proto ospf metric 20

But 

Re: [vpp-dev] VPP not dropping packets with incorrect vlan tags on untagged interface

2023-03-19 Thread Krishna, Parameswaran via lists.fd.io
Hi,
Did anyone get a chance to look at this issue? Any input would be of great
help. Please let me know if any additional information is needed. Thank you.

Best regards,
Parameswaran




[vpp-dev] VPP not dropping packets with incorrect vlan tags on untagged interface

2023-03-15 Thread Krishna, Parameswaran via lists.fd.io
Hi Experts,

I'm using VPP v22.02.0-26. I have a physical interface 
TwentyFiveGigabitEthernet3/0/0/4096 in bridge-domain 10 (untagged) and I have 
configured "l2 efp-filter" on all the interfaces.
I expected that at the ingress of interface 
TwentyFiveGigabitEthernet3/0/0/4096 only untagged packets or packets with VLAN 
tag 10 would be accepted, and packets with any VLAN tag other than 10 would be 
dropped. But I observed that a packet with VLAN tag 11 was also accepted 
and flooded on bridge-domain 10.

I tried creating a sub-interface with the untagged option to see if it would help 
in achieving the expected behavior, but I'm seeing the below error:
DBGvpp# create sub-interfaces TwentyFiveGigabitEthernet3/0/0/4096 10 untagged
create sub-interfaces: vlan is already in use

Is there a way to achieve the behavior I'm expecting? Please let me know.
Thanks in advance.

Best regards,
Parameswaran Krishnamurthy
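One thing that may be worth trying (a sketch only, untested here; the
sub-interface id 100 is arbitrary) is to give the tagged traffic an explicit
exact-match sub-interface, so the EFP filter has a tag configuration to
compare against:

DBGvpp# create sub-interfaces TwentyFiveGigabitEthernet3/0/0/4096 100 dot1q 10 exact-match
DBGvpp# set interface l2 bridge TwentyFiveGigabitEthernet3/0/0/4096.100 10
DBGvpp# set interface l2 efp-filter TwentyFiveGigabitEthernet3/0/0/4096.100
DBGvpp# set interface state TwentyFiveGigabitEthernet3/0/0/4096.100 up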

Trace and show outputs
===

DBGvpp# show bridge-domain
  BD-ID  Index  BSN  Age(min)  Learning  U-Forwrd  UU-Flood  Flooding  ARP-Term  arp-ufwd  Learn-co  Learn-li  BVI-Intf
   10      1     0     off        on        on       flood      on       off       off        6       16777216    N/A
   11      2     0     off        on        on       flood      on       off       off        0       16777216    N/A

DBGvpp# show bridge-domain 10 detail
  BD-ID  Index  BSN  Age(min)  Learning  U-Forwrd  UU-Flood  Flooding  ARP-Term  arp-ufwd  Learn-co  Learn-li  BVI-Intf
   10      1     0     off        on        on       flood      on       off       off        6       16777216    N/A
 SPAN (span-l2-input)
   INPUT_CLASSIFY (l2-input-classify)
   INPUT_FEAT_ARC (l2-input-feat-arc)
 POLICER_CLAS (l2-policer-classify)
  ACL (l2-input-acl)
VPATH (vpath-input-l2)
L2_IP_QOS_RECORD (l2-ip-qos-record)
  VTR (l2-input-vtr)
LEARN (l2-learn)
   RW (l2-rw)
  FWD (l2-fwd)
 UU_FLOOD (l2-flood)
FLOOD (l2-flood)
 XCONNECT (l2-output)

           Interface            If-idx  ISN  SHG  BVI  TxFlood  VLAN-Tag-Rewrite
TwentyFiveGigabitEthernet3/0/0    1      1    0    -      *     none
TwentyFiveGigabitEthernet3/0/0    2      1    0    -      *     none
TwentyFiveGigabitEthernet3/0/0    3      1    0    -      *     none
DBGvpp#
DBGvpp# show interface TwentyFiveGigabitEthernet3/0/0/4096
  Name                             Idx  State  MTU (L3/IP4/IP6/MPLS)  Counter     Count
TwentyFiveGigabitEthernet3/0/0/4    3   up     8996/0/0/0             rx packets  2176
                                                                      rx bytes    458059
                                                                      tx packets  28514
                                                                      tx bytes    5434852
                                                                      drops       7
DBGvpp# show hardware-interfaces TwentyFiveGigabitEthernet3/0/0/4096
  Name                             Idx   Link  Hardware
TwentyFiveGigabitEthernet3/0/0/4    3    up    TwentyFiveGigabitEthernet3/0/0/4096
  Link speed: 10 Gbps
  RX Queues:
queue thread mode
0 main (0)   polling
  Ethernet address 4e:82:65:16:80:c6
  Mellanox ConnectX-4 Family
carrier up full duplex max-frame-size 9018  promisc
flags: admin-up promisc maybe-multiseg tx-offload intel-phdr-cksum 
rx-ip4-cksum
rx: queues 1 (max 1024), desc 1024 (min 0 max 65535 align 1)
tx: queues 1 (max 1024), desc 1024 (min 0 max 65535 align 1)
pci: device 15b3:a2d6 subsystem 15b3:0051 address :03:00.00 numa 0
switch info: name :03:00.0 domain id 0 port id 4096
max rx packet len: 65536
promiscuous: unicast on all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
   vlan-filter jumbo-frame scatter timestamp rss-hash
   buffer-split
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   outer-ipv4-cksum vxlan-tnl-tso gre-tnl-tso geneve-tnl-tso
   multi-segs mbuf-fast-free udp-tnl-tso ip-tnl-tso
tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv4 ipv6-tcp-ex
   ipv6-udp-ex ipv6-frag ipv6-tcp ipv6-udp ipv6-other
   ipv6-ex ipv6 l4-dst-only l4-src-only l3-dst-only 
l3-src-only
rss active:none
tx burst mode: Enhanced MPW + MULTI + TSO + SWP  + CSUM + METADATA
tx burst function: mlx5_rx_burst
rx burst mode: Scalar
rx burst function: 

[vpp-dev] VPP 23.06 release plan is available

2023-03-14 Thread Andrew Yourtchenko
Hi all,

I’ve prepared the 23.06 release plan - and linked it off the usual place on VPP 
wiki:

https://wiki.fd.io/view/VPP#Get_Involved

Tl;dr: release on the last Wednesday of June, RC2 two weeks prior, RC1 three weeks 
prior to RC2. Same logic as usual - post-RC2, only the fixes for issues found in 
CSIT; post-RC1, only the fixes plus specially agreed low-risk commits.

This kind of schedule seems to have worked pretty well, so I will keep it 
unless anyone gives a good reason not to.

Onwards to 23.06! :-)

--a /* your friendly 23.06 release manager */



[vpp-dev] vpp is dropping arp packets #vnet

2023-03-09 Thread Praveen Singh
Hi All,
I have brought up VPP 20.09 on a pod. It seems like VPP is unable to process 
ARP packets from an SR-IOV VF.
Can you please suggest a fix?
vpp# sh int
  Name    Idx  State  MTU (L3/IP4/IP6/MPLS)  Counter     Count
eth0       1   up     1500/0/0/0             rx packets  534137
                                             rx bytes    56252375
                                             tx packets  215
                                             tx bytes    9030
                                             drops       501332
                                             punt        32883
                                             ip4         84787
                                             tx-error    1
local0     0

vpp# sh node counters
   Count    Node         Reason
   117421   dpdk-input   no error
   416570   arp-reply    IP4 source address not local to subnet
   2        arp-input    IP4 destination address is unset
   66       ip4-glean    ARP requests sent
   137      ip4-arp      ARP requests throttled
   149      ip4-arp      ARP requests sent
   1        eth0-output  interface is down
Thanks,
Praveen
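The arp-reply reason "IP4 source address not local to subnet" suggests the
ARP senders' addresses do not fall inside any subnet configured on eth0. A
generic way to confirm (a sketch, not specific to this setup) is to compare
the interface addressing against a packet trace:

vpp# show interface address eth0
vpp# trace add dpdk-input 10
... let a few ARP packets arrive ...
vpp# show trace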




Re: [vpp-dev] vpp+nginx #vpp-hoststack

2023-03-06 Thread Florin Coras



nginx.conf
Description: Binary data




Re: [vpp-dev] VPP IPsec queries

2023-03-01 Thread Zhang, Fan

Hi,

You may try the vpp-swan plugin, which makes strongSwan offload IPsec
to VPP while keeping the IKE part to itself.

The project is still not perfect, but it takes care of child-SAs and
overlapped subnets.

As for getting an ipip interface instead of an ipsec interface - this is
correct behavior, as it allows sharing the tunnel implementation between
IPsec/wireguard/gso etc.

However, vpp-swan did not use the ipip interface feature in VPP, due to
the problem you noticed.

You may find vpp-swan in vpp/extra/strongswan/vpp_swan.

Regards,

Fan

On 3/1/2023 4:02 AM, Ashish Mittal wrote:

[Quoted message trimmed; Ashish's reply with its inline answers appears in full below.]

Re: [vpp-dev] VPP IPsec queries

2023-02-28 Thread Ashish Mittal
Hi Varun,

Pls find my inputs inline.

Regards

Ashish Mittal

On Sat, 25 Feb, 2023, 11:23 pm Varun Tewari,  wrote:

> Hello Team,
>
> I am new to VPP and probing this technology to build an IPSec responder
> for our use-cases.
> Our initial tests do show the performance might of VPP.
> However, on probing this further in depth, I noticed a few limitations, and
> I am writing to seek clarification around these.
> All my observations are for VPP 23.02, using VPP’s Ikev2 plugin. I am
> using a Linux box with strongSwan as the peer for my tests.
>
> My observations:
>
> 1.
> VPP does not seem to support multiple child-SAs (phase 2 SA, IPsec SA) within
> the same tunnel.
> A single IPsec SA works fine. An interface ipip0 gets created and SPD shows
> the correct binding (show ipsec all).
> However, when I bring up the second child-SA for a different TS, I see the
> SPD gets overwritten for the interface and the new child-SA gets installed,
> overwriting the previous one.
> For sure this is leading to traffic drop for the traffic hitting the first
> TS.
>
> Q: Is this by design, or have I got my config wrong in some way?
>
> Here is a quick output from VPP and strongswan:
> sudo swanctl --list-sas
> net-1: #11, ESTABLISHED, IKEv2, abb046c62a60c38a_i* dc95e079629854ca_r
>   local  'roadwarrior.vpn.example.com' @ 17.17.17.1[500]
>   remote 'vpp.home' @ 17.17.17.2[500]
>   AES_CBC-256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_2048
>   established 848s ago, reauth in 84486s
>   net-1: #16, reqid 16, INSTALLED, TUNNEL, ESP:AES_CBC-192/HMAC_SHA1_96/ESN
> installed 848s ago, rekeying in 84690s, expires in 85552s
> in  cec3d263,  24717 bytes,   107 packets,   687s ago
> out a1816d8f, 179718 bytes,   778 packets, 0s ago
> local  16.16.16.0/24
> remote 18.18.18.0/24
>   net-2: #17, reqid 17, INSTALLED, TUNNEL, ESP:AES_CBC-192/HMAC_SHA1_96/ESN
> installed 686s ago, rekeying in 84831s, expires in 85714s
> in  cd14add0, 122199 bytes,   529 packets, 2s ago
> out de989d78, 122199 bytes,   529 packets, 2s ago
> local  16.16.15.0/24
> remote 18.18.18.0/24
>
> vpp# show ipsec all
> [0] sa 2181038080 (0x8200) spi 3468939875 (0xcec3d263) protocol:esp
> flags:[esn anti-replay ]
> [1] sa 3254779904 (0xc200) spi 2709613967 (0xa1816d8f) protocol:esp
> flags:[esn anti-replay inbound ]
> [2] sa 2181038081 (0x8201) spi 3440684496 (0xcd14add0) protocol:esp
> flags:[esn anti-replay ]
> [3] sa 3254779905 (0xc201) spi 3734543736 (0xde989d78) protocol:esp
> flags:[esn anti-replay inbound ]
> SPD Bindings:
> ipip0 flags:[none]
>  output-sa:
>   [2] sa 2181038081 (0x8201) spi 3440684496 (0xcd14add0) protocol:esp
> flags:[esn anti-replay ]
>  input-sa:
>   [3] sa 3254779905 (0xc201) spi 3734543736 (0xde989d78) protocol:esp
> flags:[esn anti-replay inbound ]
> IPSec async mode: off
> vpp#
>
> All 4 SAs exist, however the SPD binding shows the latest 2, that
> overwrote the SAs for the previous TS leading to traffic drop.
>
AM=> The IKEv2 plugin is still in an experimental state. The behaviour you are
observing is unfortunately the current implementation. Consider it either a
bug or a simplification made for the experimental implementation.

>
>
> 2.
> Overlapping subnets between different Ipsec tunnel
>
> When Ikev2 completes, I see it creates an ipip interface and relevant
> Child-SAs and ties them to the interface to protect traffic.
> So far all is good.
> Now, we add a route into VPP to route the traffic via this ipip interface
> for each of the source subnets that are expected to be protected by the
> tunnel.
> This works fine as long as I keep the subnets distinct.
>
> Q: What’s the usual strategy when we have overlapping subnets in two
> distinct tunnels ?
> T1: SrcSubnet1 DestinationSubnet1
> T2: SrcSubnet1 DestinationSubnet2
>
> When T1 is brought up, we add a FIB entry for SrcSubnet1 via ipipT1 and
> things works fine.
> When T2 comes up, ipipT2 is created and now I need to add FIB entry for
> SrcSubnet1 via ipipT2 and as expected things break here.
> AM=> I am not sure, but it can be done via ABF instead of FIB entries. I
> have never tried it, but ABF is used for that - see the sketch after this
> message.
>
> 3.
> IpIp vs Ipsec interface
> For route-based VPP IPsec, I see two options as per the documentation.
> The doc says Ikev2 will create an ipsec interface; however, it creates an
> ipip interface. Is this expected?
> The interface works okay for me, but I wasn't sure why the difference.
> On probing the code further, I do see the Ikev2 plugin creating an ipip
> interface, not an ipsec interface as the doc says.
> AM=> The latest implementation uses an ipip tunnel instead of an ipsec
> interface. You are looking at older documentation, so kindly check the
> latest one.
>
> Thank you in advance for all your comments here.
>
> *शुभ कामनाएँ*,
> Varun Tewari
>
>
> 
>
>
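For reference, the ABF approach mentioned above would look roughly like
this (a sketch with placeholder prefixes and indices; the ACL can also be
created through the acl-plugin API, and exact CLI syntax varies by release):

vpp# set acl-plugin acl permit src <SrcSubnet1> dst <DestinationSubnet1>
vpp# abf policy add id 1 acl 0 via ipipT1
vpp# abf attach ip4 policy 1 <ingress-interface>

Because the policy matches on both source and destination, two tunnels
sharing SrcSubnet1 no longer collide the way plain FIB entries do.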


Re: [vpp-dev] VPP IPsec queries

2023-02-28 Thread Mrityunjay Kumar
Can you send your queries in a shorter mail? It's too long a mail.



On Wed, 1 Mar, 2023, 4:43 am Srikanth Akula,  wrote:

> [Quoted message trimmed; Srikanth's mail and the original queries appear in full below.]


Re: [vpp-dev] VPP IPsec queries

2023-02-28 Thread Srikanth Akula
Hi Team,

Any help is appreciated on this topic.
We are trying to do a POC with IPSec+Ikev plugin of vpp , any
suggestions/pointers would be helpful in this regard.

Regards,
Srikanth

On Sat, Feb 25, 2023 at 9:53 AM Varun Tewari  wrote:

> [Quoted original message trimmed; it appears in full below.]




[vpp-dev] VPP rx q0 rx buf alloc failure

2023-02-28 Thread sunil kumar
Hi,

We are using the dpdk vmxnet3 driver for our use case. We are getting the below
errors in "show hardware-interfaces":

rx q0 rx buf alloc failure 8846166
rx q1 rx buf alloc failure 6245832

Whenever we check "show dpdk buffers", we see very few mbufs are free,
while "show buffers" shows that most vlib_buffers are free.

Based on the code, we understand that VPP manages the vlib_buffers, which are
mapped to mbufs, and allocates almost all mbufs from the mempool. Is my
understanding right?

Is this the reason we are getting this problem? Can anybody please suggest?

Thanks,

Sunil
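If the pool is simply undersized for the configured rx queue depths, one
common mitigation (a sketch; the right value depends on queue count and
workload) is to grow VPP's buffer pool in startup.conf, since the DPDK
mbufs are backed by vlib buffers:

buffers {
  buffers-per-numa 131072
}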




Re: [vpp-dev] [vpp-build] VPP build error on Ubuntu 22.04

2023-02-27 Thread Dave Wallace

Thanks for letting us know.
-daw-

On 2/26/2023 8:49 AM, Jens Rösiger via lists.fd.io wrote:

Hi Dave,

I have applied the patch and the build process has no more errors. All 
.deb packages are created.

Thank you very much.

--
Jens







Re: [vpp-dev] [vpp-build] VPP build error on Ubuntu 22.04

2023-02-26 Thread Jens Rösiger via lists.fd.io
Hi Dave,

I have applied the patch and the build process has no more errors. All .deb 
packages are created.
Thank you very much.

--
Jens




[vpp-dev] VPP IPsec queries

2023-02-25 Thread Varun Tewari
Hello Team,

I am new to VPP and probing this technology to build an IPsec responder for our 
use-cases.
Our initial tests do show the performance might of VPP.
However, on probing this further in depth, I noticed a few limitations, and I am 
writing to seek clarification around these.
All my observations are for VPP 23.02, using VPP’s Ikev2 plugin. I am using a 
Linux box with strongSwan as the peer for my tests.

My observations:

1.
VPP does not seem to support multiple child-SAs (phase 2 SA, IPsec SA) within the 
same tunnel.
A single IPsec SA works fine. An interface ipip0 gets created and SPD shows the 
correct binding (show ipsec all).
However, when I bring up the second child-SA for a different TS, I see the SPD 
gets overwritten for the interface and the new child-SA gets installed, 
overwriting the previous one.
For sure this is leading to traffic drop for the traffic hitting the first TS.

Q: Is this by design, or have I got my config wrong in some way?

Here is a quick output from VPP and strongswan:
sudo swanctl --list-sas
net-1: #11, ESTABLISHED, IKEv2, abb046c62a60c38a_i* dc95e079629854ca_r
  local  'roadwarrior.vpn.example.com' @ 17.17.17.1[500]
  remote 'vpp.home' @ 17.17.17.2[500]
  AES_CBC-256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_2048
  established 848s ago, reauth in 84486s
  net-1: #16, reqid 16, INSTALLED, TUNNEL, ESP:AES_CBC-192/HMAC_SHA1_96/ESN
installed 848s ago, rekeying in 84690s, expires in 85552s
in  cec3d263,  24717 bytes,   107 packets,   687s ago
out a1816d8f, 179718 bytes,   778 packets, 0s ago
local  16.16.16.0/24
remote 18.18.18.0/24
  net-2: #17, reqid 17, INSTALLED, TUNNEL, ESP:AES_CBC-192/HMAC_SHA1_96/ESN
installed 686s ago, rekeying in 84831s, expires in 85714s
in  cd14add0, 122199 bytes,   529 packets, 2s ago
out de989d78, 122199 bytes,   529 packets, 2s ago
local  16.16.15.0/24
remote 18.18.18.0/24

vpp# show ipsec all
[0] sa 2181038080 (0x8200) spi 3468939875 (0xcec3d263) protocol:esp 
flags:[esn anti-replay ]
[1] sa 3254779904 (0xc200) spi 2709613967 (0xa1816d8f) protocol:esp 
flags:[esn anti-replay inbound ]
[2] sa 2181038081 (0x8201) spi 3440684496 (0xcd14add0) protocol:esp 
flags:[esn anti-replay ]
[3] sa 3254779905 (0xc201) spi 3734543736 (0xde989d78) protocol:esp 
flags:[esn anti-replay inbound ]
SPD Bindings:
ipip0 flags:[none]
 output-sa:
  [2] sa 2181038081 (0x8201) spi 3440684496 (0xcd14add0) protocol:esp 
flags:[esn anti-replay ]
 input-sa:
  [3] sa 3254779905 (0xc201) spi 3734543736 (0xde989d78) protocol:esp 
flags:[esn anti-replay inbound ]
IPSec async mode: off
vpp#

All 4 SAs exist; however, the SPD binding shows the latest 2, which overwrote the 
SAs for the previous TS, leading to traffic drop.


2.
Overlapping subnets between different IPsec tunnels

When Ikev2 completes, I see it creates an ipip interface and relevant child-SAs 
and ties them to the interface to protect traffic.
So far all is good.
Now, we add a route into VPP to route the traffic via this ipip interface for 
each of the source subnets that are expected to be protected by the tunnel.
This works fine as long as I keep the subnets distinct.

Q: What’s the usual strategy when we have overlapping subnets in two distinct 
tunnels?
T1: SrcSubnet1 DestinationSubnet1
T2: SrcSubnet1 DestinationSubnet2

When T1 is brought up, we add a FIB entry for SrcSubnet1 via ipipT1 and things 
work fine.
When T2 comes up, ipipT2 is created, and now I need to add a FIB entry for 
SrcSubnet1 via ipipT2; as expected, things break here.


3.
IpIp vs Ipsec interface
For route-based VPP IPsec, I see two options as per the documentation.
The doc says Ikev2 will create an ipsec interface; however, it creates an ipip 
interface. Is this expected?
The interface works okay for me, but I wasn't sure why the difference.
On probing the code further, I do see the Ikev2 plugin creating an ipip 
interface, not an ipsec interface as the doc says.
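For reference, the route-based pattern the current code follows pairs an
ipip tunnel with SAs via tunnel protection - roughly equivalent to the
following manual CLI (a sketch with placeholder SA ids; the Ikev2 plugin
performs the equivalent internally):

vpp# ipsec tunnel protect ipip0 sa-in <in-sa-id> sa-out <out-sa-id>
vpp# ip route add 18.18.18.0/24 via ipip0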


Thank you in advance for all your comments here.

शुभ कामनाएँ (Best wishes),
Varun Tewari





Re: [vpp-dev] VPP Policer API Memory Leak

2023-02-24 Thread mpeim via lists.fd.io
Hi Eugene!

I recently worked on the policer, and you seem to be right.

In VPP v22.10 (and even in the latest v23.02), a policer's name is used as a 
hashmap's key but is never freed when the policer is deleted.

My recent modifications to the policer should solve this issue (37873: 
policer: API policer selection by index - https://gerrit.fd.io/r/c/vpp/+/37873).

I hope that'll help!

Best regards,
Maxime Peim
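For context, the leak pattern described is generic: a heap-allocated key is
handed to a hash table on add, but delete only unsets the entry and never
frees the key. A minimal sketch in clib style (illustrative only - not the
actual policer code):

#include <vppinfra/hash.h>
#include <vppinfra/vec.h>

static uword *index_by_name; /* mem hash keyed by a u8 * name vector */

static void
leaky_add (u8 * name, uword index)
{
  u8 *key = vec_dup (name); /* heap-allocated copy used as the key */
  hash_set_mem (index_by_name, key, index);
}

static void
leaky_del (u8 * name)
{
  /* The entry is removed, but the key vector duplicated in leaky_add()
   * is never vec_free()d - so creating and deleting policers in a loop
   * grows the main heap steadily. */
  hash_unset_mem (index_by_name, name);
}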




Re: [vpp-dev] VPP Policer API Memory Leak

2023-02-24 Thread efimochkin.e
Hi Stanislav,

I repeated the test with api trace disabled:

vpp# show api trace-status
RX Trace disabled
TX Trace disabled

vpp# show threads
ID  Name      Type  LWP     Sched Policy (Priority)  lcore  Core  Socket State
0   vpp_main        178961  other (0)                n/a    n/a   n/a

Before the test:

vpp# show memory main-heap verbose
Thread 0 vpp_main
  base 0x7f5f6b4aa000, size 8g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 4K, total 2097152, mapped 40974, not-mapped 1525072, unknown 531106
      numa 0: 18302 pages, 71.49m bytes
      numa 1: 22672 pages, 88.56m bytes
    total: 7.99G, used: 36.12M, free: 7.96G, trimmable: 7.96G
      free chunks 150 free fastbin blks 0
      max total allocated 7.99G

After several hours of execution:

vpp# show memory main-heap verbose
Thread 0 vpp_main
  base 0x7f5f6b4aa000, size 8g, locked, unmap-on-destroy, name 'main heap'
    page stats: page-size 4K, total 2097152, mapped 563066, not-mapped 1002980, unknown 531106
      numa 0: 545742 pages, 2.08g bytes
      numa 1: 17324 pages, 67.67m bytes
    total: 7.99G, used: 1.04G, free: 6.96G, trimmable: 6.96G
      free chunks 1 free fastbin blks 0
      max total allocated 7.99G

Best Regards,
Eugene

From: vpp-dev@lists.fd.io  On Behalf Of Stanislav Zaikin
Sent: Wednesday, February 22, 2023 11:46 AM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP Policer API Memory Leak

 

Hi Eugene,

Could you run again with api trace disabled, wait until, let's say, 1g is
consumed, and then show us the output of "show memory main-heap verbose"?

On Tue, 21 Feb 2023 at 20:13, <efimochki...@gmail.com> wrote:

Hi Steven,

Thanks for the response.

I added "nitems 65535" and repeated the test. The main-heap usage is still
growing. I also completely disabled the api-trace, and nothing changed =(

Best Regards,
Eugene

From: vpp-dev@lists.fd.io On Behalf Of steven luong via lists.fd.io
Sent: Tuesday, February 21, 2023 8:58 PM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP Policer API Memory Leak

 

I bet you didn’t limit the number of API trace entries. Try limiting the
number of API trace entries that VPP keeps with nitems, and give it a
reasonable number:

api-trace {
  on
  nitems 65535
}

Steven

 

From: vpp-dev@lists.fd.io on behalf of "efimochki...@gmail.com" <efimochki...@gmail.com>
Reply-To: vpp-dev@lists.fd.io
Date: Tuesday, February 21, 2023 at 7:14 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP Policer API Memory Leak

 

Hi Dear Developers,

I am testing the creation and deletion of policers, and it looks like there is a
memory leak.

VPP Version: v22.10-release

My simple script:

 

#!/bin/env python

from vpp_papi import VPPApiClient
from vpp_papi import VppEnum
import os
import fnmatch
import sys
from time import sleep

vpp_json_dir = '/usr/share/vpp/api/'

# construct a list of all the json api files
jsonfiles = []
for root, dirnames, filenames in os.walk(vpp_json_dir):
    for filename in fnmatch.filter(filenames, '*.api.json'):
        jsonfiles.append(os.path.join(root, filename))

vpp = VPPApiClient(apifiles=jsonfiles, server_address='/run/vpp/api.sock')
vpp.connect("test-client")

r = vpp.api.show_version()
print('VPP version is %s' % r.version)

while True:
    # Create 10 policers
    for i in range(10):
        name = "policer_" + str(i)
        policer_add_del = vpp.api.policer_add_del(
            is_add=True, name=name, cb=2500, cir=1000, eb=3000, eir=0,
            rate_type=0, round_type=1, type=1)
        print(policer_add_del)
    # Delete 10 policers
    for i in range(10):
        name = "policer_" + str(i)
        policer_add_del = vpp.api.policer_add_del(
            is_add=False, name=name, cb=2500, cir=1000, eb=3000, eir=0,
            rate_type=0, round_type=1, type=1)
        print(policer_add_del)

 

The memory usage is growing permanently and very fast. It takes less than 10
minutes to consume ~100Mb of main-heap.

vpp# show memory main-heap
Thread 0 vpp_main
  base 0x7efb0a117000, size 8g, locked, unmap-on-destroy, traced, name 'main heap'
    page stats: page-size 4K, total 2097152, mapped 116134, not-mapped 1450398, unknown 530620
      numa 0: 115788 pages, 452.29m bytes
      numa 1: 346 pages, 1.35m bytes
    total: 7.99G, used: 188.26M, free: 7.82G, trimmable: 7.82G

  BytesCount Sample   Traceback
  177448814781 0x7efb15d59570 _vec_alloc_internal + 0

[vpp-dev] [vpp-build] VPP build error on Ubuntu 22.04

2023-02-23 Thread Dave Wallace

Jens,

Forwarding to vpp-dev@lists.fd.io where VPP contributors answer these 
types of questions.


The srtp plugin is not built in the CI [0] and ubuntu-22.04 includes 
version 2.4.2 of libsrtp2 [1].  The ekt field was deprecated in 2.4.0 
since it was never fully implemented and the draft changed [2].


I pinged Florin Coras who maintains the srtp_plugin which is 
experimental and verified that it has been intentionally left out of the 
CI.  I have pushed a fix for the build failure [3], but Florin will need 
to test it before it can be merged.


Please apply the patch and let us know if you run into any more issues.

Thanks,
-daw-

[0] 
https://s3-logs.fd.io/vex-yul-rot-jenkins-1/vpp-verify-master-ubuntu2204-x86_64/1178/console-timestamp.log.gz
    (search for libsrtp2.a -> "17:54:54  -- -- libsrtp2.a library not 
found - srtp plugin disabled")

[1] https://pkgs.org/search/?q=libsrtp2
[2] https://github.com/cisco/libsrtp/releases/tag/v2.4.0
[3] https://gerrit.fd.io/r/c/vpp/+/38345
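For anyone who wants the fix before it merges, a Gerrit change can usually
be pulled into a local tree like this (the patchset number is assumed;
check the change page for the latest):

git fetch https://gerrit.fd.io/r/vpp refs/changes/45/38345/1
git cherry-pick FETCH_HEAD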

 Forwarded Message 
Subject:[vpp-build] VPP build error on Ubuntu 22.04
Date:   Thu, 23 Feb 2023 03:26:25 -0800
From:   Jens Rösiger via lists.fd.io 
Reply-To:   vpp-bu...@lists.fd.io
To: vpp-bu...@lists.fd.io



Dear VPP Build Team,

I have a problem building VPP on Ubuntu 22.04 (LTS):

 * no error on "make install-deps"
 * no errors on "make install-ext-deps"
 * but "make build-release" gives this error:

make[1]: Entering directory '/opt/buildpackage/src/vpp/vpp/build-root'
 Arch for platform 'vpp' is native 
 Finding source for external 
 Makefile fragment found in 
/opt/buildpackage/src/vpp/vpp/build-data/packages/external.mk 

 Source found in /opt/buildpackage/src/vpp/vpp/build 
 Arch for platform 'vpp' is native 
 Finding source for vpp 
 Makefile fragment found in 
/opt/buildpackage/src/vpp/vpp/build-data/packages/vpp.mk 

 Source found in /opt/buildpackage/src/vpp/vpp/src 
 Configuring external in 
/opt/buildpackage/src/vpp/vpp/build-root/build-vpp-native/external 
 Building external in 
/opt/buildpackage/src/vpp/vpp/build-root/build-vpp-native/external 

 Installing external 
make[2]: Entering directory '/opt/buildpackage/src/vpp/vpp/build/external'
make check-deb
make[3]: Entering directory '/opt/buildpackage/src/vpp/vpp/build/external'
make[3]: Nothing to be done for 'check-deb'.
make[3]: Leaving directory '/opt/buildpackage/src/vpp/vpp/build/external'
make[2]: Nothing to be done for 'ebuild-install'.
make[2]: Leaving directory '/opt/buildpackage/src/vpp/vpp/build/external'
 Configuring vpp in 
/opt/buildpackage/src/vpp/vpp/build-root/build-vpp-native/vpp 

-- The C compiler identification is Clang 14.0.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/lib/ccache/clang - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Performing Test compiler_flag_march_haswell
-- Performing Test compiler_flag_march_haswell - Success
-- Performing Test compiler_flag_mtune_haswell
-- Performing Test compiler_flag_mtune_haswell - Success
-- Performing Test compiler_flag_march_tremont
-- Performing Test compiler_flag_march_tremont - Success
-- Performing Test compiler_flag_mtune_tremont
-- Performing Test compiler_flag_mtune_tremont - Success
-- Performing Test compiler_flag_march_skylake_avx512
-- Performing Test compiler_flag_march_skylake_avx512 - Success
-- Performing Test compiler_flag_mtune_skylake_avx512
-- Performing Test compiler_flag_mtune_skylake_avx512 - Success
-- Performing Test compiler_flag_mprefer_vector_width_256
-- Performing Test compiler_flag_mprefer_vector_width_256 - Success
-- Performing Test compiler_flag_march_icelake_client
-- Performing Test compiler_flag_march_icelake_client - Success
-- Performing Test compiler_flag_mtune_icelake_client
-- Performing Test compiler_flag_mtune_icelake_client - Success
-- Performing Test compiler_flag_mprefer_vector_width_512
-- Performing Test compiler_flag_mprefer_vector_width_512 - Success
-- Looking for ccache
-- Looking for ccache - found
-- Performing Test compiler_flag_no_address_of_packed_member
-- Performing Test compiler_flag_no_address_of_packed_member - Success
-- Performing Test compiler_flag_no_stringop_overflow
-- Performing Test compiler_flag_no_stringop_overflow - Failed
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Performing Test HAVE_FCNTL64
-- Performing Test HAVE_FCNTL64 - Success
-- Found OpenSSL: /usr/lib/x86_64-linux-gnu/libcrypto.so (found version 
"3.0.2")

-- The ASM compiler identification is Clang with GNU-like command-line
-- Found assembler: /usr/lib/ccache/clang
-- Looking for libuuid
-- Found uuid in /usr/include
-- Found subunit in /usr/include and 

[vpp-dev] VPP 23.02 release is complete!

2023-02-22 Thread Andrew Yourtchenko
Hi all,

VPP release 23.02 is complete! Artifacts are at their usual place at 
https://packagecloud.io/fdio/release 

Many thanks to all the contributors for the work that went into the release, 
and thanks to Dave Wallace and Vanessa Valderrama for their help in the process!

Onwards to 23.06! :-)

--a /* your friendly 23.02 release manager */



Re: [vpp-dev] VPP Policer API Memory Leak

2023-02-22 Thread Stanislav Zaikin
Hi Eugene,

Could you run again with the API trace disabled, wait until, say, 1 GB is
consumed, and then show us the output of 'show memory main-heap verbose'?
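
For what it's worth, that output can also be captured over the API. A minimal
sketch, assuming the same vpp_papi session ("vpp") as in the original script;
the output file path is just an example:

out = vpp.api.cli_inband(cmd="show memory main-heap verbose")
with open("/tmp/main-heap-verbose.txt", "w") as f:
    f.write(out.reply)  # .reply carries the CLI text output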

On Tue, 21 Feb 2023 at 20:13,  wrote:

> Hi Steven,
>
>
>
> Thanks for response.
>
>
>
> I added “nitems 65535” and repeated test. The main-heap usage still
> growing up.
>
> Also I completely disabled the “api-trace” and nothing changed =(
>
>
>
> Best Regards,
>
> Eugene
>
>
>
>
>
> *From:* vpp-dev@lists.fd.io  *On Behalf Of *steven
> luong via lists.fd.io
> *Sent:* Tuesday, February 21, 2023 8:58 PM
> *To:* vpp-dev@lists.fd.io
> *Subject:* Re: [vpp-dev] VPP Policer API Memory Leak
>
>
>
> I bet you didn’t limit the number of API trace entries. Try limit the
> number of API trace entries that VPP keeps with nitems and give it a
> reasonable number.
>
>
>
> api-trace {
>
>   on
>
> nitems 65535
>
> }
>
>
>
> Steven
>
>
>
> *From: * on behalf of "efimochki...@gmail.com" <
> efimochki...@gmail.com>
> *Reply-To: *"vpp-dev@lists.fd.io" 
> *Date: *Tuesday, February 21, 2023 at 7:14 AM
> *To: *"vpp-dev@lists.fd.io" 
> *Subject: *[vpp-dev] VPP Policer API Memory Leak
>
>
>
> Hi Dear Developers,
>
>
>
> I am testing creating and deleting of policers and it looks that there is
> a memory leak
>
>
>
> VPP Version: v22.10-release
>
>
>
>
>
> My simple script:
>
>
>
> #!/bin/env python
>
>
>
> from vpp_papi import VPPApiClient
>
> from vpp_papi import VppEnum
>
> import os
>
> import fnmatch
>
> import sys
>
> from time import sleep
>
>
>
> vpp_json_dir = '/usr/share/vpp/api/'
>
>
>
> # construct a list of all the json api files
>
>
>
> jsonfiles = []
>
>
>
> for root, dirnames, filenames in os.walk(vpp_json_dir):
>
>   for filename in fnmatch.filter(filenames, '*.api.json'):
>
> jsonfiles.append(os.path.join(root, filename))
>
>
>
> vpp = VPPApiClient(apifiles=jsonfiles, server_address='/run/vpp/api.sock')
>
> vpp.connect("test-client")
>
>
>
> r = vpp.api.show_version()
>
> print('VPP version is %s' % r.version)
>
>
>
> while True:
>
> ### Create 10 policers
>
>   for i in range (10):
>
> name = "policer_" + str(i)
>
> policer_add_del = vpp.api.policer_add_del(is_add=True, name=name,
> cb=2500,cir=1000, eb=3000,eir=0,rate_type=0,round_type=1,type=1)
>
> print(policer_add_del)
>
> ### Delete 10 policers
>
>   for i in range (10):
>
> name = "policer_" + str(i)
>
> policer_add_del = vpp.api.policer_add_del(is_add=False, name=name,
> cb=2500,cir=1000, eb=3000,eir=0,rate_type=0,round_type=1,type=1)
>
> print(policer_add_del)
>
>
>
> The memory usage is growing permanently and very fast. It takes less than
> 10 minutes to spend ~ 100Mb of main-heap.
>
>
>
> vpp# show memory  main-heap
>
> Thread 0 vpp_main
>
>   base 0x7efb0a117000, size 8g, locked, unmap-on-destroy, traced, name
> 'main heap'
>
> page stats: page-size 4K, total 2097152, mapped 116134, not-mapped
> 1450398, unknown 530620
>
>   numa 0: 115788 pages, 452.29m bytes
>
>   numa 1: 346 pages, 1.35m bytes
>
> total: 7.99G, used: 188.26M, free: 7.82G, trimmable: 7.82G
>
>
>
>   BytesCount Sample   Traceback
>
>   177448814781 0x7efb15d59570 _vec_alloc_internal + 0x6b
>
>   vl_msg_api_trace + 0x4a4
>
>   vl_msg_api_socket_handler + 0x10f
>
>   vl_socket_process_api_msg + 0x1d
>
>   0x7efd0c177171
>
>   0x7efd0a588837
>
>   0x7efd0a48d6a8
>
>2912721 0x7efb15cf4190 _vec_realloc_internal + 0x89
>
>   vl_msg_api_trace + 0x529
>
>   vl_msg_api_socket_handler + 0x10f
>
>   vl_socket_process_api_msg + 0x1d
>
>   0x7efd0c177171
>
>   0x7efd0a588837
>
>   0x7efd0a48d6a8
>
>178928 7390 0x7efb15d595f0 _vec_alloc_internal + 0x6b
>
>   va_format + 0x2318
>
>   format + 0x83
>
>   0x7efd0a896b

Re: [vpp-dev] VPP Hardware Interface Output show Carrier Down

2023-02-22 Thread sunil kumar
Hi,

Thank you for your response.

There is one more case: we are observing rx buffer allocation errors in the
'vppctl show hardware' output. These rx buffer allocation failures happen in
the DPDK vmxnet3_recv_pkts() function.

  rx q0 rx buf alloc failure  8846166
  rx q1 rx buf alloc failure  6245832
  tx q0 drop total              26052
  tx q0 tx ring full              152
  tx q1 drop total                 18
  tx q1 tx ring full               17


The output of the 'vppctl show buffers' and 'vppctl show dpdk buffer' commands
is below.

./bin/vppctl show buffers
Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
default-numa-0  0     0     2496  2048       537600  527296  748     9556


./bin/vppctl show dpdk buffer
name="vpp pool 0"  available = 985 allocated =  536615 total =  537600


From the two buffer outputs above, we can see that VPP has allocated nearly all
of the DPDK mbufs to vlib_buffers.
Is that the reason we are observing the rx buffer allocation errors above?
(A sizing note follows below.)


Thanks,
Sunil
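
For reference: if the pool is genuinely exhausted rather than leaked, the vlib
buffer count can be raised in startup.conf. This is an illustrative sketch only;
the value below is an assumption, not a sizing recommendation.

buffers {
  buffers-per-numa 1048576
}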

On Sat, Feb 18, 2023 at 12:11 AM Steven Luong (sluong) 
wrote:

> Sunil is using dpdk vmxnet3 driver. So he doesn’t need to load VPP native
> vmxnet3 plugin. Gdb dpdk code to see why it returns -22 when VPP adds the
> NIC to dpdk.
>
> rte_eth_dev_start[port:1, errno:-22]: Unknown error -22
>
>
>
> Steven
>
>
>
> *From: * on behalf of Guangming <
> zhangguangm...@baicells.com>
> *Reply-To: *"vpp-dev@lists.fd.io" 
> *Date: *Friday, February 17, 2023 at 6:55 AM
> *To: *vpp-dev , sunil61090 
> *Subject: *Re: [vpp-dev] VPP Hardware Interface Output show Carrier Down
>
>
>
>
>
> you can  use vppctl show log  to display more  startup meesage.
>
> VMXNET3 need load vmxnet3_plugin.so.
> --
>
> zhangguangm...@baicells.com
>
>
>
> *From:* sunil kumar 
>
> *Date:* 2023-02-17 21:46
>
> *To:* vpp-dev 
>
> *Subject:* [vpp-dev] VPP Hardware Interface Output show Carrier Down
>
> Hi,
>
> We are observing the state of the vpp interface as carrier down
> in  command vppctl show hardware output. This is observed while starting
> the vpp:
>
>
>
> vppctl show hardware output:
>
> ==
>
> device_c/0/0   2down  device_c/0/0
>
>   Link speed: 10 Gbps
>
>   Ethernet address 00:50:56:01:5c:63
>
>   VMware VMXNET3
>
> carrier down
>
> flags: admin-up pmd rx-ip4-cksum
>
> rx: queues 2 (max 16), desc 4096 (min 128 max 4096 align 1)
>
> tx: queues 2 (max 8), desc 4096 (min 512 max 4096 align 1)
>
> pci: device 15ad:07b0 subsystem 15ad:07b0 address :0c:00.00 numa 0
>
> max rx packet len: 16384
>
> promiscuous: unicast off all-multicast off
>
> vlan offload: strip off filter off qinq off
>
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
>
>vlan-filter jumbo-frame scatter
>
> rx offload active: ipv4-cksum
>
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
>
>multi-segs
>
> tx offload active: multi-segs
>
> rss avail: ipv4-tcp ipv4 ipv6-tcp ipv6
>
> rss active:none
>
> tx burst function: vmxnet3_xmit_pkts
>
> rx burst function: vmxnet3_recv_pkts
>
>   Errors:
>
> rte_eth_dev_start[port:1, errno:-22]: Unknown error -22
>
> We are suspecting the following reasons:
> 1) Any issue with vfio-pci driver while unloading and loading again?
> 2) Any corruption is happening during initialization?
>
> I am attaching the startup.conf and vppctl command output files with this
> mail:
>
>
>
> Can anybody suggest a way to resolve this issue?
>
> Thanks,
> Sunil Kumar
>
>




Re: [vpp-dev] VPP Policer API Memory Leak

2023-02-21 Thread efimochkin . e
Hi Steven,

 

Thanks for the response.

 

I added “nitems 65535” and repeated the test. The main-heap usage is still growing.

I also completely disabled the “api-trace” and nothing changed =(

 

Best Regards, 

Eugene

 

 

From: vpp-dev@lists.fd.io  On Behalf Of steven luong via 
lists.fd.io
Sent: Tuesday, February 21, 2023 8:58 PM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP Policer API Memory Leak

 

I bet you didn’t limit the number of API trace entries. Try limiting the number of 
API trace entries that VPP keeps with nitems, and give it a reasonable number.

 

api-trace {

  on 

nitems 65535

}

 

Steven

 

From: <vpp-dev@lists.fd.io> on behalf of "efimochki...@gmail.com" <efimochki...@gmail.com>
Reply-To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Date: Tuesday, February 21, 2023 at 7:14 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: [vpp-dev] VPP Policer API Memory Leak

 

Hi Dear Developers,

 

I am testing the creation and deletion of policers, and it looks like there is a 
memory leak.

 

VPP Version: v22.10-release

 

 

My simple script:

 

#!/bin/env python

 

from vpp_papi import VPPApiClient

from vpp_papi import VppEnum

import os

import fnmatch

import sys

from time import sleep

 

vpp_json_dir = '/usr/share/vpp/api/'

 

# construct a list of all the json api files

 

jsonfiles = []

 

for root, dirnames, filenames in os.walk(vpp_json_dir):

  for filename in fnmatch.filter(filenames, '*.api.json'):

jsonfiles.append(os.path.join(root, filename))

 

vpp = VPPApiClient(apifiles=jsonfiles, server_address='/run/vpp/api.sock')

vpp.connect("test-client")

 

r = vpp.api.show_version()

print('VPP version is %s' % r.version)

 

while True:

### Create 10 policers

  for i in range (10):

name = "policer_" + str(i)

policer_add_del = vpp.api.policer_add_del(is_add=True, name=name, 
cb=2500,cir=1000, eb=3000,eir=0,rate_type=0,round_type=1,type=1)

print(policer_add_del)

### Delete 10 policers

  for i in range (10):

name = "policer_" + str(i)

policer_add_del = vpp.api.policer_add_del(is_add=False, name=name, 
cb=2500,cir=1000, eb=3000,eir=0,rate_type=0,round_type=1,type=1)

print(policer_add_del)

 

The memory usage grows constantly and very fast: it takes less than 10 
minutes to consume ~100 MB of main-heap.

 

vpp# show memory  main-heap

Thread 0 vpp_main

  base 0x7efb0a117000, size 8g, locked, unmap-on-destroy, traced, name 'main 
heap'

page stats: page-size 4K, total 2097152, mapped 116134, not-mapped 1450398, 
unknown 530620

  numa 0: 115788 pages, 452.29m bytes

  numa 1: 346 pages, 1.35m bytes

total: 7.99G, used: 188.26M, free: 7.82G, trimmable: 7.82G

 

  BytesCount Sample   Traceback

  177448814781 0x7efb15d59570 _vec_alloc_internal + 0x6b

  vl_msg_api_trace + 0x4a4

  vl_msg_api_socket_handler + 0x10f

  vl_socket_process_api_msg + 0x1d

  0x7efd0c177171

  0x7efd0a588837

  0x7efd0a48d6a8

   2912721 0x7efb15cf4190 _vec_realloc_internal + 0x89

  vl_msg_api_trace + 0x529

  vl_msg_api_socket_handler + 0x10f

  vl_socket_process_api_msg + 0x1d

  0x7efd0c177171

  0x7efd0a588837

  0x7efd0a48d6a8

   178928 7390 0x7efb15d595f0 _vec_alloc_internal + 0x6b

  va_format + 0x2318

  format + 0x83

  0x7efd0a896b91

  vl_msg_api_socket_handler + 0x226

  vl_socket_process_api_msg + 0x1d

  0x7efd0c177171

  0x7efd0a588837

  0x7efd0a48d6a8

858001 0x7efb135ca840 _vec_realloc_internal + 0x89

  vl_socket_api_send + 0x720

  vl_api_sockclnt_create_t_handler + 0x2e2

  vl_msg_api_socket_handler + 0x226

  vl_socket_process_api_msg + 0x1d

  0x7efd0c177171

  0x7efd0a588837

  0x7efd0a48d6a8

 41041 0x7efb13dcf220 _vec_alloc_internal + 0x6b

  0x7efd0a5e09

Re: [vpp-dev] VPP Policer API Memory Leak

2023-02-21 Thread steven luong via lists.fd.io
I bet you didn’t limit the number of API trace entries. Try limiting the number of 
API trace entries that VPP keeps with nitems, and give it a reasonable number.

api-trace {
  on
nitems 65535
}

Steven

From:  on behalf of "efimochki...@gmail.com" 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Tuesday, February 21, 2023 at 7:14 AM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP Policer API Memory Leak

Hi Dear Developers,

I am testing the creation and deletion of policers, and it looks like there is a 
memory leak.

VPP Version: v22.10-release


My simple script:

#!/bin/env python

from vpp_papi import VPPApiClient
from vpp_papi import VppEnum
import os
import fnmatch
import sys
from time import sleep

vpp_json_dir = '/usr/share/vpp/api/'

# construct a list of all the json api files

jsonfiles = []

for root, dirnames, filenames in os.walk(vpp_json_dir):
  for filename in fnmatch.filter(filenames, '*.api.json'):
jsonfiles.append(os.path.join(root, filename))

vpp = VPPApiClient(apifiles=jsonfiles, server_address='/run/vpp/api.sock')
vpp.connect("test-client")

r = vpp.api.show_version()
print('VPP version is %s' % r.version)

while True:
### Create 10 policers
  for i in range (10):
name = "policer_" + str(i)
policer_add_del = vpp.api.policer_add_del(is_add=True, name=name, 
cb=2500,cir=1000, eb=3000,eir=0,rate_type=0,round_type=1,type=1)
print(policer_add_del)
### Delete 10 policers
  for i in range (10):
name = "policer_" + str(i)
policer_add_del = vpp.api.policer_add_del(is_add=False, name=name, 
cb=2500,cir=1000, eb=3000,eir=0,rate_type=0,round_type=1,type=1)
print(policer_add_del)

The memory usage grows constantly and very fast: it takes less than 10 
minutes to consume ~100 MB of main-heap.

vpp# show memory  main-heap
Thread 0 vpp_main
  base 0x7efb0a117000, size 8g, locked, unmap-on-destroy, traced, name 'main 
heap'
page stats: page-size 4K, total 2097152, mapped 116134, not-mapped 1450398, 
unknown 530620
  numa 0: 115788 pages, 452.29m bytes
  numa 1: 346 pages, 1.35m bytes
total: 7.99G, used: 188.26M, free: 7.82G, trimmable: 7.82G

  BytesCount Sample   Traceback
  177448814781 0x7efb15d59570 _vec_alloc_internal + 0x6b
  vl_msg_api_trace + 0x4a4
  vl_msg_api_socket_handler + 0x10f
  vl_socket_process_api_msg + 0x1d
  0x7efd0c177171
  0x7efd0a588837
  0x7efd0a48d6a8
   2912721 0x7efb15cf4190 _vec_realloc_internal + 0x89
  vl_msg_api_trace + 0x529
  vl_msg_api_socket_handler + 0x10f
  vl_socket_process_api_msg + 0x1d
  0x7efd0c177171
  0x7efd0a588837
  0x7efd0a48d6a8
   178928 7390 0x7efb15d595f0 _vec_alloc_internal + 0x6b
  va_format + 0x2318
  format + 0x83
  0x7efd0a896b91
  vl_msg_api_socket_handler + 0x226
  vl_socket_process_api_msg + 0x1d
  0x7efd0c177171
  0x7efd0a588837
  0x7efd0a48d6a8
858001 0x7efb135ca840 _vec_realloc_internal + 0x89
  vl_socket_api_send + 0x720
  vl_api_sockclnt_create_t_handler + 0x2e2
  vl_msg_api_socket_handler + 0x226
  vl_socket_process_api_msg + 0x1d
  0x7efd0c177171
  0x7efd0a588837
  0x7efd0a48d6a8
 41041 0x7efb13dcf220 _vec_alloc_internal + 0x6b
  0x7efd0a5e0965
  0x7efd0a5f05c4
  0x7efd0a584978
  0x7efd0a5845f5
  0x7efd0a5f213b
  0x7efd0a48d6a8
 1920   16 0x7efb13e62a40 _vec_realloc_internal + 0x89
  0x7efd0a482d1d
  va_format + 0xf62
  format + 0x83
  va_format + 0x1041
  format + 0x83
  va_format + 0x1041
  vlib_log + 0x2c6
  0x7efb08b033aa
  0x7efb08b031c9
  0x7efb08b0cc6d
   

[vpp-dev] VPP Policer API Memory Leak

2023-02-21 Thread efimochkin . e
Hi Dear Developers,

 

I am testing the creation and deletion of policers, and it looks like there is a
memory leak.

 

VPP Version: v22.10-release

 

 

My simple script:

 

#!/bin/env python

 

from vpp_papi import VPPApiClient

from vpp_papi import VppEnum

import os

import fnmatch

import sys

from time import sleep

 

vpp_json_dir = '/usr/share/vpp/api/'

 

# construct a list of all the json api files

 

jsonfiles = []

 

for root, dirnames, filenames in os.walk(vpp_json_dir):

  for filename in fnmatch.filter(filenames, '*.api.json'):

jsonfiles.append(os.path.join(root, filename))

 

vpp = VPPApiClient(apifiles=jsonfiles, server_address='/run/vpp/api.sock')

vpp.connect("test-client")

 

r = vpp.api.show_version()

print('VPP version is %s' % r.version)

 

while True:

### Create 10 policers

  for i in range (10):

name = "policer_" + str(i)

policer_add_del = vpp.api.policer_add_del(is_add=True, name=name,
cb=2500,cir=1000, eb=3000,eir=0,rate_type=0,round_type=1,type=1)

print(policer_add_del)

### Delete 10 policers

  for i in range (10):

name = "policer_" + str(i)

policer_add_del = vpp.api.policer_add_del(is_add=False, name=name,
cb=2500,cir=1000, eb=3000,eir=0,rate_type=0,round_type=1,type=1)

print(policer_add_del)

 

The memory usage grows constantly and very fast: it takes less than 10
minutes to consume ~100 MB of main-heap.

 

vpp# show memory  main-heap

Thread 0 vpp_main

  base 0x7efb0a117000, size 8g, locked, unmap-on-destroy, traced, name 'main
heap'

page stats: page-size 4K, total 2097152, mapped 116134, not-mapped
1450398, unknown 530620

  numa 0: 115788 pages, 452.29m bytes

  numa 1: 346 pages, 1.35m bytes

total: 7.99G, used: 188.26M, free: 7.82G, trimmable: 7.82G

 

  BytesCount Sample   Traceback

  177448814781 0x7efb15d59570 _vec_alloc_internal + 0x6b

  vl_msg_api_trace + 0x4a4

  vl_msg_api_socket_handler + 0x10f

  vl_socket_process_api_msg + 0x1d

  0x7efd0c177171

  0x7efd0a588837

  0x7efd0a48d6a8

   2912721 0x7efb15cf4190 _vec_realloc_internal + 0x89

  vl_msg_api_trace + 0x529

  vl_msg_api_socket_handler + 0x10f

  vl_socket_process_api_msg + 0x1d

  0x7efd0c177171

  0x7efd0a588837

  0x7efd0a48d6a8

   178928 7390 0x7efb15d595f0 _vec_alloc_internal + 0x6b

  va_format + 0x2318

  format + 0x83

  0x7efd0a896b91

  vl_msg_api_socket_handler + 0x226

  vl_socket_process_api_msg + 0x1d

  0x7efd0c177171

  0x7efd0a588837

  0x7efd0a48d6a8

858001 0x7efb135ca840 _vec_realloc_internal + 0x89

  vl_socket_api_send + 0x720

  vl_api_sockclnt_create_t_handler + 0x2e2

  vl_msg_api_socket_handler + 0x226

  vl_socket_process_api_msg + 0x1d

  0x7efd0c177171

  0x7efd0a588837

  0x7efd0a48d6a8

 41041 0x7efb13dcf220 _vec_alloc_internal + 0x6b

  0x7efd0a5e0965

  0x7efd0a5f05c4

  0x7efd0a584978

  0x7efd0a5845f5

  0x7efd0a5f213b

  0x7efd0a48d6a8

 1920   16 0x7efb13e62a40 _vec_realloc_internal + 0x89

  0x7efd0a482d1d

  va_format + 0xf62

  format + 0x83

  va_format + 0x1041

  format + 0x83

  va_format + 0x1041

  vlib_log + 0x2c6

  0x7efb08b033aa

  0x7efb08b031c9

  0x7efb08b0cc6d

  0x7efb08b988ee

 

vpp# show memory main-heap verbose

Thread 0 vpp_main

  base 0x7efb0a117000, size 8g, locked, unmap-on-destroy, traced, name 'main
heap'

page stats: page-size 4K, total 2097152, mapped 170152, not-mapped
1396380, unknown 530620

  numa 0: 169806 pages, 663.30m bytes

  numa 1: 346 pages, 1.35m bytes

total: 7.99G, used: 289.51M, free: 7.72G, 

Re: [vpp-dev] VPP Hardware Interface Output show Carrier Down

2023-02-17 Thread steven luong via lists.fd.io
Sunil is using the DPDK vmxnet3 driver, so he doesn’t need to load the VPP native 
vmxnet3 plugin. Gdb the DPDK code to see why it returns -22 when VPP adds the NIC 
to DPDK.

rte_eth_dev_start[port:1, errno:-22]: Unknown error -22

Steven

From:  on behalf of Guangming 
Reply-To: "vpp-dev@lists.fd.io" 
Date: Friday, February 17, 2023 at 6:55 AM
To: vpp-dev , sunil61090 
Subject: Re: [vpp-dev] VPP Hardware Interface Output show Carrier Down


You can use 'vppctl show log' to display more startup messages.
VMXNET3 needs vmxnet3_plugin.so to be loaded.

zhangguangm...@baicells.com

From: sunil kumar <sunil61...@gmail.com>
Date: 2023-02-17 21:46
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: [vpp-dev] VPP Hardware Interface Output show Carrier Down
Hi,

We are observing the state of the VPP interface as carrier down in the 'vppctl 
show hardware' output. This is observed while starting VPP:

vppctl show hardware output:
==
device_c/0/0   2  down  device_c/0/0
  Link speed: 10 Gbps
  Ethernet address 00:50:56:01:5c:63
  VMware VMXNET3
carrier down
flags: admin-up pmd rx-ip4-cksum
rx: queues 2 (max 16), desc 4096 (min 128 max 4096 align 1)
tx: queues 2 (max 8), desc 4096 (min 512 max 4096 align 1)
pci: device 15ad:07b0 subsystem 15ad:07b0 address :0c:00.00 numa 0
max rx packet len: 16384
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
   vlan-filter jumbo-frame scatter
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: multi-segs
rss avail: ipv4-tcp ipv4 ipv6-tcp ipv6
rss active:none
tx burst function: vmxnet3_xmit_pkts
rx burst function: vmxnet3_recv_pkts
  Errors:
rte_eth_dev_start[port:1, errno:-22]: Unknown error -22

We suspect the following reasons:
1) An issue with the vfio-pci driver while unloading and loading it again?
2) Some corruption happening during initialization?

I am attaching the startup.conf and vppctl command output files with this mail:

Can anybody suggest a way to resolve this issue?

Thanks,
Sunil Kumar




Re: [vpp-dev] VPP Hardware Interface Output show Carrier Down

2023-02-17 Thread Guangming

You can use 'vppctl show log' to display more startup messages.
VMXNET3 needs vmxnet3_plugin.so to be loaded.


zhangguangm...@baicells.com
 
From: sunil kumar
Date: 2023-02-17 21:46
To: vpp-dev
Subject: [vpp-dev] VPP Hardware Interface Output show Carrier Down
Hi,

We are observing the state of the VPP interface as carrier down in the 'vppctl 
show hardware' output. This is observed while starting VPP:

vppctl show hardware output:
==
device_c/0/0   2  down  device_c/0/0
  Link speed: 10 Gbps
  Ethernet address 00:50:56:01:5c:63
  VMware VMXNET3
carrier down 
flags: admin-up pmd rx-ip4-cksum
rx: queues 2 (max 16), desc 4096 (min 128 max 4096 align 1)
tx: queues 2 (max 8), desc 4096 (min 512 max 4096 align 1)
pci: device 15ad:07b0 subsystem 15ad:07b0 address :0c:00.00 numa 0
max rx packet len: 16384
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro 
   vlan-filter jumbo-frame scatter 
rx offload active: ipv4-cksum 
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso 
   multi-segs 
tx offload active: multi-segs 
rss avail: ipv4-tcp ipv4 ipv6-tcp ipv6 
rss active:none
tx burst function: vmxnet3_xmit_pkts
rx burst function: vmxnet3_recv_pkts
  Errors:
rte_eth_dev_start[port:1, errno:-22]: Unknown error -22

We suspect the following reasons:
1) An issue with the vfio-pci driver while unloading and loading it again?
2) Some corruption happening during initialization?

I am attaching the startup.conf and vppctl command output files with this mail:

Can anybody suggest a way to resolve this issue?

Thanks,
Sunil Kumar




[vpp-dev] VPP Hardware Interface Output show Carrier Down

2023-02-17 Thread sunil kumar
Hi,

We are observing the state of the VPP interface as carrier down in the 'vppctl
show hardware' output. This is observed while starting VPP:

vppctl show hardware output:
==
device_c/0/0   2  down  device_c/0/0
  Link speed: 10 Gbps
  Ethernet address 00:50:56:01:5c:63
  VMware VMXNET3
carrier down
flags: admin-up pmd rx-ip4-cksum
rx: queues 2 (max 16), desc 4096 (min 128 max 4096 align 1)
tx: queues 2 (max 8), desc 4096 (min 512 max 4096 align 1)
pci: device 15ad:07b0 subsystem 15ad:07b0 address :0c:00.00 numa 0
max rx packet len: 16384
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro
   vlan-filter jumbo-frame scatter
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
   multi-segs
tx offload active: multi-segs
rss avail: ipv4-tcp ipv4 ipv6-tcp ipv6
rss active:none
tx burst function: vmxnet3_xmit_pkts
rx burst function: vmxnet3_recv_pkts
  Errors:
rte_eth_dev_start[port:1, errno:-22]: Unknown error -22

We suspect the following reasons:
1) An issue with the vfio-pci driver while unloading and loading it again?
   (A quick sysfs check is sketched below.)
2) Some corruption happening during initialization?

I am attaching the startup.conf and vppctl command output files with this
mail:

Can anybody suggest a way to resolve this issue?

Thanks,
Sunil Kumar
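
A quick check for suspicion 1 is to confirm which kernel driver each NIC is
bound to via sysfs. This is a sketch; the full PCI addresses are assumptions
reconstructed from the truncated ":04:00.00" / ":0c:00.00" forms above.

# Hedged sketch: print the kernel driver currently bound to each NIC.
import os

for addr in ("0000:04:00.0", "0000:0c:00.0"):  # assumed full PCI addresses
    link = "/sys/bus/pci/devices/%s/driver" % addr
    if os.path.exists(link):
        print(addr, "->", os.path.basename(os.path.realpath(link)))
    else:
        print(addr, "-> no driver bound")
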
mdp-msp83-fe09$ 
mdp-msp83-fe09$ /opt/opwv/integra/8.3//tools/vpp/bin/vppctl show int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
device_4/0/0                      1      up          2048/0/0/0     rx packets              24095317
                                                                    rx bytes              1535008978
                                                                    tx packets               1318199
                                                                    tx bytes               106821246
                                                                    punt                          93
                                                                    ip4                       512937
                                                                    ip6                           80
device_c/0/0                      2      up          1500/0/0/0     tx-error                   69337
kni-0                             3      up          9000/0/0/0     rx packets               1318198
                                                                    rx bytes               106821204
                                                                    tx packets              21528958
                                                                    tx bytes              1312254784
                                                                    tx-error                       1
kni-1                             4      up          9000/0/0/0     rx packets                 69336
                                                                    rx bytes                 4160280
local0                            0     down            0/0/0/0     drops                    2668474




mdp-msp83-fe09$ 
mdp-msp83-fe09$ /opt/opwv/integra/8.3//tools/vpp/bin/vppctl show ha
              Name                Idx   Link  Hardware
device_4/0/0   1 up   device_4/0/0
  Link speed: 10 Gbps
  Ethernet address 00:50:56:01:5c:62
  VMware VMXNET3
carrier up full duplex mtu 2048 
flags: admin-up pmd rx-ip4-cksum
rx: queues 2 (max 16), desc 4096 (min 128 max 4096 align 1)
tx: queues 2 (max 8), desc 4096 (min 512 max 4096 align 1)
pci: device 15ad:07b0 subsystem 15ad:07b0 address :04:00.00 numa 0
max rx packet len: 16384
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum tcp-lro 
   vlan-filter jumbo-frame scatter 
rx offload active: ipv4-cksum 
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso 
   multi-segs 
tx offload active: multi-segs 
rss avail: ipv4-tcp ipv4 ipv6-tcp ipv6 
rss active:none
tx burst function: vmxnet3_xmit_pkts
rx burst function: vmxnet3_recv_pkts

tx frames ok 1318205
tx bytes ok106821714
rx frames ok24095448
rx bytes ok   1535017138
extended stats:
  rx good packets   24095448
  tx good packets1318205
  rx good bytes   1535017138
  

[vpp-dev] vpp stripped binary failed to start on icelake 8380 platform with ubuntu 22.04

2023-02-14 Thread Pei, Yulong
Hi vpp-dev and csit-dev,

Have you bumped into the issue below on the Icelake 8380 platform with Ubuntu 22.04?

1.  make build-release; make  pkg-deb

I built VPP from the vpp repo master branch; the latest commit id is as below:

commit 590a82c237337f560cc3d5beac47a235c5e97eac
Author: Tianyu Li tianyu...@arm.com
Date:   Sat Jan 28 07:58:45 2023 +

build: add missing dependences for centos 8

2. Installed VPP from the deb packages. Here the vpp binary was stripped, and 
running it failed as below:

# /usr/bin/vpp -c /etc/vpp/startup.conf
unix_config:472: couldn't open log '/var/log/vpp/vpp.log'
vpp[1076183]: clib_socket_init: unknown input `(nil)/snort.sock'

#file  /usr/bin/vpp
/usr/bin/vpp: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), 
dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, 
BuildID[sha1]=9e31d428a67f39d3c523ecd6fadfc45e058e029d, for GNU/Linux 3.2.0, 
stripped


3. VPP manages to run as below, and vppctl can also connect:

#cd  vpp/build-root/install-vpp-native/vpp/bin
#./vpp -c /etc/vpp/startup.conf  &
#./vppctl
    _______    _        _   _____  ___ 
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/    

vpp# show version
vpp v23.06-rc0~61-g590a82c23 built by root on fdio-ICX1 at 2023-02-14T14:02:50

#file ./vpp
./vpp: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically 
linked, interpreter /lib64/ld-linux-x86-64.so.2, 
BuildID[sha1]=f7df044e5b3b084240f5bed0cab3ec360057b9d4, for GNU/Linux 3.2.0, 
with debug_info, not stripped

Note: the startup.conf file here was the default one from the deb package.

Best Regards
Yulong Pei





Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-02-08 Thread Gennady Abramov
Hello Stanislav,

Sorry for the late answer, I was temporarily deprived of my testbed :-)

So, applied patches:
1. Last patch attached to this thread
2. Patch from Gerrit from above.

Everything looks to be working now. OSI/IS-IS packets are passed to the CP, and 
the routing protocol is working.
The issue with the VPP crash after creating a .1q subif via auto-subint has disappeared.

The issue with the host interface spontaneously going down when 'lcp lcp-sync' is 
enabled also never repeated, but I am not sure whether it is related to IS-IS or 
not. Colleagues reported it was seen on the unpatched version, but this was not 
confirmed, and the OS kernel has been changed since then.




[vpp-dev] VPP 23.02 RC2 milestone is done !

2023-02-08 Thread Andrew Yourtchenko
Hi all,

The VPP RC2 milestone is done, the RC2 artifacts are available from the 
packagecloud repository at https://packagecloud.io/fdio/2302

Now we accept only the fixes from CSIT testing in preparation for the release, 
which is scheduled to happen in 2 weeks from now.

--a /* your friendly 23.02 release manager */



Re: [vpp-dev] VPP logging does not logs API calls debug message

2023-02-08 Thread Tripathi, VinayX
Hi Steven,

Can I have your input, please?
Do we have a way to redirect the STDIO output to /var/log/vpp.log?


Thanks
Vinay

From: Tripathi, VinayX
Sent: Monday, February 6, 2023 5:02 PM
To: vpp-dev@lists.fd.io
Cc: Ji, Kai 
Subject: RE: [vpp-dev] VPP logging does not logs API calls debug message

Hi Steven,

'show log' shows the debug messages on the console, but it does not redirect them 
to the vpp.log file.
Only the VPP CLI commands that have been triggered for configuration appear in 
vpp.log.
Even the packet trace appears on the console, but not in vpp.log.

create ipip tunnel src 10.128.0.9 dst 10.192.0.9
set interface state ipip2 up
ipsec sa add 4 spi 4 crypto-alg aes-gcm-128 crypto-key 
31323334353637383930313233343536 salt 0x31323334
ipsec sa add 5 spi 5 crypto-alg aes-gcm-128 crypto-key 
31323334353637383930313233343536 salt 0x31323334
ipsec tunnel protect ipip2 sa-in 4 sa-out 5
set int ip address ipip2 10.128.0.9/30
ip route add 10.64.0.8/30 via ipip2
ip route add 10.192.0.8/30 via 192.168.11.3 eth1

ip route add 20.64.0.0/10 via 192.168.10.2 eth0

set int state eth0 up
set int state eth1 up

sh ipsec sa 02023/02/06 09:08:55:400: * End Startup Config *
2023/02/06 09:09:00:239[0]: 2023/02/06 09:09:01:575[0]: 2023/02/06 
09:09:02:139[0]: 2023/02/06 09:09:12:451[0]: trace add dpdk-input 20


/tmp/vpp_dut1.log






Thanks
Vinay

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of steven luong via lists.fd.io
Sent: Saturday, February 4, 2023 8:44 PM
To: vpp-dev@lists.fd.io
Cc: Ji, Kai <kai...@intel.com>
Subject: Re: [vpp-dev] VPP logging does not logs API calls debug message

Did you try
vppctl show log

Steven

From: <vpp-dev@lists.fd.io> on behalf of "Tripathi, VinayX" <vinayx.tripa...@intel.com>
Reply-To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Date: Saturday, February 4, 2023 at 4:19 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Cc: "Ji, Kai" <kai...@intel.com>
Subject: Re: [vpp-dev] VPP logging does not logs API calls debug message

Hi Team,
Any suggestion would be highly appreciated.

Thanks
Vinay

From: Tripathi, VinayX
Sent: Friday, February 3, 2023 6:28 PM
To: 'vpp-dev@lists.fd.io'
Cc: Ji, Kai <kai...@intel.com>
Subject: VPP logging does not logs API calls debug message


Hi Team,

I have noticed that the VPP infra/plugin/node/driver related debug messages are not 
logged into /var/log/vpp/vpp.log.
Only the CLI commands triggered from the VPP console are logged. Please find the 
configuration used below.
Kindly suggest if I’m missing any configuration.

Using VPP version :- vpp v23.02-

unix {
   interactive
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /var/log/vpp/cli.sock
   cli-pager-buffer-limit 1
   # cli-listen localhost:5002
   #exec /root/vinaytrx/vpp/dpdk-pmd.bash
}

api-trace {
  on
}
logging {
   default-syslog-log-level debug
   default-log-level debug
  # class dpdk/cryptodev { rate-limit 100 level debug syslog-level error }
}

Log messages from /var/log/vpp/vpp.log
2023/01/31 08:11:53:331[0]: show interface
2023/01/31 08:12:08:100[0]: set int ip address eth0 192.168.1.0/30
2023/01/31 08:14:07:757[0]: ipsec
2023/01/31 08:14:10:946[0]: ipsec ?
2023/01/31 08:15:04:185[0]: create interface ?
2023/01/31 08:19:46:385[0]: create host-interface ?
2023/01/31 08:38:07:979[0]: set ip ?
2023/01/31 08:44:08:455[0]: set interface ip ?
2023/01/31 08:59:29:253[0]: show interface '

Thanks
Vinay





[vpp-dev] VPP 23.02 RC2 milestone tomorrow Wednesday 8 February 12:00 UTC

2023-02-07 Thread Andrew Yourtchenko
Hi all,

Just a kind reminder the RC2 milestone is tomorrow 12:00 UTC. After that on 
stable/2302 branch we will be accepting only the fixes to issues found by CSIT 
in preparation for the release. 

Thanks a lot!

--a /* your friendly 23.02 release manager */



Re: [vpp-dev] VPP logging does not logs API calls debug message

2023-02-06 Thread Tripathi, VinayX
Hi Steven,

'show log' shows the debug messages on the console, but it does not redirect them 
to the vpp.log file.
Only the VPP CLI commands that have been triggered for configuration appear in 
vpp.log.
Even the packet trace appears on the console, but not in vpp.log.

create ipip tunnel src 10.128.0.9 dst 10.192.0.9
set interface state ipip2 up
ipsec sa add 4 spi 4 crypto-alg aes-gcm-128 crypto-key 
31323334353637383930313233343536 salt 0x31323334
ipsec sa add 5 spi 5 crypto-alg aes-gcm-128 crypto-key 
31323334353637383930313233343536 salt 0x31323334
ipsec tunnel protect ipip2 sa-in 4 sa-out 5
set int ip address ipip2 10.128.0.9/30
ip route add 10.64.0.8/30 via ipip2
ip route add 10.192.0.8/30 via 192.168.11.3 eth1

ip route add 20.64.0.0/10 via 192.168.10.2 eth0

set int state eth0 up
set int state eth1 up

sh ipsec sa 02023/02/06 09:08:55:400: * End Startup Config *
2023/02/06 09:09:00:239[0]: 2023/02/06 09:09:01:575[0]: 2023/02/06 
09:09:02:139[0]: 2023/02/06 09:09:12:451[0]: trace add dpdk-input 20


/tmp/vpp_dut1.log






Thanks
Vinay

From: vpp-dev@lists.fd.io  On Behalf Of steven luong via 
lists.fd.io
Sent: Saturday, February 4, 2023 8:44 PM
To: vpp-dev@lists.fd.io
Cc: Ji, Kai 
Subject: Re: [vpp-dev] VPP logging does not logs API calls debug message

Did you try
vppctl show log

Steven

From: <vpp-dev@lists.fd.io> on behalf of "Tripathi, VinayX" <vinayx.tripa...@intel.com>
Reply-To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Date: Saturday, February 4, 2023 at 4:19 AM
To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Cc: "Ji, Kai" <kai...@intel.com>
Subject: Re: [vpp-dev] VPP logging does not logs API calls debug message

Hi Team,
Any suggestion would be highly appreciated.

Thanks
Vinay

From: Tripathi, VinayX
Sent: Friday, February 3, 2023 6:28 PM
To: 'vpp-dev@lists.fd.io'
Cc: Ji, Kai <kai...@intel.com>
Subject: VPP logging does not logs API calls debug message


Hi Team,

I have noticed that the VPP infra/plugin/node/driver related debug messages are not 
logged into /var/log/vpp/vpp.log.
Only the CLI commands triggered from the VPP console are logged. Please find the 
configuration used below.
Kindly suggest if I’m missing any configuration.

Using VPP version :- vpp v23.02-

unix {
   interactive
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /var/log/vpp/cli.sock
   cli-pager-buffer-limit 1
   # cli-listen localhost:5002
   #exec /root/vinaytrx/vpp/dpdk-pmd.bash
}

api-trace {
  on
}
logging {
   default-syslog-log-level debug
   default-log-level debug
  # class dpdk/cryptodev { rate-limit 100 level debug syslog-level error }
}

Log messages from /var/log/vpp/vpp.log
2023/01/31 08:11:53:331[0]: show interface
2023/01/31 08:12:08:100[0]: set int ip address eth0 192.168.1.0/30
2023/01/31 08:14:07:757[0]: ipsec
2023/01/31 08:14:10:946[0]: ipsec ?
2023/01/31 08:15:04:185[0]: create interface ?
2023/01/31 08:19:46:385[0]: create host-interface ?
2023/01/31 08:38:07:979[0]: set ip ?
2023/01/31 08:44:08:455[0]: set interface ip ?
2023/01/31 08:59:29:253[0]: show interface '

Thanks
Vinay





[vpp-dev] VPP 23.02 RC2 milestone Wednesday 8 February 2023 12:00 UTC

2023-02-06 Thread Andrew Yourtchenko
Hi all,

Just a kind reminder that RC2 milestone will happen in two days at noon UTC as 
per our release plan [0]; after that only the fixes to issues found in CSIT 
will be accepted, in preparation for the release.

[0] https://wiki.fd.io/view/Projects/vpp/Release_Plans/Release_Plan_23.02

--a /* your friendly 23.02 release manager */



Re: [vpp-dev] VPP logging does not logs API calls debug message

2023-02-04 Thread steven luong via lists.fd.io
Did you try
vppctl show log

Steven
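
If the goal is to land that output in a file, one workaround is to poll the log
over the API. This is only a sketch: it assumes vpp_papi is installed, the
default API socket path, and an example output path, and it naively rewrites
the whole in-memory log buffer on every pass:

#!/bin/env python
# Hedged sketch: periodically dump "show log" to a file via the binary API.
import fnmatch, os, time
from vpp_papi import VPPApiClient

jsonfiles = []
for root, dirnames, filenames in os.walk('/usr/share/vpp/api/'):
    for filename in fnmatch.filter(filenames, '*.api.json'):
        jsonfiles.append(os.path.join(root, filename))

vpp = VPPApiClient(apifiles=jsonfiles, server_address='/run/vpp/api.sock')
vpp.connect("log-poller")

with open('/tmp/vpp-show-log.txt', 'w') as out:
    while True:
        out.seek(0)
        out.truncate()
        out.write(vpp.api.cli_inband(cmd="show log").reply)
        out.flush()
        time.sleep(10)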

From:  on behalf of "Tripathi, VinayX" 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Saturday, February 4, 2023 at 4:19 AM
To: "vpp-dev@lists.fd.io" 
Cc: "Ji, Kai" 
Subject: Re: [vpp-dev] VPP logging does not logs API calls debug message

Hi Team,
Any suggestion would be highly appreciated.

Thanks
Vinay

From: Tripathi, VinayX
Sent: Friday, February 3, 2023 6:28 PM
To: 'vpp-dev@lists.fd.io' 
Cc: Ji, Kai 
Subject: VPP logging does not logs API calls debug message


Hi Team,

I have noticed that the VPP infra/plugin/node/driver related debug messages are not 
logged into /var/log/vpp/vpp.log.
Only the CLI commands triggered from the VPP console are logged. Please find the 
configuration used below.
Kindly suggest if I’m missing any configuration.

Using VPP version :- vpp v23.02-

unix {
   interactive
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /var/log/vpp/cli.sock
   cli-pager-buffer-limit 1
   # cli-listen localhost:5002
   #exec /root/vinaytrx/vpp/dpdk-pmd.bash
}

api-trace {
  on
}
logging {
   default-syslog-log-level debug
   default-log-level debug
  # class dpdk/cryptodev { rate-limit 100 level debug syslog-level error }
}

Log messages from /var/log/vpp/vpp.log
2023/01/31 08:11:53:331[0]: show interface
2023/01/31 08:12:08:100[0]: set int ip address eth0 192.168.1.0/30
2023/01/31 08:14:07:757[0]: ipsec
2023/01/31 08:14:10:946[0]: ipsec ?
2023/01/31 08:15:04:185[0]: create interface ?
2023/01/31 08:19:46:385[0]: create host-interface ?
2023/01/31 08:38:07:979[0]: set ip ?
2023/01/31 08:44:08:455[0]: set interface ip ?
2023/01/31 08:59:29:253[0]: show interface '

Thanks
Vinay





Re: [vpp-dev] VPP logging does not logs API calls debug message

2023-02-04 Thread Tripathi, VinayX
Hi Team,
Any suggestion would be highly appreciated.

Thanks
Vinay

From: Tripathi, VinayX
Sent: Friday, February 3, 2023 6:28 PM
To: 'vpp-dev@lists.fd.io' 
Cc: Ji, Kai 
Subject: VPP logging does not logs API calls debug message


Hi Team,

I have noticed that the VPP infra/plugin/node/driver related debug messages are not 
logged into /var/log/vpp/vpp.log.
Only the CLI commands triggered from the VPP console are logged. Please find the 
configuration used below.
Kindly suggest if I'm missing any configuration.

Using VPP version :- vpp v23.02-

unix {
   interactive
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /var/log/vpp/cli.sock
   cli-pager-buffer-limit 1
   # cli-listen localhost:5002
   #exec /root/vinaytrx/vpp/dpdk-pmd.bash
}

api-trace {
  on
}
logging {
   default-syslog-log-level debug
   default-log-level debug
  # class dpdk/cryptodev { rate-limit 100 level debug syslog-level error }
}

Log messages from /var/log/vpp/vpp.log
2023/01/31 08:11:53:331[0]: show interface
2023/01/31 08:12:08:100[0]: set int ip address eth0 192.168.1.0/30
2023/01/31 08:14:07:757[0]: ipsec
2023/01/31 08:14:10:946[0]: ipsec ?
2023/01/31 08:15:04:185[0]: create interface ?
2023/01/31 08:19:46:385[0]: create host-interface ?
2023/01/31 08:38:07:979[0]: set ip ?
2023/01/31 08:44:08:455[0]: set interface ip ?
2023/01/31 08:59:29:253[0]: show interface '

Thanks
Vinay




[vpp-dev] VPP logging does not logs API calls debug message

2023-02-03 Thread Tripathi, VinayX

Hi Team,

I have noticed that the VPP infra/plugin/node/driver related debug messages are not 
logged into /var/log/vpp/vpp.log.
Only the CLI commands triggered from the VPP console are logged. Please find the 
configuration used below.
Kindly suggest if I'm missing any configuration.

Using VPP version :- vpp v23.02-

unix {
   interactive
   log /var/log/vpp/vpp.log
   full-coredump
   cli-listen /var/log/vpp/cli.sock
   cli-pager-buffer-limit 1
   # cli-listen localhost:5002
   #exec /root/vinaytrx/vpp/dpdk-pmd.bash
}

api-trace {
  on
}
logging {
   default-syslog-log-level debug
   default-log-level debug
  # class dpdk/cryptodev { rate-limit 100 level debug syslog-level error }
}

Log messages from /var/log/vpp/vpp.log
2023/01/31 08:11:53:331[0]: show interface
2023/01/31 08:12:08:100[0]: set int ip address eth0 192.168.1.0/30
2023/01/31 08:14:07:757[0]: ipsec
2023/01/31 08:14:10:946[0]: ipsec ?
2023/01/31 08:15:04:185[0]: create interface ?
2023/01/31 08:19:46:385[0]: create host-interface ?
2023/01/31 08:38:07:979[0]: set ip ?
2023/01/31 08:44:08:455[0]: set interface ip ?
2023/01/31 08:59:29:253[0]: show interface '

Thanks
Vinay




Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-02-02 Thread Stanislav Zaikin
Hello Gennady,

Could you try again with this patch[0]?

[0] - https://gerrit.fd.io/r/c/vpp/+/38118

On Wed, 1 Feb 2023 at 19:38, Gennady Abramov  wrote:

> Hello Stanislav,
>
> Here is it!
>
> Thread 1 "vpp_main" received signal SIGABRT, Aborted.
> __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> 50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
> (gdb)
> (gdb) f 9
> #9  0x7fffaf404b6a in lcp_router_link_add (rl=0x5a38e0,
> ctx=0x7fffbb9add18) at
> /home/abramov/vpp-p3-lcp/src/plugins/linux-cp/lcp_router.c:423
> 423   if (vnet_create_sub_interface
> (lip->lip_host_sw_if_index, vlan, 18,
> (gdb) info locals
> lip = 0x7fffbb9abd70
> if_name = 0x0
> sub_phy_sw_if_index = 3
> sub_host_sw_if_index = 8
> vlan = 1914
> ns = 0x0
> if_namev = 0xf90b93202e1d7baf <error: Cannot access memory at address 0xf90b93202e1d7baf>
> lipi = 0
> up = 0
> vnm = 0x77f696e8 <vnet_main>
> (gdb) p lipi
> $1 = 0
> (gdb) p *lip
> $2 = {lip_host_sw_if_index = 1745050366, lip_phy_sw_if_index = 32767,
> lip_host_name = 0x43 <error: Cannot access memory at address 0x43>,
> lip_vif_index = 1,
>   lip_namespace = 0x50005 "", lip_host_type = (LCP_ITF_HOST_TUN |
> unknown: 4294967294), lip_phy_adjs = {adj_index = {0, 0}}, lip_flags =
> (unknown: 0),
>   lip_rewrite_len = 0 '\000', lip_create_ts = 0}
> (gdb)
> 
>
>

-- 
Best regards
Stanislav Zaikin




Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-02-01 Thread Gennady Abramov
Hello Stanislav,

Here it is!

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb)
(gdb) f 9
#9  0x7fffaf404b6a in lcp_router_link_add (rl=0x5a38e0, ctx=0x7fffbb9add18) 
at /home/abramov/vpp-p3-lcp/src/plugins/linux-cp/lcp_router.c:423
423   if (vnet_create_sub_interface (lip->lip_host_sw_if_index, 
vlan, 18,
(gdb) info locals
lip = 0x7fffbb9abd70
if_name = 0x0
sub_phy_sw_if_index = 3
sub_host_sw_if_index = 8
vlan = 1914
ns = 0x0
if_namev = 0xf90b93202e1d7baf <error: Cannot access memory at address 0xf90b93202e1d7baf>
lipi = 0
up = 0
vnm = 0x77f696e8 <vnet_main>
(gdb) p lipi
$1 = 0
(gdb) p *lip
$2 = {lip_host_sw_if_index = 1745050366, lip_phy_sw_if_index = 32767, 
lip_host_name = 0x43 <error: Cannot access memory at address 0x43>, 
lip_vif_index = 1,
lip_namespace = 0x50005 "", lip_host_type = (LCP_ITF_HOST_TUN | unknown: 
4294967294), lip_phy_adjs = {adj_index = {0, 0}}, lip_flags = (unknown: 0),
lip_rewrite_len = 0 '\000', lip_create_ts = 0}
(gdb)




Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-02-01 Thread Stanislav Zaikin
Hi Gennady,

Could you execute the following commands in gdb and show the output?
f 9
info locals
p lipi
p *lip

On Tue, 31 Jan 2023 at 11:16, Gennady Abramov  wrote:

> Hello Stanislav,
>
> The is-is itself is working, thank you!
> tn3# show isis neighbor
> Area myisis:
>   System Id   Interface   L  StateHoldtime SNPA
>  tn1 Ten0.1914   3  Up28   2020.2020.2020
>
> Unfortunately, both lcp lcp-auto-subint and lcp lcp-sync still looks
> broken. Note, I've applied your patches to 22.10 version as master branch
> was not stable enough; so if it is needed, I can also test on master.
> 1. LCP auto-subint:
> DBGvpp# set interface state TenGigabitEthernet1c/0/1 up
> DBGvpp# lcp lcp-auto-subint on
> DBGvpp# lcp lcp-
> lcp-auto-subint  lcp-sync
> DBGvpp# lcp lcp-sync on
> DBGvpp# lcp create 1 host-if Ten0
> DBGvpp# show lcp
> lcp default netns ''
> lcp lcp-auto-subint on
> lcp lcp-sync on
> lcp del-static-on-link-down off
> lcp del-dynamic-on-link-down off
> itf-pair: [0] TenGigabitEthernet1c/0/1 tap1 Ten0 1304 type tap
> DBGvpp#
> Then VPP crashes:
>
> Jan 31 10:05:57 tn3 vnet[1233293]:
> /home/abramov/vpp-p3-lcp/src/vnet/interface_funcs.h:60
> (vnet_get_sw_interface) assertion `! pool_is_free
> (vnm->interface_main.sw_interfaces, _e)' fails
> Jan 31 10:05:57 tn3 systemd-udevd[1233343]: ethtool: autonegotiation is
> unset or enabled, the speed and duplex are not writable.
> Jan 31 10:05:57 tn3 vnet[1233293]: received signal SIGABRT, PC
> 0x7f81e45b800b
> Jan 31 10:05:57 tn3 systemd-udevd[1233343]: Using default interface naming
> scheme 'v245'.
> Jan 31 10:05:57 tn3 vnet[1233293]: #0  0x7f81e4ab1c92
> unix_signal_handler + 0x1f2
> Jan 31 10:05:57 tn3 vnet[1233293]: #1  0x7f81e49af420 0x7f81e49af420
> Jan 31 10:05:57 tn3 vnet[1233293]: #2  0x7f81e45b800b gsignal + 0xcb
> Jan 31 10:05:57 tn3 vnet[1233293]: #3  0x7f81e4597859 abort + 0x12b
> Jan 31 10:05:57 tn3 vnet[1233293]: #4  0x004072f3 0x4072f3
> Jan 31 10:05:57 tn3 vnet[1233293]: #5  0x7f81e48e9109 debugger + 0x9
> Jan 31 10:05:57 tn3 vnet[1233293]: #6  0x7f81e48e8eca _clib_error +
> 0x2da
> Jan 31 10:05:57 tn3 vnet[1233293]: #7  0x7f81e4c94f68
> vnet_get_sw_interface + 0xa8
> Jan 31 10:05:57 tn3 vnet[1233293]: #8  0x7f81e4c94f9b
> vnet_get_sup_sw_interface + 0x1b
> Jan 31 10:05:57 tn3 vnet[1233293]: #9  0x7f81e4c9500b
> vnet_get_sup_hw_interface + 0x1b
> Jan 31 10:05:57 tn3 vnet[1233293]: #10 0x7f81e4c98bca
> vnet_create_sub_interface + 0x5a
> Jan 31 10:05:57 tn3 vnet[1233293]: #11 0x7f819ce8db6a
> lcp_router_link_add + 0x5ea
> Jan 31 10:05:57 tn3 vnet[1233293]: #12 0x7f819ce98fc3 nl_link_add +
> 0xd3
> Jan 31 10:05:57 tn3 vnet[1233293]: #13 0x7f819ce986a0
> nl_route_dispatch + 0xe0
> Jan 31 10:05:57 tn3 vnet[1233293]: #14 0x7f819cf29f52 0x7f819cf29f52
>
>
> Thread 1 "vpp_main" received signal SIGABRT, Aborted.
> __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> 50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
> (gdb) bt
> #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> #1  0x76b0c859 in __GI_abort () at abort.c:79
> #2  0x004072f3 in os_panic () at
> /home/abramov/vpp-p3-lcp/src/vpp/vnet/main.c:417
> #3  0x76e5e109 in debugger () at
> /home/abramov/vpp-p3-lcp/src/vppinfra/error.c:84
> #4  0x76e5deca in _clib_error (how_to_die=2, function_name=0x0,
> line_number=0, fmt=0x77cc8208 "%s:%d (%s) assertion `%s' fails") at
> /home/abramov/vpp-p3-lcp/src/vppinfra/error.c:143
> #5  0x7720af68 in vnet_get_sw_interface (vnm=0x77f696e8
> , sw_if_index=1650550633) at
> /home/abramov/vpp-p3-lcp/src/vnet/interface_funcs.h:60
> #6  0x7720af9b in vnet_get_sup_sw_interface (vnm=0x77f696e8
> , sw_if_index=1650550633) at
> /home/abramov/vpp-p3-lcp/src/vnet/interface_funcs.h:83
> #7  0x7720b00b in vnet_get_sup_hw_interface (vnm=0x77f696e8
> , sw_if_index=1650550633) at
> /home/abramov/vpp-p3-lcp/src/vnet/interface_funcs.h:94
> #8  0x7720ebca in vnet_create_sub_interface
> (sw_if_index=1650550633, id=1914, flags=18, inner_vlan_id=0,
> outer_vlan_id=1914, sub_sw_if_index=0x7fffac7ad980) at
> /home/abramov/vpp-p3-lcp/src/vnet/ethernet/interface.c:1063
> #9  0x7fffaf404b6a in lcp_router_link_add (rl=0x5a3450,
> ctx=0x7fffbb9b18c8) at
> /home/abramov/vpp-p3-lcp/src/plugins/linux-cp/lcp_router.c:423
> #10 0x7fffaf40ffc3 in nl_link_add (rl=0x5a3450, arg=0x7fffbb9b18c8) at
> /home/abramov/vpp-p3-lcp/src/plugins/linux-cp/lcp_nl.c:280
> #11 0x7fffaf40f6a0 in nl_route_dispatch (obj=0x5a3450,
> arg=0x7fffbb9b18c8) at
> /home/abramov/vpp-p3-lcp/src/plugins/linux-cp/lcp_nl.c:323
> #12 0x7fffaf4a0f52 in ?? () from /lib/x86_64-linux-gnu/libnl-3.so.200
> #13 0x7fffaf441990 in ?? () from
> /lib/x86_64-linux-gnu/libnl-route-3.so.200
> #14 0x7fffaf49db52 in nl_cache_parse () from
> 

Re: [vpp-dev] vpp+nginx #vpp-hoststack

2023-01-31 Thread first_semon
The version that I am using is as follows:




Re: [vpp-dev] VPP: LX2160a failed to start - dpaa2_dev_rx_queue_setup

2023-01-31 Thread Gennady Abramov
Hello,

Some findings from investigating this.

dpaa2_dev_rx_queue_setup() receives *mb_pool, which is an rte_mempool:

dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t rx_queue_id,
uint16_t nb_rx_desc,
unsigned int socket_id __rte_unused,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool)

rte_mempool has a union of a pool_data pointer and a uint64_t pool_id:
struct rte_mempool {
char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
RTE_STD_C11
union {
void *pool_data; /**< Ring or pool to store objects. */
uint64_t pool_id;    /**< External mempool identifier. */
};

DPDK assumes that pool_data is filled in:

build-root/build-vpp_debug-native/external/src-dpdk/drivers/mempool/dpaa2/dpaa2_hw_mempool.h
#define mempool_to_bpinfo(mp) ((struct dpaa2_bp_info *)(mp)->pool_data)
#define mempool_to_bpid(mp) ((mempool_to_bpinfo(mp))->bpid)

On the other hand, VPP treats that union field as a pool id (the buffer-pool index):
mp->pool_id = nmp->pool_id = bp->index;

So, when dpaa2_dev_rx_queue_setup() dereferences pool_data, VPP crashes.

In dpaa2_dev_rx_queue_setup():
if (!priv->bp_list || priv->bp_list->mp != mb_pool) {
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
ret = rte_dpaa2_bpid_info_init(mb_pool);
if (ret)
return ret;
}
bpid = mempool_to_bpid(mb_pool);
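
To make that mismatch concrete, here is a minimal standalone sketch (my illustration, not VPP or DPDK code) of the union aliasing hazard: one side stores a small integer id, the other reinterprets the same bytes as a pointer, which is why the mempool_to_bpid() lookup ends up dereferencing a bogus address:

#include <stdint.h>
#include <stdio.h>

struct fake_mempool
{
  union
  {
    void *pool_data;  /* what the dpaa2 PMD expects: pointer to bp info */
    uint64_t pool_id; /* what VPP stores: a small buffer-pool index */
  };
};

int
main (void)
{
  struct fake_mempool mp;
  mp.pool_id = 3; /* VPP's side: mp->pool_id = bp->index */
  /* The PMD's side: reinterpret the same bytes as a pointer; any
   * dereference of it (as mempool_to_bpinfo() does) is a crash. */
  printf ("pool_data now looks like %p\n", mp.pool_data);
  return 0;
}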

The NXP patch that fixes this issue looks really dirty and obfuscated, and it
breaks memory management overall: although dpaa2 initialization is fixed, VPP
then crashes while working with any type of interface, including PCIe igb NICs
connected to the board:
https://source.codeaurora.org/external/qoriq/qoriq-components/vpp/commit/?h=21.08-LSDK=d36e7d1e4fd75695a96b3f339700b83fa8018619

So it looks like we need to fix it ourselves.
Unfortunately, I'm still not very familiar with DPDK and VPP memory
management, or the interaction between them :-(

So, any advice on where I should dig in order to fix, for a start, that
union-field mismatch?




Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-31 Thread Gennady Abramov
Hello Stanislav,

IS-IS itself is working, thank you!
tn3# show isis neighbor
Area myisis:
System Id   Interface   L  State    Holdtime SNPA
tn1 Ten0.1914   3  Up    28   2020.2020.2020

Unfortunately, both lcp lcp-auto-subint and lcp lcp-sync still look broken.
Note that I've applied your patches to the 22.10 version, as the master branch
was not stable enough; if needed, I can also test on master.
1. LCP auto-subint:
DBGvpp# set interface state TenGigabitEthernet1c/0/1 up
DBGvpp# lcp lcp-auto-subint on
DBGvpp# lcp lcp-
lcp-auto-subint  lcp-sync
DBGvpp# lcp lcp-sync on
DBGvpp# lcp create 1 host-if Ten0
DBGvpp# show lcp
lcp default netns ''
lcp lcp-auto-subint on
lcp lcp-sync on
lcp del-static-on-link-down off
lcp del-dynamic-on-link-down off
itf-pair: [0] TenGigabitEthernet1c/0/1 tap1 Ten0 1304 type tap
DBGvpp#
Then VPP crashes:

Jan 31 10:05:57 tn3 vnet[1233293]: 
/home/abramov/vpp-p3-lcp/src/vnet/interface_funcs.h:60 (vnet_get_sw_interface) 
assertion `! pool_is_free (vnm->interface_main.sw_interfaces, _e)' fails
Jan 31 10:05:57 tn3 systemd-udevd[1233343]: ethtool: autonegotiation is unset 
or enabled, the speed and duplex are not writable.
Jan 31 10:05:57 tn3 vnet[1233293]: received signal SIGABRT, PC 0x7f81e45b800b
Jan 31 10:05:57 tn3 systemd-udevd[1233343]: Using default interface naming 
scheme 'v245'.
Jan 31 10:05:57 tn3 vnet[1233293]: #0  0x7f81e4ab1c92 unix_signal_handler + 
0x1f2
Jan 31 10:05:57 tn3 vnet[1233293]: #1  0x7f81e49af420 0x7f81e49af420
Jan 31 10:05:57 tn3 vnet[1233293]: #2  0x7f81e45b800b gsignal + 0xcb
Jan 31 10:05:57 tn3 vnet[1233293]: #3  0x7f81e4597859 abort + 0x12b
Jan 31 10:05:57 tn3 vnet[1233293]: #4  0x004072f3 0x4072f3
Jan 31 10:05:57 tn3 vnet[1233293]: #5  0x7f81e48e9109 debugger + 0x9
Jan 31 10:05:57 tn3 vnet[1233293]: #6  0x7f81e48e8eca _clib_error + 0x2da
Jan 31 10:05:57 tn3 vnet[1233293]: #7  0x7f81e4c94f68 vnet_get_sw_interface 
+ 0xa8
Jan 31 10:05:57 tn3 vnet[1233293]: #8  0x7f81e4c94f9b 
vnet_get_sup_sw_interface + 0x1b
Jan 31 10:05:57 tn3 vnet[1233293]: #9  0x7f81e4c9500b 
vnet_get_sup_hw_interface + 0x1b
Jan 31 10:05:57 tn3 vnet[1233293]: #10 0x7f81e4c98bca 
vnet_create_sub_interface + 0x5a
Jan 31 10:05:57 tn3 vnet[1233293]: #11 0x7f819ce8db6a lcp_router_link_add + 
0x5ea
Jan 31 10:05:57 tn3 vnet[1233293]: #12 0x7f819ce98fc3 nl_link_add + 0xd3
Jan 31 10:05:57 tn3 vnet[1233293]: #13 0x7f819ce986a0 nl_route_dispatch + 
0xe0
Jan 31 10:05:57 tn3 vnet[1233293]: #14 0x7f819cf29f52 0x7f819cf29f52

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x76b0c859 in __GI_abort () at abort.c:79
#2  0x004072f3 in os_panic () at 
/home/abramov/vpp-p3-lcp/src/vpp/vnet/main.c:417
#3  0x76e5e109 in debugger () at 
/home/abramov/vpp-p3-lcp/src/vppinfra/error.c:84
#4  0x76e5deca in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x77cc8208 "%s:%d (%s) assertion `%s' fails") at 
/home/abramov/vpp-p3-lcp/src/vppinfra/error.c:143
#5  0x7720af68 in vnet_get_sw_interface (vnm=0x77f696e8 
, sw_if_index=1650550633) at 
/home/abramov/vpp-p3-lcp/src/vnet/interface_funcs.h:60
#6  0x7720af9b in vnet_get_sup_sw_interface (vnm=0x77f696e8 
, sw_if_index=1650550633) at 
/home/abramov/vpp-p3-lcp/src/vnet/interface_funcs.h:83
#7  0x7720b00b in vnet_get_sup_hw_interface (vnm=0x77f696e8 
, sw_if_index=1650550633) at 
/home/abramov/vpp-p3-lcp/src/vnet/interface_funcs.h:94
#8  0x7720ebca in vnet_create_sub_interface (sw_if_index=1650550633, 
id=1914, flags=18, inner_vlan_id=0, outer_vlan_id=1914, 
sub_sw_if_index=0x7fffac7ad980) at 
/home/abramov/vpp-p3-lcp/src/vnet/ethernet/interface.c:1063
#9  0x7fffaf404b6a in lcp_router_link_add (rl=0x5a3450, ctx=0x7fffbb9b18c8) 
at /home/abramov/vpp-p3-lcp/src/plugins/linux-cp/lcp_router.c:423
#10 0x7fffaf40ffc3 in nl_link_add (rl=0x5a3450, arg=0x7fffbb9b18c8) at 
/home/abramov/vpp-p3-lcp/src/plugins/linux-cp/lcp_nl.c:280
#11 0x7fffaf40f6a0 in nl_route_dispatch (obj=0x5a3450, arg=0x7fffbb9b18c8) 
at /home/abramov/vpp-p3-lcp/src/plugins/linux-cp/lcp_nl.c:323
#12 0x7fffaf4a0f52 in ?? () from /lib/x86_64-linux-gnu/libnl-3.so.200
#13 0x7fffaf441990 in ?? () from /lib/x86_64-linux-gnu/libnl-route-3.so.200
#14 0x7fffaf49db52 in nl_cache_parse () from 
/lib/x86_64-linux-gnu/libnl-3.so.200
#15 0x7fffaf4a2984 in nl_msg_parse () from 
/lib/x86_64-linux-gnu/libnl-3.so.200
#16 0x7fffaf40c4c4 in nl_route_process_msgs () at 
/home/abramov/vpp-p3-lcp/src/plugins/linux-cp/lcp_nl.c:344
#17 0x7fffaf40b721 in nl_route_process (vm=0x7fffb6ae8740, 
node=0x7fffb8794380, frame=0x0) at 

Re: [vpp-dev] vpp+nginx #vpp-hoststack

2023-01-30 Thread first_semon
OK, I will also try out the patch. The topology is like this:
the attachment is the configuration of the reverse proxy with nginx/VPP.


nginx.conf
Description: Binary data




Re: [vpp-dev] vpp+nginx #vpp-hoststack

2023-01-30 Thread Florin Coras
Hi, 

Could you provide a simplified description of your topology and a bare-bones
nginx config? We could try to repro this in the hs-test infra we’ve been
developing recently. See here [1].

Also, could you try out this patch [2], which I’ve been toying with recently,
to see if it improves anything?

Regards,
Florin

[1] https://git.fd.io/vpp/tree/extras/hs-test
[2] https://gerrit.fd.io/r/c/vpp/+/38080

> On Jan 30, 2023, at 5:55 PM, first_se...@163.com wrote:
> 
> I use tool of wrk like this :wrk -c 20 -t 10 -d 40 http://172.30.4.23:80 
> 
> 





Re: [vpp-dev] vpp+nginx #vpp-hoststack

2023-01-30 Thread first_semon
I use the wrk tool like this: wrk -c 20 -t 10 -d 40 http://172.30.4.23:80




Re: [vpp-dev] vpp+nginx #vpp-hoststack

2023-01-30 Thread first_semon
Yes, I use a fixed IP, but the destination ports are different. For example:
bk, bk1 and bk2 are all 192.168.171.123, with destination ports 8080, 8081 and
8082. When I use the wrk tool, there may be 60,000-100,000+ open sessions.

Also, I have some logic in nginx around the mirror module; it is not for testing!




Re: [vpp-dev] vpp+nginx #vpp-hoststack

2023-01-30 Thread Florin Coras
Hi, 

I’m guessing you’re running out of ports on connections from nginx/vpp to the
actual server, since you’re using fixed IPs and a fixed destination port? Check
how many sessions you have opened with “show session”.
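
A back-of-the-envelope sketch of that ceiling (illustrative numbers, assuming a single fixed source IP): with the destination IP, destination port and protocol also fixed, the only free dimension of the 5-tuple is the 16-bit source port:

#include <stdio.h>

int
main (void)
{
  unsigned port_space = 65536; /* 16-bit source port */
  unsigned reserved = 1024;    /* low/reserved ports typically unusable */
  /* roughly 64k concurrent sessions per (src-ip, dst-ip, dst-port, proto);
   * the 60,000-100,000+ wrk sessions reported in this thread are at or
   * beyond that limit */
  printf ("~%u usable source ports\n", port_space - reserved);
  return 0;
}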

Out of curiosity, what are you using mirroring for? Testing? 

Regards,
Florin

> On Jan 30, 2023, at 1:05 AM, first_se...@163.com wrote:
> 
> when the reverse proxy not config mirror this issue is not exitsed.but when 
> reverse proxy of nginx use the mirror module ,it is  occured and the 
> configure like this
> 
> 
> 
> 





Re: [vpp-dev] vpp+nginx #vpp-hoststack

2023-01-30 Thread first_semon
When the reverse proxy is not configured with mirroring, this issue does not
exist. But when the nginx reverse proxy uses the mirror module, it occurs, and
the configuration is as follows:




Re: [vpp-dev] vpp+dpdk

2023-01-30 Thread first_semon
I solved this issue




[vpp-dev] vpp+nginx #vpp-hoststack

2023-01-30 Thread first_semon
When I test a web server behind a reverse proxy running on the VPP host stack
with wrk, VPP's session layer tells me:
Jan 30 15:17:57 localhost vnet[8165]: session_mq_connect_one:196: connect returned: no lcl port
What should I do to solve this issue? I don't know what to configure to make
socket ports recycle faster. Please help me.




Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-27 Thread Stanislav Zaikin
I really hoped that there was a better and shorter solution, but I couldn't
find one.
You can try this ugly patch, with a (lot of) copy-paste from
lip_punt_node :)

On Thu, 26 Jan 2023 at 14:14,  wrote:

> Hi Stanislav,
>
> Situation is better now, but still only half of the problem solved :-)
> OSI IS-IS packets passed from network to tap, but not passed from tap to
> network.
> These are on TAP interface, where:78 is VPP-based router, :7a is non-VPP
> peer, both directions are seen:
> 13:03:22.911887 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500:
> LLC, dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI
> NLPID IS-IS (0x83): p2p IIH, src-id ..0001, length 1497
> 13:03:23.433773 3c:ec:ef:5f:77:8f > 09:00:2b:00:00:05, 802.3, length 1500:
> LLC, dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI
> NLPID IS-IS (0x83): p2p IIH, src-id ..0001, length 1497
> ,
> These are on opposite side of link, (Linux IS-IS router without VPP), only
> outgoing packets are seen:
> 13:08:54.796588 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500:
> LLC, dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI
> NLPID IS-IS (0x83): p2p IIH, src-id ..0001, length 1497
> 13:08:57.662629 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500:
> LLC, dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI
> NLPID IS-IS (0x83): p2p IIH, src-id ..0001, length 1497
>
>
> Also, it looks like lcp auto-subint is broken ; VPP aborts on ip link add
>  on TAP device, instead of creating subif. I'll provide back
> trace later on
>
> Jan 26 12:57:04 tn3 vnet[1133419]: unix_signal_handler:191: received
> signal SIGWINCH, PC 0x7fdd59a34f41
> Jan 26 12:57:11 tn3 vnet[1133419]: received signal SIGWINCH, PC
> 0x7fdd59a34f41
> Jan 26 12:57:11 tn3 vnet[1133419]: #0  0x7fdd59a95c92
> unix_signal_handler + 0x1f2
> Jan 26 12:57:11 tn3 vnet[1133419]: #1  0x7fdd59993420 0x7fdd59993420
> Jan 26 12:57:11 tn3 vnet[1133419]: #2  0x7fdd5a5b8f00
> virtio_refill_vring_split + 0x60
> Jan 26 12:57:11 tn3 vnet[1133419]: #3  0x7fdd5a5b7f52
> virtio_device_input_inline + 0x2f2
> Jan 26 12:57:11 tn3 vnet[1133419]: #4  0x7fdd5a5b7acb
> virtio_input_node_fn_skx + 0x19b
> Jan 26 12:57:11 tn3 vnet[1133419]: #5  0x7fdd59a3515d dispatch_node +
> 0x33d
> Jan 26 12:57:11 tn3 vnet[1133419]: #6  0x7fdd59a30c72
> vlib_main_or_worker_loop + 0x632
> Jan 26 12:57:11 tn3 vnet[1133419]: #7  0x7fdd59a3277a vlib_main_loop +
> 0x1a
> Jan 26 12:57:11 tn3 vnet[1133419]: #8  0x7fdd59a3229a vlib_main + 0x60a
> Jan 26 12:57:11 tn3 vnet[1133419]: #9  0x7fdd59a94a14 thread0 + 0x44
> Jan 26 12:57:11 tn3 vnet[1133419]: #10 0x7fdd598e43d8 0x7fdd598e43d8
> 
>
>

-- 
Best regards
Stanislav Zaikin
diff --git a/src/plugins/linux-cp/lcp_node.c b/src/plugins/linux-cp/lcp_node.c
index b00049884..c119b8bd7 100644
--- a/src/plugins/linux-cp/lcp_node.c
+++ b/src/plugins/linux-cp/lcp_node.c
@@ -15,6 +15,7 @@
  * limitations under the License.
  */
 
+#include "vnet/osi/osi.h"
 #include 
 #include 
 
@@ -935,6 +936,147 @@ VNET_FEATURE_INIT (lcp_arp_host_arp_feat, static) = {
   .runs_before = VNET_FEATURES ("arp-reply"),
 };
 
+typedef struct l2_punt_trace_t_
+{
+  u8 direction;
+  u32 phy_sw_if_index;
+  u32 host_sw_if_index;
+} l2_punt_trace_t;
+
+static u8 *
+format_l2_punt_trace (u8 *s, va_list *args)
+{
+  CLIB_UNUSED (vlib_main_t * vm) = va_arg (*args, vlib_main_t *);
+  CLIB_UNUSED (vlib_node_t * node) = va_arg (*args, vlib_node_t *);
+  l2_punt_trace_t *t = va_arg (*args, l2_punt_trace_t *);
+
+  if (t->direction)
+{
+  s = format (s, "l2-punt: %u -> %u", t->host_sw_if_index,
+		  t->phy_sw_if_index);
+}
+  else
+{
+  s = format (s, "l2-punt: %u -> %u", t->phy_sw_if_index,
+		  t->host_sw_if_index);
+}
+
+  return s;
+}
+
+VLIB_NODE_FN (l2_punt_node)
+(vlib_main_t *vm, vlib_node_runtime_t *node, vlib_frame_t *frame)
+{
+  u32 n_left_from, *from, *to_next, n_left_to_next;
+  lip_punt_next_t next_index;
+
+  next_index = node->cached_next_index;
+  n_left_from = frame->n_vectors;
+  from = vlib_frame_vector_args (frame);
+
+  while (n_left_from > 0)
+{
+  vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);
+
+  while (n_left_from > 0 && n_left_to_next > 0)
+	{
+	  vlib_buffer_t *b0;
+	  const lcp_itf_pair_t *lip0 = NULL;
+	  u32 next0 = ~0;
+	  u32 bi0, lipi0;
+	  u32 sw_if_index0;
+	  u8 direction = 0;
+	  u8 len0;
+
+	  bi0 = to_next[0] = from[0];
+
+	  from += 1;
+	  to_next += 1;
+	  n_left_from -= 1;
+	  n_left_to_next -= 1;
+	  next0 = LIP_PUNT_NEXT_DROP;
+
+	  b0 = vlib_get_buffer (vm, bi0);
+
+	  sw_if_index0 = vnet_buffer (b0)->sw_if_index[VLIB_RX];
+	  lipi0 = lcp_itf_pair_find_by_phy (sw_if_index0);
+	  if (lipi0 == INDEX_INVALID)
+	{
+	  lipi0 = lcp_itf_pair_find_by_host (sw_if_index0);
+	  if (lipi0 == INDEX_INVALID)
+		goto trace0;
+
+	  direction = 1;
+	}
+
+	

Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-26 Thread agv100
Hi Stanislav,

The situation is better now, but still only half of the problem is solved :-)
OSI IS-IS packets are passed from the network to the tap, but not from the tap
to the network.
These are on the TAP interface, where :78 is the VPP-based router and :7a is
the non-VPP peer; both directions are seen:
13:03:22.911887 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0001, length 1497
13:03:23.433773 3c:ec:ef:5f:77:8f > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0001, length 1497
These are on the opposite side of the link (a Linux IS-IS router without VPP);
only outgoing packets are seen:
13:08:54.796588 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0001, length 1497
13:08:57.662629 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0001, length 1497

Also, it looks like lcp auto-subint is broken; VPP aborts on ip link add  on
the TAP device, instead of creating a subif. I'll provide a backtrace later on.

Jan 26 12:57:04 tn3 vnet[1133419]: unix_signal_handler:191: received signal 
SIGWINCH, PC 0x7fdd59a34f41
Jan 26 12:57:11 tn3 vnet[1133419]: received signal SIGWINCH, PC 0x7fdd59a34f41
Jan 26 12:57:11 tn3 vnet[1133419]: #0  0x7fdd59a95c92 unix_signal_handler + 
0x1f2
Jan 26 12:57:11 tn3 vnet[1133419]: #1  0x7fdd59993420 0x7fdd59993420
Jan 26 12:57:11 tn3 vnet[1133419]: #2  0x7fdd5a5b8f00 
virtio_refill_vring_split + 0x60
Jan 26 12:57:11 tn3 vnet[1133419]: #3  0x7fdd5a5b7f52 
virtio_device_input_inline + 0x2f2
Jan 26 12:57:11 tn3 vnet[1133419]: #4  0x7fdd5a5b7acb 
virtio_input_node_fn_skx + 0x19b
Jan 26 12:57:11 tn3 vnet[1133419]: #5  0x7fdd59a3515d dispatch_node + 0x33d
Jan 26 12:57:11 tn3 vnet[1133419]: #6  0x7fdd59a30c72 
vlib_main_or_worker_loop + 0x632
Jan 26 12:57:11 tn3 vnet[1133419]: #7  0x7fdd59a3277a vlib_main_loop + 0x1a
Jan 26 12:57:11 tn3 vnet[1133419]: #8  0x7fdd59a3229a vlib_main + 0x60a
Jan 26 12:57:11 tn3 vnet[1133419]: #9  0x7fdd59a94a14 thread0 + 0x44
Jan 26 12:57:11 tn3 vnet[1133419]: #10 0x7fdd598e43d8 0x7fdd598e43d8




Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-25 Thread Stanislav Zaikin
Could you check again with the following patch? The previous one didn't work
because the linker couldn't find the node's symbol (apparently, the linux-cp
plugin is divided into two parts: a library and a plugin).

I replayed a pcap containing some OSI frames and got this trace:

00:00:18:025096: virtio-input
  virtio: hw_if_index 1 next-index 4 vring 0 len 1514
hdr: flags 0x00 gso_type 0x00 hdr_len 0 gso_size 0 csum_start 0
csum_offset 0 num_buffers 1
00:00:18:025161: ethernet-input
  frame: flags 0x1, hw-if-index 1, sw-if-index 1
  0x05dc: c2:03:29:a9:00:00 -> 01:80:c2:00:00:15
00:00:18:025187: llc-input
  LLC osi_layer5 -> osi_layer5
00:00:18:025198: *osi-input*
  OSI isis
00:00:18:025214: *linux-cp-punt*
  lip-punt: 1 -> 2
00:00:18:025236: tap1-output
  tap1 flags 0x00180005
  0x05dc: c2:03:29:a9:00:00 -> 01:80:c2:00:00:15
00:00:18:025255: tap1-tx
buffer 0x9f947: current data 0, length 1514, buffer-pool 0, ref-count
1, trace handle 0x1
l2-hdr-offset 0 l3-hdr-offset 14
  hdr-sz 0 l2-hdr-offset 0 l3-hdr-offset 14 l4-hdr-offset 0 l4-hdr-sz 0
  0x05dc: c2:03:29:a9:00:00 -> 01:80:c2:00:00:15

diff --git a/src/plugins/linux-cp/lcp_interface.c
b/src/plugins/linux-cp/lcp_interface.c
index eef06ecfa..81ec6f9ec 100644
--- a/src/plugins/linux-cp/lcp_interface.c
+++ b/src/plugins/linux-cp/lcp_interface.c
@@ -38,6 +38,7 @@
 #include 
 #include 
 #include 
+#include 

 vlib_log_class_t lcp_itf_pair_logger;

@@ -1206,6 +1207,10 @@ lcp_interface_init (vlib_main_t *vm)

   lcp_itf_pair_logger = vlib_log_register_class ("linux-cp", "itf");

+  vlib_node_t *lip_punt_node = vlib_get_node_by_name (vm, (u8 *) "linux-cp-punt");
+  if (lip_punt_node)
+    osi_register_input_protocol (OSI_PROTOCOL_isis, lip_punt_node->index);
+
   return NULL;
 }
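
In other words (my reading of the patch, for context): it registers the existing linux-cp-punt node as the OSI input handler for IS-IS, so frames that osi-input previously dropped as "unknown osi protocol" are instead punted to the paired TAP interface; that is the linux-cp-punt step visible in the trace above.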

On Wed, 25 Jan 2023 at 16:11,  wrote:

> Hi Stanislav!
>
> Here is it!
>
> 00:31:48:504910: dpdk-input
>   TenGigabitEthernet1c/0/1 rx queue 0
>   buffer 0x9ad69: current data 0, length 1518, buffer-pool 0, ref-count 1,
> trace handle 0xb
>   ext-hdr-valid
>   PKT MBUF: port 0, nb_segs 1, pkt_len 1518
> buf_len 2176, data_len 1518, ol_flags 0x180, data_off 128, phys_addr
> 0x188b5ac0
> packet_type 0x1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> Packet Offload Flags
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>   PKT_RX_IP_CKSUM_NONE (0x0090) no IP cksum of RX pkt.
>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
>   PKT_RX_L4_CKSUM_NONE (0x0108) no L4 cksum of RX pkt.
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   0x05dc: 3c:ec:ef:5f:78:7a -> 09:00:2b:00:00:05 802.1q vlan 1914
> 00:31:48:504913: ethernet-input
>   frame: flags 0x3, hw-if-index 1, sw-if-index 1
>   0x05dc: 3c:ec:ef:5f:78:7a -> 09:00:2b:00:00:05 802.1q vlan 1914
> 00:31:48:504917: llc-input
>   LLC osi_layer5 -> osi_layer5
> 00:31:48:504918: osi-input
>   OSI isis
> 00:31:48:504919: error-drop
>   rx:TenGigabitEthernet1c/0/1.1914
> 00:31:48:504920: drop
>   osi-input: unknown osi protocol
> 
>
>

-- 
Best regards
Stanislav Zaikin




Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-25 Thread agv100
Hi Stanislav!

Here it is!

00:31:48:504910: dpdk-input
TenGigabitEthernet1c/0/1 rx queue 0
buffer 0x9ad69: current data 0, length 1518, buffer-pool 0, ref-count 1, trace 
handle 0xb
ext-hdr-valid
PKT MBUF: port 0, nb_segs 1, pkt_len 1518
buf_len 2176, data_len 1518, ol_flags 0x180, data_off 128, phys_addr 0x188b5ac0
packet_type 0x1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Offload Flags
PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
PKT_RX_IP_CKSUM_NONE (0x0090) no IP cksum of RX pkt.
PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
PKT_RX_L4_CKSUM_NONE (0x0108) no L4 cksum of RX pkt.
Packet Types
RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
0x05dc: 3c:ec:ef:5f:78:7a -> 09:00:2b:00:00:05 802.1q vlan 1914
00:31:48:504913: ethernet-input
frame: flags 0x3, hw-if-index 1, sw-if-index 1
0x05dc: 3c:ec:ef:5f:78:7a -> 09:00:2b:00:00:05 802.1q vlan 1914
00:31:48:504917: llc-input
LLC osi_layer5 -> osi_layer5
00:31:48:504918: osi-input
OSI isis
00:31:48:504919: error-drop
rx:TenGigabitEthernet1c/0/1.1914
00:31:48:504920: drop
osi-input: unknown osi protocol




Re: [vpp-dev] VPP crashes because of API segment exhaustion

2023-01-25 Thread Alexander Chernavin via lists.fd.io
Hello Florin,

> 
> Agreed that it looks like vl_api_clnt_process sleeps, probably because it
> hit a queue size of 0, but memclnt_queue_callback or the timeout, albeit
> 20s is a lot, should wake it up.

It doesn't look like vl_api_clnt_process would have woken up later. Firstly, 
because QUEUE_SIGNAL_EVENT was already signaled and vm->queue_signal_pending 
was set. And memclnt_queue_callback() is only triggered if 
vm->queue_signal_pending is unset. Thus, no new calls of 
memclnt_queue_callback() would have happened while vm->queue_signal_pending was 
set. Secondly, the timer id that vl_api_clnt_process holds belongs to another 
process node. Even if the timer was valid, the other process node would have 
been triggered by it.
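
A minimal sketch of that suppression logic, paraphrased from the behavior described above (not verbatim VPP source; the types and helper names here are stand-ins):

/* stand-in for the relevant part of vlib_main_t */
typedef struct { volatile int queue_signal_pending; } vm_sketch_t;

/* stand-in for vlib_process_signal_event (vm, node, QUEUE_SIGNAL_EVENT, 0) */
static void
signal_api_rx_process (vm_sketch_t *vm)
{
  (void) vm; /* would wake vl_api_clnt_process via a process event */
}

static void
memclnt_queue_callback_sketch (vm_sketch_t *vm)
{
  /* Once a signal is pending, the callback is a no-op, so a consumer
   * process that never runs again is also never re-signaled. */
  if (vm->queue_signal_pending)
    return;
  vm->queue_signal_pending = 1; /* cleared only when the process node runs */
  signal_api_rx_process (vm);
}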

> 
> So, given that QUEUE_SIGNAL_EVENT is set, the only thing that comes to
> mind is that maybe somehow vlib_process_signal_event context gets
> corrupted. Could you run a debug image and see if anything asserts? Is
> vlib_process_signal_event called by chance from a worker?

It's problematic to run a debug version of VPP on the affected instances.

There are no signs of vlib_process_signal_event() being called from a worker
thread. If we look at memclnt_queue_callback(), it is called only from the main
thread.

Regards,
Alexander




Re: [vpp-dev] VPP crashes because of API segment exhaustion

2023-01-24 Thread Florin Coras
Hi Alexander, 

Quick reply. 

Nice bug report! Agreed that it looks like vl_api_clnt_process sleeps, probably 
because it hit a queue size of 0, but memclnt_queue_callback or the timeout, 
albeit 20s is a lot, should wake it up. 

So, given that QUEUE_SIGNAL_EVENT is set, the only thing that comes to mind is 
that maybe somehow vlib_process_signal_event context gets corrupted. Could you 
run a debug image and see if anything asserts? Is vlib_process_signal_event 
called by chance from a worker?

Regards,
Florin

> On Jan 24, 2023, at 7:59 AM, Alexander Chernavin via lists.fd.io 
>  wrote:
> 
> Hello all,
> 
> We are experiencing VPP crashes that occur a few days after the startup 
> because of API segment exhaustion. Increasing API segment size to 256MB 
> didn't stop the crashes from occurring.
> 
> Can you please take a look at the description below and tell us if you have 
> seen similar issues or have any ideas what the cause may be?
> 
> Given:
> VPP 22.10
> 2 worker threads
> API segment size is 256MB
> ~893k IPv4 routes and ~160k IPv6 routes added
> 
> Backtrace:
>> [..]
>> #32660 0x55b02f606896 in os_panic () at 
>> /home/jenkins/tnsr-pkgs/work/vpp/src/vpp/vnet/main.c:414
>> #32661 0x7fce3c0ec740 in clib_mem_heap_alloc_inline (heap=0x0, 
>> size=, align=8, 
>> os_out_of_memory_on_failure=1) at 
>> /home/jenkins/tnsr-pkgs/work/vpp/src/vppinfra/mem_dlmalloc.c:613
>> #32662 clib_mem_alloc (size=)
>> at /home/jenkins/tnsr-pkgs/work/vpp/src/vppinfra/mem_dlmalloc.c:628
>> #32663 0x7fce3dc4ee6f in vl_msg_api_alloc_internal (vlib_rp=0x130026000, 
>> nbytes=69, pool=0, 
>> may_return_null=0) at 
>> /home/jenkins/tnsr-pkgs/work/vpp/src/vlibmemory/memory_shared.c:179
>> #32664 0x7fce3dc592cd in vl_api_rpc_call_main_thread_inline (force_rpc=0 
>> '\000', 
>> fp=, data=, data_length=)
>> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlibmemory/memclnt_api.c:617
>> #32665 vl_api_rpc_call_main_thread (fp=0x7fce3c74de70 , 
>> data=0x7fcc372bdc00 "& \001$ ", data_length=28)
>> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlibmemory/memclnt_api.c:641
>> #32666 0x7fce3cc7fe2d in icmp6_neighbor_solicitation_or_advertisement 
>> (vm=0x7fccc0864000, 
>> frame=0x7fcccd7d2d40, is_solicitation=1, node=)
>> at /home/jenkins/tnsr-pkgs/work/vpp/src/vnet/ip6-nd/ip6_nd.c:163
>> #32667 icmp6_neighbor_solicitation (vm=0x7fccc0864000, node=0x7fccc09e3380, 
>> frame=0x7fcccd7d2d40)
>> at /home/jenkins/tnsr-pkgs/work/vpp/src/vnet/ip6-nd/ip6_nd.c:322
>> #32668 0x7fce3c1a2fe0 in dispatch_node (vm=0x7fccc0864000, 
>> node=0x7fce3dc74836, 
>> type=VLIB_NODE_TYPE_INTERNAL, dispatch_state=VLIB_NODE_STATE_POLLING, 
>> frame=0x7fcccd7d2d40, 
>> last_time_stamp=4014159654296481) at 
>> /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/main.c:961
>> #32669 dispatch_pending_node (vm=0x7fccc0864000, pending_frame_index=7, 
>> last_time_stamp=4014159654296481) at 
>> /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/main.c:1120
>> #32670 vlib_main_or_worker_loop (vm=0x7fccc0864000, is_main=0)
>> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/main.c:1589
>> #32671 vlib_worker_loop (vm=vm@entry=0x7fccc0864000)
>> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/main.c:1723
>> #32672 0x7fce3c1f581a in vlib_worker_thread_fn (arg=0x7fccbdb11b40)
>> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/threads.c:1579
>> #32673 0x7fce3c1f02c1 in vlib_worker_thread_bootstrap_fn 
>> (arg=0x7fccbdb11b40)
>> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/threads.c:418
>> #32674 0x7fce3be3db43 in start_thread (arg=) at 
>> ./nptl/pthread_create.c:442
>> #32675 0x7fce3becfa00 in clone3 () at 
>> ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
> 
> According to the backtrace, an IPv6 neighbor is being learned. Since the 
> packet was received on a worker thread, the neighbor information is being 
> passed to the main thread by making an RPC call (that works via the API). For 
> this, an API message for RPC call is being allocated from the API segment (as 
> а client). But the allocation is failing because of no available memory.
> 
> If inspect the API rings after crashing, it can be seen that they are all 
> filled with VL_API_RPC_CALL messages. Also, it can be seen that there are a 
> lot of pending RPC requests (vm->pending_rpc_requests has ~3.3M items). Thus, 
> API segment exhaustion occurs because of a huge number of pending RPC 
> messages.
> 
> RPC messages are processed in a process node called api-rx-from-ring 
> (function is called vl_api_clnt_process). And process nodes are handled in 
> the main thread only.
> 
> First hypothesis is that the main loop of the main thread pauses for such a 
> long time that a huge number of pending RPC messages are accumulated by the 
> worker threads (that keep running). But this doesn't seem to be confirmed if 
> inspect vm->loop_interval_start of all threads after crashing. 
> vm->loop_interval_start of the worker threads would have been greater 

Re: [vpp-dev] VPP Linux-CP/Linux-NL : MPLS?

2023-01-24 Thread Pim van Pelt via lists.fd.io
Hoi,

MPLS is not supported in Linux CP. It is a regularly requested feature, but
not quite as straightforward. Contributions welcome!

groet,
Pim

On Tue, Jan 24, 2023 at 5:16 PM  wrote:

> Hello,
>
> I'm trying to populate MPLS FIB via Linux-CP plugin.
> MPLS records are created via FRR and populated to Linux Kernel routing
> table (I use default ns). Below one can see "push" operation and "swap"
> operation.
> mpls table 0 was created in vpp by "mpls table add 0" command.
> mpls was enabled on all the interfaces, both towards media and taps.
> Still, do not see anything in FIB. Should MPLS tables sync work, or may be,
> I forgot setup something in VPP?
>
> root@tn3:/home/abramov# ip -f mpls route show
> 40050 as to 41000 via inet6 fd00:200::2 dev Ten0.1914 proto static
> root@tn3:/home/abramov# ip -6 route show | grep 4
> fd00:100::4 nhid 209  encap mpls  4 via fd00:200::2 dev Ten0.1914
> proto static metric 20 pref medium
> root@tn3:/home/abramov# vppctl
>
> vpp# show mpls fib 0 40050
> MPLS-VRF:0, fib_index:1 locks:[interface:4, CLI:1, ]
> vpp# show ip6 fib
> ipv6-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ]
> epoch:0 flags:none locks:[adjacency:1, default-route:1, lcp-rt:1, ]
> ::/0
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:6 buckets:1 uRPF:5 to:[0:0]]
> [0] [@0]: dpo-drop ip6
> fd00:100::4/128
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:17 buckets:1 uRPF:17 to:[0:0]]
> [0] [@5]: ipv6 via fd00:200::2 TenGigabitEthernet1c/0/1.1914: mtu:9000
> next:5 flags:[] 2af08d2cf6163cecef5f778f8100077a86dd
> fd00:200::/64
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:15 buckets:1 uRPF:14 to:[0:0]]
> [0] [@4]: ipv6-glean: [src:fd00:200::/64]
> TenGigabitEthernet1c/0/1.1914: mtu:9000 next:2 flags:[]
> 3cecef5f778f8100077a86dd
> fd00:200::1/128
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:16 buckets:1 uRPF:15
> to:[10:848]]
> [0] [@20]: dpo-receive: fd00:200::1 on TenGigabitEthernet1c/0/1.1914
> fd00:200::2/128
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:18 buckets:1 uRPF:12 to:[0:0]]
> [0] [@5]: ipv6 via fd00:200::2 TenGigabitEthernet1c/0/1.1914: mtu:9000
> next:5 flags:[] 2af08d2cf6163cecef5f778f8100077a86dd
> fe80::/10
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:7 buckets:1 uRPF:6 to:[8:544]]
> [0] [@14]: ip6-link-local
> vpp# show mpls fib
> MPLS-VRF:0, fib_index:1 locks:[interface:4, CLI:1, ]
> ip4-explicit-null:neos/21 fib:1 index:30 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
> path-list:[43] locks:2 flags:exclusive, uPRF-list:31 len:0 itfs:[]
>   path:[53] pl-index:43 mpls weight=1 pref=0 exclusive:
> oper-flags:resolved, cfg-flags:exclusive,
> [@0]: dst-address,unicast lookup in interface's mpls table
>
>  forwarding:   mpls-neos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:33 buckets:1 uRPF:31 to:[0:0]]
> [0] [@4]: dst-address,unicast lookup in interface's mpls table
> ip4-explicit-null:eos/21 fib:1 index:29 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
> path-list:[42] locks:2 flags:exclusive, uPRF-list:30 len:0 itfs:[]
>   path:[52] pl-index:42 mpls weight=1 pref=0 exclusive:
> oper-flags:resolved, cfg-flags:exclusive,
> [@0]: dst-address,unicast lookup in interface's ip4 table
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:32 buckets:1 uRPF:30 to:[0:0]]
> [0] [@3]: dst-address,unicast lookup in interface's ip4 table
> router-alert:neos/21 fib:1 index:27 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
> path-list:[40] locks:2 flags:exclusive, uPRF-list:28 len:0 itfs:[]
>   path:[50] pl-index:40 mpls weight=1 pref=0 exclusive:
> oper-flags:resolved, cfg-flags:exclusive,
> [@0]: dpo-punt
>
>  forwarding:   mpls-neos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:30 buckets:1 uRPF:28 to:[0:0]]
> [0] [@2]: dpo-punt
> router-alert:eos/21 fib:1 index:28 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
> path-list:[41] locks:2 flags:exclusive, uPRF-list:29 len:0 itfs:[]
>   path:[51] pl-index:41 mpls weight=1 pref=0 exclusive:
> oper-flags:resolved, cfg-flags:exclusive,
> [@0]: dpo-punt
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:31 buckets:1 uRPF:29 to:[0:0]]
> [0] [@2]: dpo-punt
> ipv6-explicit-null:neos/21 fib:1 index:32 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
> path-list:[45] locks:2 flags:exclusive, uPRF-list:33 len:0 itfs:[]
>   path:[55] pl-index:45 mpls weight=1 pref=0 exclusive:
> oper-flags:resolved, cfg-flags:exclusive,
> [@0]: dst-address,unicast lookup in interface's mpls table
>
>  forwarding:   

Re: [vpp-dev] VPP Linux-CP/Linux-NL : MPLS?

2023-01-24 Thread Matthew Smith via lists.fd.io
No, this is not currently supported. MPLS configuration is not synched from
the host system using linux-nl. IP routes/addresses/neighbors and some
interface attributes (admin state, MTU, MAC address) are synched.

-Matt


On Tue, Jan 24, 2023 at 10:16 AM  wrote:

> Hello,
>
> I'm trying to populate MPLS FIB via Linux-CP plugin.
> MPLS records are created via FRR and populated to Linux Kernel routing
> table (I use default ns). Below one can see "push" operation and "swap"
> operation.
> mpls table 0 was created in vpp by "mpls table add 0" command.
> mpls was enabled on all the interfaces, both towards media and taps.
> Still, do not see anything in FIB. Should MPLS tables sync work, or may be,
> I forgot setup something in VPP?
>
> root@tn3:/home/abramov# ip -f mpls route show
> 40050 as to 41000 via inet6 fd00:200::2 dev Ten0.1914 proto static
> root@tn3:/home/abramov# ip -6 route show | grep 4
> fd00:100::4 nhid 209  encap mpls  4 via fd00:200::2 dev Ten0.1914
> proto static metric 20 pref medium
> root@tn3:/home/abramov# vppctl
>
> vpp# show mpls fib 0 40050
> MPLS-VRF:0, fib_index:1 locks:[interface:4, CLI:1, ]
> vpp# show ip6 fib
> ipv6-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ]
> epoch:0 flags:none locks:[adjacency:1, default-route:1, lcp-rt:1, ]
> ::/0
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:6 buckets:1 uRPF:5 to:[0:0]]
> [0] [@0]: dpo-drop ip6
> fd00:100::4/128
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:17 buckets:1 uRPF:17 to:[0:0]]
> [0] [@5]: ipv6 via fd00:200::2 TenGigabitEthernet1c/0/1.1914: mtu:9000
> next:5 flags:[] 2af08d2cf6163cecef5f778f8100077a86dd
> fd00:200::/64
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:15 buckets:1 uRPF:14 to:[0:0]]
> [0] [@4]: ipv6-glean: [src:fd00:200::/64]
> TenGigabitEthernet1c/0/1.1914: mtu:9000 next:2 flags:[]
> 3cecef5f778f8100077a86dd
> fd00:200::1/128
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:16 buckets:1 uRPF:15
> to:[10:848]]
> [0] [@20]: dpo-receive: fd00:200::1 on TenGigabitEthernet1c/0/1.1914
> fd00:200::2/128
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:18 buckets:1 uRPF:12 to:[0:0]]
> [0] [@5]: ipv6 via fd00:200::2 TenGigabitEthernet1c/0/1.1914: mtu:9000
> next:5 flags:[] 2af08d2cf6163cecef5f778f8100077a86dd
> fe80::/10
>   unicast-ip6-chain
>   [@0]: dpo-load-balance: [proto:ip6 index:7 buckets:1 uRPF:6 to:[8:544]]
> [0] [@14]: ip6-link-local
> vpp# show mpls fib
> MPLS-VRF:0, fib_index:1 locks:[interface:4, CLI:1, ]
> ip4-explicit-null:neos/21 fib:1 index:30 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
> path-list:[43] locks:2 flags:exclusive, uPRF-list:31 len:0 itfs:[]
>   path:[53] pl-index:43 mpls weight=1 pref=0 exclusive:
> oper-flags:resolved, cfg-flags:exclusive,
> [@0]: dst-address,unicast lookup in interface's mpls table
>
>  forwarding:   mpls-neos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:33 buckets:1 uRPF:31 to:[0:0]]
> [0] [@4]: dst-address,unicast lookup in interface's mpls table
> ip4-explicit-null:eos/21 fib:1 index:29 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
> path-list:[42] locks:2 flags:exclusive, uPRF-list:30 len:0 itfs:[]
>   path:[52] pl-index:42 mpls weight=1 pref=0 exclusive:
> oper-flags:resolved, cfg-flags:exclusive,
> [@0]: dst-address,unicast lookup in interface's ip4 table
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:32 buckets:1 uRPF:30 to:[0:0]]
> [0] [@3]: dst-address,unicast lookup in interface's ip4 table
> router-alert:neos/21 fib:1 index:27 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
> path-list:[40] locks:2 flags:exclusive, uPRF-list:28 len:0 itfs:[]
>   path:[50] pl-index:40 mpls weight=1 pref=0 exclusive:
> oper-flags:resolved, cfg-flags:exclusive,
> [@0]: dpo-punt
>
>  forwarding:   mpls-neos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:30 buckets:1 uRPF:28 to:[0:0]]
> [0] [@2]: dpo-punt
> router-alert:eos/21 fib:1 index:28 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
> path-list:[41] locks:2 flags:exclusive, uPRF-list:29 len:0 itfs:[]
>   path:[51] pl-index:41 mpls weight=1 pref=0 exclusive:
> oper-flags:resolved, cfg-flags:exclusive,
> [@0]: dpo-punt
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:31 buckets:1 uRPF:29 to:[0:0]]
> [0] [@2]: dpo-punt
> ipv6-explicit-null:neos/21 fib:1 index:32 locks:2
>   special refs:1 entry-flags:exclusive,
> src-flags:added,contributing,active,
> path-list:[45] locks:2 flags:exclusive, uPRF-list:33 len:0 itfs:[]
>   path:[55] pl-index:45 mpls weight=1 pref=0 exclusive:
> oper-flags:resolved, cfg-flags:exclusive,
> [@0]: 

[vpp-dev] VPP Linux-CP/Linux-NL : MPLS?

2023-01-24 Thread agv100
Hello,

I'm trying to populate the MPLS FIB via the Linux-CP plugin.
MPLS records are created via FRR and populated into the Linux kernel routing
table (I use the default ns). Below one can see a "push" operation and a "swap"
operation. MPLS table 0 was created in VPP by the "mpls table add 0" command,
and MPLS was enabled on all the interfaces, both towards the media and the
taps. Still, I do not see anything in the FIB. Should MPLS table sync work, or
maybe I forgot to set up something in VPP?

root@tn3:/home/abramov# ip -f mpls route show
40050 as to 41000 via inet6 fd00:200::2 dev Ten0.1914 proto static
root@tn3:/home/abramov# ip -6 route show | grep 4
fd00:100::4 nhid 209  encap mpls  4 via fd00:200::2 dev Ten0.1914 proto 
static metric 20 pref medium
root@tn3:/home/abramov# vppctl

vpp# show mpls fib 0 40050
MPLS-VRF:0, fib_index:1 locks:[interface:4, CLI:1, ]
vpp# show ip6 fib
ipv6-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[adjacency:1, default-route:1, lcp-rt:1, ]
::/0
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:6 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd00:100::4/128
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:17 buckets:1 uRPF:17 to:[0:0]]
[0] [@5]: ipv6 via fd00:200::2 TenGigabitEthernet1c/0/1.1914: mtu:9000 next:5 
flags:[] 2af08d2cf6163cecef5f778f8100077a86dd
fd00:200::/64
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:15 buckets:1 uRPF:14 to:[0:0]]
[0] [@4]: ipv6-glean: [src:fd00:200::/64] TenGigabitEthernet1c/0/1.1914: 
mtu:9000 next:2 flags:[] 3cecef5f778f8100077a86dd
fd00:200::1/128
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:16 buckets:1 uRPF:15 to:[10:848]]
[0] [@20]: dpo-receive: fd00:200::1 on TenGigabitEthernet1c/0/1.1914
fd00:200::2/128
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:18 buckets:1 uRPF:12 to:[0:0]]
[0] [@5]: ipv6 via fd00:200::2 TenGigabitEthernet1c/0/1.1914: mtu:9000 next:5 
flags:[] 2af08d2cf6163cecef5f778f8100077a86dd
fe80::/10
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:7 buckets:1 uRPF:6 to:[8:544]]
[0] [@14]: ip6-link-local
vpp# show mpls fib
MPLS-VRF:0, fib_index:1 locks:[interface:4, CLI:1, ]
ip4-explicit-null:neos/21 fib:1 index:30 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[43] locks:2 flags:exclusive, uPRF-list:31 len:0 itfs:[]
path:[53] pl-index:43 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dst-address,unicast lookup in interface's mpls table

forwarding:   mpls-neos-chain
[@0]: dpo-load-balance: [proto:mpls index:33 buckets:1 uRPF:31 to:[0:0]]
[0] [@4]: dst-address,unicast lookup in interface's mpls table
ip4-explicit-null:eos/21 fib:1 index:29 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[42] locks:2 flags:exclusive, uPRF-list:30 len:0 itfs:[]
path:[52] pl-index:42 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dst-address,unicast lookup in interface's ip4 table

forwarding:   mpls-eos-chain
[@0]: dpo-load-balance: [proto:mpls index:32 buckets:1 uRPF:30 to:[0:0]]
[0] [@3]: dst-address,unicast lookup in interface's ip4 table
router-alert:neos/21 fib:1 index:27 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[40] locks:2 flags:exclusive, uPRF-list:28 len:0 itfs:[]
path:[50] pl-index:40 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dpo-punt

forwarding:   mpls-neos-chain
[@0]: dpo-load-balance: [proto:mpls index:30 buckets:1 uRPF:28 to:[0:0]]
[0] [@2]: dpo-punt
router-alert:eos/21 fib:1 index:28 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[41] locks:2 flags:exclusive, uPRF-list:29 len:0 itfs:[]
path:[51] pl-index:41 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dpo-punt

forwarding:   mpls-eos-chain
[@0]: dpo-load-balance: [proto:mpls index:31 buckets:1 uRPF:29 to:[0:0]]
[0] [@2]: dpo-punt
ipv6-explicit-null:neos/21 fib:1 index:32 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[45] locks:2 flags:exclusive, uPRF-list:33 len:0 itfs:[]
path:[55] pl-index:45 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dst-address,unicast lookup in interface's mpls table

forwarding:   mpls-neos-chain
[@0]: dpo-load-balance: [proto:mpls index:35 buckets:1 uRPF:33 to:[0:0]]
[0] [@4]: dst-address,unicast lookup in interface's mpls table
ipv6-explicit-null:eos/21 fib:1 index:31 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[44] locks:2 flags:exclusive, uPRF-list:32 len:0 itfs:[]
path:[54] pl-index:44 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dst-address,unicast lookup in interface's ip6 table

forwarding:   mpls-eos-chain
[@0]: dpo-load-balance: [proto:mpls 

[vpp-dev] VPP crashes because of API segment exhaustion

2023-01-24 Thread Alexander Chernavin via lists.fd.io
Hello all,

We are experiencing VPP crashes that occur a few days after the startup
because of API segment exhaustion. Increasing API segment size to 256MB
didn't stop the crashes from occurring.

Can you please take a look at the description below and tell us if you have
seen similar issues or have any ideas what the cause may be?

Given:

   - VPP 22.10
   - 2 worker threads
   - API segment size is 256MB
   - ~893k IPv4 routes and ~160k IPv6 routes added


Backtrace:

> [..]
> #32660 0x55b02f606896 in os_panic () at
> /home/jenkins/tnsr-pkgs/work/vpp/src/vpp/vnet/main.c:414
> #32661 0x7fce3c0ec740 in clib_mem_heap_alloc_inline (heap=0x0,
> size=, align=8,
> os_out_of_memory_on_failure=1) at
> /home/jenkins/tnsr-pkgs/work/vpp/src/vppinfra/mem_dlmalloc.c:613
> #32662 clib_mem_alloc (size=)
> at /home/jenkins/tnsr-pkgs/work/vpp/src/vppinfra/mem_dlmalloc.c:628
> #32663 0x7fce3dc4ee6f in vl_msg_api_alloc_internal
> (vlib_rp=0x130026000, nbytes=69, pool=0,
> may_return_null=0) at
> /home/jenkins/tnsr-pkgs/work/vpp/src/vlibmemory/memory_shared.c:179
> #32664 0x7fce3dc592cd in vl_api_rpc_call_main_thread_inline
> (force_rpc=0 '\000',
> fp=, data=, data_length=)
> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlibmemory/memclnt_api.c:617
> #32665 vl_api_rpc_call_main_thread (fp=0x7fce3c74de70 ,
> data=0x7fcc372bdc00 "& \001$ ", data_length=28)
> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlibmemory/memclnt_api.c:641
> #32666 0x7fce3cc7fe2d in icmp6_neighbor_solicitation_or_advertisement
> (vm=0x7fccc0864000,
> frame=0x7fcccd7d2d40, is_solicitation=1, node=)
> at /home/jenkins/tnsr-pkgs/work/vpp/src/vnet/ip6-nd/ip6_nd.c:163
> #32667 icmp6_neighbor_solicitation (vm=0x7fccc0864000,
> node=0x7fccc09e3380, frame=0x7fcccd7d2d40)
> at /home/jenkins/tnsr-pkgs/work/vpp/src/vnet/ip6-nd/ip6_nd.c:322
> #32668 0x7fce3c1a2fe0 in dispatch_node (vm=0x7fccc0864000,
> node=0x7fce3dc74836,
> type=VLIB_NODE_TYPE_INTERNAL, dispatch_state=VLIB_NODE_STATE_POLLING,
> frame=0x7fcccd7d2d40,
> last_time_stamp=4014159654296481) at
> /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/main.c:961
> #32669 dispatch_pending_node (vm=0x7fccc0864000, pending_frame_index=7,
> last_time_stamp=4014159654296481) at
> /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/main.c:1120
> #32670 vlib_main_or_worker_loop (vm=0x7fccc0864000, is_main=0)
> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/main.c:1589
> #32671 vlib_worker_loop (vm=vm@entry=0x7fccc0864000)
> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/main.c:1723
> #32672 0x7fce3c1f581a in vlib_worker_thread_fn (arg=0x7fccbdb11b40)
> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/threads.c:1579
> #32673 0x7fce3c1f02c1 in vlib_worker_thread_bootstrap_fn
> (arg=0x7fccbdb11b40)
> at /home/jenkins/tnsr-pkgs/work/vpp/src/vlib/threads.c:418
> #32674 0x7fce3be3db43 in start_thread (arg=) at
> ./nptl/pthread_create.c:442
> #32675 0x7fce3becfa00 in clone3 () at
> ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
>

According to the backtrace, an IPv6 neighbor is being learned. Since the
packet was received on a worker thread, the neighbor information is being
passed to the main thread by making an RPC call (that works via the API).
For this, an API message for RPC call is being allocated from the API
segment (as a client). But the allocation is failing because of no
available memory.

If we inspect the API rings after crashing, it can be seen that they are all
filled with VL_API_RPC_CALL messages. Also, it can be seen that there are a
lot of pending RPC requests (vm->pending_rpc_requests has ~3.3M items).
Thus, API segment exhaustion occurs because of a huge number of pending RPC
messages.

RPC messages are processed in a process node called api-rx-from-ring
(function is called vl_api_clnt_process). And process nodes are handled in
the main thread only.
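
As a rough sketch of the path involved (a hypothetical caller; only vl_api_rpc_call_main_thread and its argument list are taken from the backtrace above):

/* Executed later, on the main thread, by vl_api_clnt_process. */
static void
learn_neighbor_cb (void *data)
{
  /* update neighbor state that only the main thread may modify */
}

/* Called on a worker thread, e.g. from the ip6-nd node. Each call
 * allocates a VL_API_RPC_CALL message from the shared API segment and
 * appends it to vm->pending_rpc_requests; if vl_api_clnt_process never
 * drains that vector, these allocations eventually exhaust the segment. */
static void
on_neighbor_seen (u8 *nbr_data, u32 data_length)
{
  vl_api_rpc_call_main_thread (learn_neighbor_cb, nbr_data, data_length);
}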

The first hypothesis is that the main loop of the main thread pauses for such a
long time that a huge number of pending RPC messages are accumulated by the
worker threads (which keep running). But this doesn't seem to be confirmed if
we inspect vm->loop_interval_start of all threads after crashing:
vm->loop_interval_start of the worker threads would have been greater
than vm->loop_interval_start of the main thread.

> (gdb) p vlib_global_main.vlib_mains[0]->loop_interval_start
> $117 = 197662.55595008997
> (gdb) p vlib_global_main.vlib_mains[1]->loop_interval_start
> $119 = 197659.82887979984
> (gdb) p vlib_global_main.vlib_mains[2]->loop_interval_start
> $121 = 197659.93944517447
>

The second hypothesis is that pending RPC messages stop being processed
completely at some point and keep accumulating while memory permits. This
seems to be confirmed if we inspect the process node after crashing. It can be
seen that vm->main_loop_count is much bigger than the process node's
main_loop_count_last_dispatch (the difference is ~50M iterations). Although,
according to the flags, the node is waiting for

Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-24 Thread Stanislav Zaikin
That's not surprising. Could you also show me a trace? (trace add
dpdk-input 10, and then show trace with an IS-IS packet)

On Tue, 24 Jan 2023 at 16:25,  wrote:

> Hi Stanislav,
>
> Unfortunately, your patch didn't help. VPP builds, but IS-IS packets still
> cannot be passed between the CP and the wire.
>
> Furthermore, it looks like LCP lcp-auto-subint feature was broken:
>
> root@tn3:/home/abramov/vpp# vppctl
>     _______    _        _   _____  ___
>  __/ __/ _ \  (_)__    | | / / _ \/ _ \
>  _/ _// // / / / _ \   | |/ / ___/ ___/
>  /_/ /____(_)_/\___/   |___/_/  /_/
>
> vpp#
> vpp#
> vpp#
> vpp# show interface
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> TenGigabitEthernet1c/0/1          1     down         9000/0/0/0
> local0                            0     down          0/0/0/0
> vpp# set interface state TenGigabitEthernet1c/0/1 up
> vpp# lcp create 1 host-if Ten0
> vpp# show interface
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> TenGigabitEthernet1c/0/1          1      up          9000/0/0/0     rx packets      2451
>                                                                    rx bytes      228627
>                                                                    tx packets         7
>                                                                    tx bytes         746
>                                                                    drops           2451
>                                                                    ip4                9
>                                                                    ip6                2
> local0                            0     down          0/0/0/0
> tap1                              2      up          9000/0/0/0     rx packets         7
>                                                                    rx bytes         746
>                                                                    ip6                7
> vpp# quit
> root@tn3:/home/abramov/vpp# ip link set Ten0 up
> root@tn3:/home/abramov/vpp# vppctl
>     _______    _        _   _____  ___
>  __/ __/ _ \  (_)__    | | / / _ \/ _ \
>  _/ _// // / / / _ \   | |/ / ___/ ___/
>  /_/ /____(_)_/\___/   |___/_/  /_/
>
> vpp# lcp lcp
> lcp-auto-subint  lcp-sync
> vpp# lcp lcp-auto-subint on
> vpp# lcp lcp-sync on
> vpp# show lcp
> lcp default netns ''
> lcp lcp-auto-subint on
> lcp lcp-sync on
> lcp del-static-on-link-down off
> lcp del-dynamic-on-link-down off
> itf-pair: [0] TenGigabitEthernet1c/0/1 tap1 Ten0 1248 type tap
> vpp# quit
> root@tn3:/home/abramov/vpp# ip link add Ten0.1914 link Ten0 type vlan id
> 1914
> root@tn3:/home/abramov/vpp# ip link set Ten0.1914 up
> root@tn3:/home/abramov/vpp# vppctl
>     _______    _        _   _____  ___
>  __/ __/ _ \  (_)__    | | / / _ \/ _ \
>  _/ _// // / / / _ \   | |/ / ___/ ___/
>  /_/ /____(_)_/\___/   |___/_/  /_/
>
> vpp# show int
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> TenGigabitEthernet1c/0/1          1      up          9000/0/0/0     rx packets     16501
>                                                                    rx bytes     1519839
>                                                                    tx packets         7
>                                                                    tx bytes         746
>                                                                    drops          16501
>                                                                    ip4               39
>                                                                    ip6                8
> local0                            0     down          0/0/0/0
> tap1                              2      up          9000/0/0/0     rx packets        17
>                                                                    rx bytes       19710
>                                                                    drops             10
>                                                                    ip6                7
>
>
> vpp# show node counters
>    Count            Node                   Reason                              Severity
>       10         lldp-input        lldp packets received on disabled i          error
>      516         dpdk-input        no error                                     error
>       21       arp-disabled        ARP Disabled                                 error
>       74          osi-input        unknown osi protocol                         error
>        5         snap-input        unknown oui/snap protocol                    error
>       11     ethernet-input        unknown ethernet type                        error
>    74127     ethernet-input        unknown vlan                                 error
>      145     ethernet-input        subinterface down                            error
> vpp#
> 
>
>

-- 
Best regards
Stanislav Zaikin


Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-24 Thread agv100
Hi Stanislav,

Unfortunately, your patch didn't help. VPP builds, but IS-IS packets still 
cannot be passed between the CP and the wire.

Furthermore, it looks like the LCP lcp-auto-subint feature is broken:

root@tn3:/home/abramov/vpp# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/

vpp#
vpp#
vpp#
vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
TenGigabitEthernet1c/0/1          1     down         9000/0/0/0
local0                            0     down          0/0/0/0
vpp# set interface state TenGigabitEthernet1c/0/1 up
vpp# lcp create 1 host-if Ten0
vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
TenGigabitEthernet1c/0/1          1      up          9000/0/0/0     rx packets      2451
                                                                   rx bytes      228627
                                                                   tx packets         7
                                                                   tx bytes         746
                                                                   drops           2451
                                                                   ip4                9
                                                                   ip6                2
local0                            0     down          0/0/0/0
tap1                              2      up          9000/0/0/0     rx packets         7
                                                                   rx bytes         746
                                                                   ip6                7
vpp# quit
root@tn3:/home/abramov/vpp# ip link set Ten0 up
root@tn3:/home/abramov/vpp# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/

vpp# lcp lcp
lcp-auto-subint  lcp-sync
vpp# lcp lcp-auto-subint on
vpp# lcp lcp-sync on
vpp# show lcp
lcp default netns ''
lcp lcp-auto-subint on
lcp lcp-sync on
lcp del-static-on-link-down off
lcp del-dynamic-on-link-down off
itf-pair: [0] TenGigabitEthernet1c/0/1 tap1 Ten0 1248 type tap
vpp# quit
root@tn3:/home/abramov/vpp# ip link add Ten0.1914 link Ten0 type vlan id 1914
root@tn3:/home/abramov/vpp# ip link set Ten0.1914 up
root@tn3:/home/abramov/vpp# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/

vpp# show int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
TenGigabitEthernet1c/0/1          1      up          9000/0/0/0     rx packets     16501
                                                                   rx bytes     1519839
                                                                   tx packets         7
                                                                   tx bytes         746
                                                                   drops          16501
                                                                   ip4               39
                                                                   ip6                8
local0                            0     down          0/0/0/0
tap1                              2      up          9000/0/0/0     rx packets        17
                                                                   rx bytes       19710
                                                                   drops             10
                                                                   ip6                7

vpp# show node counters
   Count            Node                   Reason                              Severity
      10         lldp-input        lldp packets received on disabled i          error
     516         dpdk-input        no error                                     error
      21       arp-disabled        ARP Disabled                                 error
      74          osi-input        unknown osi protocol                         error
       5         snap-input        unknown oui/snap protocol                    error
      11     ethernet-input        unknown ethernet type                        error
   74127     ethernet-input        unknown vlan                                 error
     145     ethernet-input        subinterface down                            error
vpp#




Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-23 Thread Stanislav Zaikin
Hello folks,

In old router plugin it was done with the following snippet:
```
#include <vnet/osi/osi.h>
...
osi_register_input_protocol (OSI_PROTOCOL_isis, im->tx_node_index);
```

So I'd try registering this protocol for `lip_punt_node`
(`lcp_router_init` seems like a suitable place). Something like this:
diff --git a/src/plugins/linux-cp/lcp.h b/src/plugins/linux-cp/lcp.h
index 4ddaa3898..51f6b1b3e 100644
--- a/src/plugins/linux-cp/lcp.h
+++ b/src/plugins/linux-cp/lcp.h
@@ -52,6 +52,8 @@ u8 lcp_get_del_static_on_link_down (void);
 void lcp_set_del_dynamic_on_link_down (u8 is_del);
 u8 lcp_get_del_dynamic_on_link_down (void);

+extern vlib_node_registration_t lip_punt_node;
+
 #endif

 /*
diff --git a/src/plugins/linux-cp/lcp_router.c
b/src/plugins/linux-cp/lcp_router.c
index 3534b597e..1f2c126d9 100644
--- a/src/plugins/linux-cp/lcp_router.c
+++ b/src/plugins/linux-cp/lcp_router.c
@@ -37,6 +37,7 @@
 #include 

 #include "lcp_nl_evpn.h"
+#include "vnet/osi/osi.h"

 typedef struct lcp_router_table_t_
 {
@@ -1416,6 +1417,8 @@ lcp_router_init (vlib_main_t *vm)
   lcp_rt_fib_src_dynamic = fib_source_allocate (
 "lcp-rt-dynamic", FIB_SOURCE_PRIORITY_HI + 1, FIB_SOURCE_BH_API);

+  osi_register_input_protocol (OSI_PROTOCOL_isis, lip_punt_node.index);
+
   return (NULL);
 }

It compiles, but I haven't checked anything since I'm too lazy to configure
ISIS.
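
If the registration takes effect, one hedged way to verify it (using the
counters already seen earlier in this thread):

```
vpp# clear errors
... let the peer send a few IS-IS IIHs ...
vpp# show node counters
```

The osi-input "unknown osi protocol" counter should stop increasing, and the
IIHs should instead show up on the host-side TAP.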

On Mon, 23 Jan 2023 at 18:14, Pim van Pelt via lists.fd.io  wrote:

> Hoi,
>
> I would suggest not matching (only) MAC but (foremost) the ethertype, and
> then punting those packets into the TAP, take a look at VLIB_REGISTER_NODE
> (lip_punt_node) in src/plugins/linux-cp/lcp_node.c, contributions are
> welcome.
>
> groet,
> Pim
>
> On Mon, Jan 23, 2023 at 6:06 PM  wrote:
>
>> Hoi Pim,
>>
>> As for distinguishing IS-IS packets, I think that should not be really
>> difficult; it's just all the packets with specific DST MACs:
>> 09:00:2b:00:00:05, 09:00:2b:00:00:14, 09:00:2b:00:00:15.
>> It's hard to imagine a situation where they would need to be processed by
>> the dataplane.
>>
>>
>>
>
> --
> Pim van Pelt 
> PBVP1-RIPE - http://www.ipng.nl/
>
> 
>
>

-- 
Best regards
Stanislav Zaikin




Re: [vpp-dev] VPP 22.10 : VCL not accepting UDP connections

2023-01-23 Thread Florin Coras
Hi Chinmaya, 

Given that you're getting packets in the listener's rx fifo, I suspect the 
request to make it a connected listener didn't work. We've had a number of 
changes in the vcl/session layer, so it's hard to say what exactly might be 
affecting your app.

I just did an iperf udp test on master and everything seems fine. Maybe try 
that [1] with your current vpp version to make sure everything is okay, or 
try running the udp iperf make test [2].

Regards,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf#UDP_testing
[2] https://git.fd.io/vpp/tree/test/asf/test_vcl.py#n978
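
As a hypothetical starting point for [2] (test filter names vary between
trees), the suite can be narrowed with the usual make-test filter:

```
$ make test TEST=vcl    # runs the VCL tests, including the UDP iperf case
```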

> On Jan 23, 2023, at 10:04 AM, Chinmaya Aggarwal  
> wrote:
> 
> Hi,
> 
> We are using connected socket (setting VPPCOM_ATTR_SET_CONNECTED) but still 
> facing this issue. Has something changed between VPP v21.06 and the new 
> release for the connected udp socket?
> 
> Thanks and Regards,
> Chinmaya Agarwal. 
> 
> 





Re: [vpp-dev] VPP 22.10 : VCL not accepting UDP connections

2023-01-23 Thread Chinmaya Aggarwal
Hi,

We are using connected socket (setting VPPCOM_ATTR_SET_CONNECTED) but still 
facing this issue. Has something changed between VPP v21.06 and the new release 
for the connected udp socket?

Thanks and Regards,
Chinmaya Agarwal.




Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-23 Thread Pim van Pelt via lists.fd.io
Hoi,

I would suggest not matching (only) MAC but (foremost) the ethertype, and
then punting those packets into the TAP, take a look at VLIB_REGISTER_NODE
(lip_punt_node) in src/plugins/linux-cp/lcp_node.c, contributions are
welcome.

groet,
Pim

On Mon, Jan 23, 2023 at 6:06 PM  wrote:

> Hoi Pim,
>
> As for distinguishing IS-IS packets, I think that should not be really
> difficult; it's just all the packets with specific DST MACs:
> 09:00:2b:00:00:05, 09:00:2b:00:00:14, 09:00:2b:00:00:15.
> It's hard to imagine a situation where they would need to be processed by
> the dataplane.
> 
>
>

-- 
Pim van Pelt 
PBVP1-RIPE - http://www.ipng.nl/




Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-23 Thread agv100
Hoi Pim,

As for distinguishing IS-IS packets, I think that should not be really 
difficult; it's just all the packets with specific DST MACs: 
09:00:2b:00:00:05, 09:00:2b:00:00:14, 09:00:2b:00:00:15.
It's hard to imagine a situation where they would need to be processed by 
the dataplane.




Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-23 Thread Pim van Pelt via lists.fd.io
Hoi,

Linux CP supports ARP, IPv4 and IPv6. IS-IS uses its own ethertype, as
do other protocols (like LLDP for example). Those will not be punted into
the TAP by the plugin (and it's difficult to uniquely identify the ethernet
frames that should be punted as compared to being handled entirely in the
dataplane).

groet,
Pim


On Mon, Jan 23, 2023 at 4:36 PM  wrote:

> Dear VPP community,
>
> I'm trying to set up an IS-IS neighborship with a node running VPP 22.10 +
> LCP plugin + FRR as control plane software, with no results.
>
> From what I can see, it looks like VPP does not pass IIH packets between
> the network and the TAP interface, in both directions.
> On the node running VPP, when tcpdumping the host TAP interface I see
> outgoing IS-IS IIHs:
> 15:12:27.195439 3c:ec:ef:5f:77:8f > 09:00:2b:00:00:05, 802.3, length 1500:
> LLC, dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI
> NLPID IS-IS (0x83): p2p IIH, src-id ..0001, length 1497
> They do not appear on the opposite node (it runs frr/isisd without VPP).
> Only outgoing IIH packets are seen there.
> 15:29:13.192912 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500:
> LLC, dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI
> NLPID IS-IS (0x83): p2p IIH, src-id ..0002, length 1497
> 15:29:15.942959 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500:
> LLC, dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI
> NLPID IS-IS (0x83): p2p IIH, src-id ..0002, length 1497
>
> Meanwhile, IP connectivity between the nodes exists. Here is an ICMP
> exchange as seen on the TAP interface of the VPP host:
> 15:24:15.169021 3c:ec:ef:5f:77:8f > 3c:ec:ef:5f:78:7a, ethertype IPv4
> (0x0800), length 98: 10.114.1.1 > 10.114.1.100: ICMP echo request, id
> 144, seq 12, length 64
> 15:24:15.169275 3c:ec:ef:5f:78:7a > 3c:ec:ef:5f:77:8f, ethertype IPv4
> (0x0800), length 98: 10.114.1.100 > 10.114.1.1: ICMP echo reply, id 144,
> seq 12, length 64
> 15:24:15.329025 3c:ec:ef:5f:77:8f > 3c:ec:ef:5f:78:7a, ethertype IPv4
> (0x0800), length 98: 10.114.1.1 > 10.114.1.100: ICMP echo request, id
> 122, seq 61503, length 64
> 15:24:15.329304 3c:ec:ef:5f:78:7a > 3c:ec:ef:5f:77:8f, ethertype IPv4
> (0x0800), length 98: 10.114.1.100 > 10.114.1.1: ICMP echo reply, id 122,
> seq 61503, length 64
>
> OSPF neighborship can also be established, so the problem is IS-IS specific.
> tn3# show ipv6 ospf6 neighbor
> Neighbor ID     Pri    DeadTime    State/IfState    Duration    I/F[State]
> 20.20.20.1        1    00:00:38    Full/DR          00:07:21    Ten0.1914[BDR]
> tn3#
>
> What I found is that show node counters shows the osi-input "unknown osi
> protocol" counter increasing.
>
>    Count            Node                   Reason                              Severity
>       84         lldp-input        lldp packets received on disabled i          error
>     4364         dpdk-input        no error                                     error
>       20          arp-reply        ARP replies sent                             info
>        9          arp-reply        IP4 source address matches local in          error
>       19          arp-reply        ARP request IP4 source address lear          info
>       43       arp-disabled        ARP Disabled                                 error
>     1252          osi-input        unknown osi protocol                         error
>        4          ip6-input        ip6 source lookup miss                       error
>       19 ip6-local-hop-by-hop      Unknown protocol ip6 local h-b-h pa          error
>       10          ip4-local        ip4 source lookup miss                       error
>        4     ip6-icmp-input        neighbor solicitations for unknown           error
>        4     ip6-icmp-input        neighbor advertisements sent                 info
>      106     ip6-icmp-input        neighbor discovery not configured            error
>       42         snap-input        unknown oui/snap protocol                    error
>       49     ethernet-input        unknown ethernet type                        error
>   623375     ethernet-input        unknown vlan                                 error
>        1     ethernet-input        subinterface down                            error
>
> On the other hand, I can see the IS-IS protocol in src/vnet/osi/osi.h:
>
>
> #define foreach_osi_protocol \
>   _ (null, 0x0)              \
>   _ (x_29, 0x01)             \
>   _ (x_633, 0x03)            \
>   _ (q_931, 0x08)            \
>   _ (q_933, 0x08)            \
>   _ (q_2931, 0x09)           \
>   _ (q_2119, 0x0c)           \
>   _ (snap, 0x80)             \
>   _ (clnp, 0x81)             \
>   _ (esis, 0x82)             \
>   _ (isis, 0x83)

[vpp-dev] VPP LCP: IS-IS does not work

2023-01-23 Thread agv100
Dear VPP community,

I'm trying to set up an IS-IS neighborship with a node running VPP 22.10 + LCP 
plugin + FRR as control plane software, with no results.

From what I can see, it looks like VPP does not pass IIH packets between the 
network and the TAP interface, in both directions.
On the node running VPP, when tcpdumping the host TAP interface I see outgoing 
IS-IS IIHs:
15:12:27.195439 3c:ec:ef:5f:77:8f > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0001, length 1497
They do not appear on the opposite node (it runs frr/isisd without VPP).
Only outgoing IIH packets are seen there.
15:29:13.192912 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0002, length 1497
15:29:15.942959 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0002, length 1497

Meanwhile, IP connectivity between the nodes exists. Here is an ICMP exchange 
as seen on the TAP interface of the VPP host:
15:24:15.169021 3c:ec:ef:5f:77:8f > 3c:ec:ef:5f:78:7a, ethertype IPv4 (0x0800), 
length 98: 10.114.1.1 > 10.114.1.100: ICMP echo request, id 144, seq 12, length 
64
15:24:15.169275 3c:ec:ef:5f:78:7a > 3c:ec:ef:5f:77:8f, ethertype IPv4 (0x0800), 
length 98: 10.114.1.100 > 10.114.1.1: ICMP echo reply, id 144, seq 12, length 64
15:24:15.329025 3c:ec:ef:5f:77:8f > 3c:ec:ef:5f:78:7a, ethertype IPv4 (0x0800), 
length 98: 10.114.1.1 > 10.114.1.100: ICMP echo request, id 122, seq 61503, 
length 64
15:24:15.329304 3c:ec:ef:5f:78:7a > 3c:ec:ef:5f:77:8f, ethertype IPv4 (0x0800), 
length 98: 10.114.1.100 > 10.114.1.1: ICMP echo reply, id 122, seq 61503, 
length 64

OSPF neighborship can also be established, so the problem is IS-IS specific.
tn3# show ipv6 ospf6 neighbor
Neighbor ID     Pri    DeadTime    State/IfState    Duration    I/F[State]
20.20.20.1        1    00:00:38    Full/DR          00:07:21    Ten0.1914[BDR]
tn3#

What I found is that show node counters shows the osi-input "unknown osi protocol" counter increasing.

   Count            Node                   Reason                              Severity
      84         lldp-input        lldp packets received on disabled i          error
    4364         dpdk-input        no error                                     error
      20          arp-reply        ARP replies sent                             info
       9          arp-reply        IP4 source address matches local in          error
      19          arp-reply        ARP request IP4 source address lear          info
      43       arp-disabled        ARP Disabled                                 error
    1252          osi-input        unknown osi protocol                         error
       4          ip6-input        ip6 source lookup miss                       error
      19 ip6-local-hop-by-hop      Unknown protocol ip6 local h-b-h pa          error
      10          ip4-local        ip4 source lookup miss                       error
       4     ip6-icmp-input        neighbor solicitations for unknown           error
       4     ip6-icmp-input        neighbor advertisements sent                 info
     106     ip6-icmp-input        neighbor discovery not configured            error
      42         snap-input        unknown oui/snap protocol                    error
      49     ethernet-input        unknown ethernet type                        error
  623375     ethernet-input        unknown vlan                                 error
       1     ethernet-input        subinterface down                            error

On the other hand, I can see the IS-IS protocol in src/vnet/osi/osi.h:

#define foreach_osi_protocol \
  _ (null, 0x0)              \
  _ (x_29, 0x01)             \
  _ (x_633, 0x03)            \
  _ (q_931, 0x08)            \
  _ (q_933, 0x08)            \
  _ (q_2931, 0x09)           \
  _ (q_2119, 0x0c)           \
  _ (snap, 0x80)             \
  _ (clnp, 0x81)             \
  _ (esis, 0x82)             \
  _ (isis, 0x83)             \
  _ (idrp, 0x85)             \
  _ (x25_esis, 0x8a)         \
  _ (iso10030, 0x8c)         \
  _ (iso11577, 0x8d)         \
  _ (ip6, 0x8e)              \
  _ (compressed, 0xb0)       \
  _ (sndcf, 0xc1)            \
  _ (ip4, 0xcc)              \
  _ (ppp, 0xcf)

So the protocol should not be "unknown".

Any ideas where I need to look to fix the IS-IS issue?


Re: [vpp-dev] VPP 22.10 : VCL not accepting UDP connections

2023-01-20 Thread Florin Coras
Hi Chinmaya, 

Given that data is written to the listener's fifo, I'd guess vpp thinks it's 
using non-connected udp sessions. Since you expect accepts to be coming, 
you're probably missing a vppcom_session_attr VPPCOM_ATTR_SET_CONNECTED call 
on the listener. See for instance here [1]. It could also be that the vcl lib 
your app is linked against is out of sync with vpp.

Let me know if that solves the issue.

Regards,
Florin


[1] https://git.fd.io/vpp/tree/src/plugins/hs_apps/vcl/vcl_test_protos.c#n154
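
For illustration, a minimal sketch of that pattern (error handling trimmed;
only the vppcom_* calls and VPPCOM_ATTR_SET_CONNECTED are the real VCL API,
the function and variable names are made up):

```
#include <stdint.h>
#include <vcl/vppcom.h>

/* Hedged sketch: mark a VCL UDP listener as "connected" so that each
 * remote peer is delivered as its own accepted session. */
static int
udp_connected_listen (vppcom_endpt_t *ep)
{
  uint8_t peer_ip[16];
  vppcom_endpt_t peer = { .ip = peer_ip };
  int lfd;

  lfd = vppcom_session_create (VPPCOM_PROTO_UDP, 0 /* blocking */);
  if (lfd < 0)
    return lfd;

  /* Without this attribute, datagrams pile up in the listener's rx fifo
   * and no accept events are ever generated. */
  vppcom_session_attr (lfd, VPPCOM_ATTR_SET_CONNECTED, 0, 0);

  if (vppcom_session_bind (lfd, ep) < 0
      || vppcom_session_listen (lfd, 10 /* queue depth */) < 0)
    return -1;

  /* Blocks until the first peer sends a datagram. */
  return vppcom_session_accept (lfd, &peer, 0 /* flags */);
}
```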

> On Jan 19, 2023, at 1:13 PM, Chinmaya Aggarwal  
> wrote:
> 
> Hi,
>  
> We re-compiled VPP 22.10 by cherry-picking the commits below:
>  
> udp: fix tx handling of non-connected sessions : 
> 15952b261f92959ca14cf6679efc318c12e90de6
> udp: support for disabling tx csum : f8ee39ff715ec713045af69da465ba4da8248212
> udp: explicit udp output node This allows for custom next node selection on 
> output. : 8c1be054b90f113aef3ae27b52d7389271ce91c3
>  
> But we are still facing the same issue that VCL is not able to accept UDP 
> connections and we are seeing rx full in "show session verbose".
>  
> Is there anything else that we might be missing out on or can try?
> 
> Thanks and Regards,
> Chinmaya Agarwal.
> 
> 
> 
> 
> 





Re: [vpp-dev] VPP 22.10 : VCL not accepting UDP connections

2023-01-19 Thread Chinmaya Aggarwal
Hi,

We re-compiled VPP 22.10 by cherry-picking the commits below:

udp: fix tx handling of non-connected sessions : 
15952b261f92959ca14cf6679efc318c12e90de6
udp: support for disabling tx csum : f8ee39ff715ec713045af69da465ba4da8248212
udp: explicit udp output node This allows for custom next node selection on 
output. : 8c1be054b90f113aef3ae27b52d7389271ce91c3

But we are still facing the same issue that VCL is not able to accept UDP 
connections and we are seeing rx full in "show session verbose".

Is there anything else that we might be missing out on or can try?

Thanks and Regards,
Chinmaya Agarwal.




[vpp-dev] VPP: LX2160a failed to start - dpaa2_dev_rx_queue_setup

2023-01-19 Thread agv100
Hello,

I was trying to check "vanilla" VPP, without the integration of patches from 
NXP: it works well on the LX2160A board with PCIe NICs (I'm using IGBs), but 
fails to start with DPAA2 ports enabled (the NXP-patched version starts with 
DPAA2, but is very unstable with both DPAA2 and PCIe ethernets).

"Vanilla" VPP was built natively on the board, following the standard VPP 
build instructions, either directly or in vagrant.
Different versions were checked:
- 21.06
- 22.10
- 23.02 RC as-is, and with a patch to bump DPDK to 22.11
All give the same result: the PCIe-NICs-only configuration works well and is 
stable, but VPP crashes on initialization of the DPAA2 devices.

So, when DPAA2 ports are enabled and DPRC=dprc.X is set to point at the 
container with the interfaces, we see the following on start of standard 
vanilla VPP (the traces are identical for all versions above). Any ideas 
where to look to fix this issue?

(gdb) run -c /etc/vpp/startup.conf
Starting program: 
/home/abramov/vpp-stable/build-root/install-vpp_debug-native/vpp/bin/vpp -c 
/etc/vpp/startup.conf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
[New Thread 0x716e6140 (LWP 1679)]
[New Thread 0x70ee5140 (LWP 1680)]
[New Thread 0x6bfff140 (LWP 1681)]

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
dpaa2_dev_rx_queue_setup (dev=0x765c5c80 , rx_queue_id=0, 
nb_rx_desc=1024, socket_id=0, rx_conf=0x719db3c0, mb_pool=0x170cac180)
at ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c:675
675 ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c: No such file or directory.
(gdb) bt
#0  dpaa2_dev_rx_queue_setup (dev=0x765c5c80 , 
rx_queue_id=0, nb_rx_desc=1024, socket_id=0, rx_conf=0x719db3c0, 
mb_pool=0x170cac180)
at ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c:675
#1  0x74eaf518 in rte_eth_rx_queue_setup (port_id=0, rx_queue_id=0, 
nb_rx_desc=1024, socket_id=0, rx_conf=0x719db478, mp=0x170cac180)
at ../src-dpdk/lib/librte_ethdev/rte_ethdev.c:2115
#2  0x7596b020 in dpdk_device_setup (xd=0x7c171500) at 
/root/vpp-stable2/src/plugins/dpdk/device/common.c:133
#3  0x7598b824 in dpdk_lib_init (dm=0x765b1220 ) at 
/root/vpp-stable2/src/plugins/dpdk/device/init.c:805
#4  0x75989874 in dpdk_process (vm=0x76c00680, rt=0x7a9da280, 
f=0x0) at /root/vpp-stable2/src/plugins/dpdk/device/init.c:1840
#5  0xf727aaf4 in vlib_process_bootstrap (_a=281472624744504) at 
/root/vpp-stable2/src/vlib/main.c:1284
#6  0xf7121348 in clib_calljmp () at 
/root/vpp-stable2/src/vppinfra/longjmp.S:809
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb)

(gdb) run -c /etc/vpp/startup.conf
Starting program: 
/home/abramov/vpp-stable/build-root/install-vpp_debug-native/vpp/bin/vpp -c 
/etc/vpp/startup.conf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
[New Thread 0x716e6140 (LWP 15156)]
[New Thread 0x70ee5140 (LWP 15157)]
[New Thread 0x6bfff140 (LWP 15158)]

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
dpaa2_dev_rx_queue_setup (dev=0x765c5c80 , rx_queue_id=0, 
nb_rx_desc=1024, socket_id=0, rx_conf=0x719db3c0, mb_pool=0x170cac180)
at ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c:675
675 ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c: No such file or directory.
(gdb) bt full
#0  dpaa2_dev_rx_queue_setup (dev=0x765c5c80 , 
rx_queue_id=0, nb_rx_desc=1024, socket_id=0, rx_conf=0x719db3c0, 
mb_pool=0x170cac180)
at ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c:675
priv = 0x2184295180
dpni = 0x2184295080
dpaa2_q = 0x719db3f8
cfg = {destination = {id = 49536, type = DPNI_DEST_DPIO, hold_active = 0 
'\000', priority = 179 '\263'}, user_context = 281472643297432, flc = {value = 
5035480,
stash_control = 128 '\200'}, cgid = 1024}
options = 0 '\000'
flow_id = 113 'q'
bpid = 65535
i = 65535
ret = 1961568588
__func__ = "dpaa2_dev_rx_queue_setup"
#1  0x74eaf518 in rte_eth_rx_queue_setup (port_id=0, rx_queue_id=0, 
nb_rx_desc=1024, socket_id=0, rx_conf=0x719db478, mp=0x170cac180)
at ../src-dpdk/lib/librte_ethdev/rte_ethdev.c:2115
ret = 0
mbp_buf_size = 2176
dev = 0x765c5c80 
dev_info = {device = 0x7edc90, driver_name = 0x75a2c2a8 "net_dpaa2", 
if_index = 0, min_mtu = 68, max_mtu = 65535, dev_flags = 0x2184297e9c,
min_rx_bufsize = 512, max_rx_pktlen = 10240, max_lro_pkt_size = 0, 
max_rx_queues = 128, max_tx_queues = 16, max_mac_addrs = 16, max_hash_mac_addrs 
= 0,
max_vfs = 0, max_vmdq_pools = 16, rx_seg_capa = {multi_pools = 0, 
offset_allowed = 0, offset_align_log2 = 0, max_nseg = 0, reserved = 0},
rx_offload_capa = 944719, tx_offload_capa = 114847, rx_queue_offload_capa = 0, 
tx_queue_offload_capa = 0, reta_size = 0, hash_key_size = 0 '\000',
flow_type_rss_offloads = 8590196732, default_rxconf = {rx_thresh = {pthresh = 0 
'\000', hthresh = 0 '\000', wthresh = 0 '\000'}, rx_free_thresh = 0,
rx_drop_en = 0 '\000', 

Re: [vpp-dev] VPP 22.10 : VCL not accepting UDP connections

2023-01-18 Thread Florin Coras
Hi Chinmaya, 

Are you by chance using 23.02rc0, as opposed to 22.10, in combination with 
non-connected udp listeners? If yes, could you try this fix [1] or vpp latest 
to check if the issue still persists? 

Regards,
Florin

[1] https://gerrit.fd.io/r/c/vpp/+/37842
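
As a hypothetical way to pull the fix [1] into a local tree (the change number
is taken from the link; the trailing patchset number may differ):

```
$ git fetch https://gerrit.fd.io/r/vpp refs/changes/42/37842/1
$ git cherry-pick FETCH_HEAD
```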

> On Jan 18, 2023, at 12:59 PM, Chinmaya Aggarwal  
> wrote:
> 
> Hi,
>  
> We are testing VCL in VPP 22.10 and facing the issue that VCL is not able to 
> accept UDP connections and we are seeing rx full in "show session verbose" 
> command:--
>  
> vpp# show session verbose
> Connection                                              State      Rx-f      Tx-f
> [0:0][U] 2001:5b0::501:b883:31f:29e:9881:9915->:::0     LISTEN     3994945   0
> [0:1][U] 2001:5b0::501:b883:31f:19e:9881:9915->:::0     LISTEN     0         0
> Thread 0: active sessions 2
> Thread 1: no sessions
> Thread 2: no sessions
> Thread 3: no sessions
> Thread 4: no sessions
> vpp#
>  
>  
> Below is the relevant configuration in:- 
> startup.conf
> session {
> enable
> evt_qs_memfd_seg
> use-app-socket-api
> segment-baseva 0x20
> }
> #socksvr { socket-name /run/vpp/vcl.sock}
> socksvr { default }
>  
>  
> vcl.conf
> vcl {
>   rx-fifo-size 400
>   tx-fifo-size 400
>   app-scope-local
>   app-scope-global
>   app-socket-api  /var/run/vpp/app_ns_sockets/default
> }
>  
> The same configuration is working fine in VPP v21.06. Has anything changed in 
> 22.10 or are we missing something here?
> 
> Thanks and Regards,
> Chinmaya Agarwal.
> 
> 
> 





[vpp-dev] VPP 22.10 : VCL not accepting UDP connections

2023-01-18 Thread Chinmaya Aggarwal
Hi,

We are testing VCL in VPP 22.10 and facing the issue that VCL is not able to 
accept UDP connections and we are seeing rx full in "show session verbose" 
command:--

vpp# show session verbose
Connection                                              State      Rx-f      Tx-f
[0:0][U] 2001:5b0::501:b883:31f:29e:9881:9915->:::0     LISTEN     3994945   0
[0:1][U] 2001:5b0::501:b883:31f:19e:9881:9915->:::0     LISTEN     0         0
Thread 0: active sessions 2
Thread 1: no sessions
Thread 2: no sessions
Thread 3: no sessions
Thread 4: no sessions
vpp#

Below is the relevant configuration in:-
startup.conf
session {
enable
evt_qs_memfd_seg
use-app-socket-api
segment-baseva 0x20
}
#socksvr { socket-name /run/vpp/vcl.sock}
socksvr { default }

vcl.conf
vcl {
rx-fifo-size 400
tx-fifo-size 400
app-scope-local
app-scope-global
app-socket-api  /var/run/vpp/app_ns_sockets/default
}

The same configuration is working fine in VPP v21.06. Has anything changed in 
22.10 or are we missing something here?

Thanks and Regards,
Chinmaya Agarwal.




[vpp-dev] VPP 23.02 RC1 milestone is complete!

2023-01-18 Thread Andrew Yourtchenko
Hi all,

23.02 RC1 is done, the master branch is open for all commits, the
stable/2302 branch is created and is open for the cherry-picks of the bug
fixes - which would need to first be merged into master branch.

The deadline for the fixes is RC2 milestone, which is 3 weeks from now, as
per the release plan ([0]). After RC2 in this release cycle I will be a bit
more strict (since we had some hiccups in the previous releases coinciding
with the late fixes) and will limit the allowed fixes to the ones required
by CSIT tests. So - please manage the time, and include me in the review
communication liberally if you feel it can be useful, and as early as
possible.

In the meantime the RC1 artefacts are available at the usual place for your
testing - https://packagecloud.io/fdio/2302

Thanks a lot and onwards to RC2!

--a /* your friendly 23.02 release manager */

[0] https://wiki.fd.io/view/Projects/vpp/Release_Plans/Release_Plan_23.02




[vpp-dev] VPP 23.02 RC1 milestone tomorrow 18 Jan 2023 12:00 UTC

2023-01-17 Thread Andrew Yourtchenko
Hi all,

Just a kind reminder that tomorrow at 12:00 UTC I will create the branch 
stable/2302, in preparation for the upcoming 23.02 release.

--a /* your friendly 23.02 release manager */



Re: [vpp-dev] VPP crashes with vlib_worker_thread_barrier_syn_int:thread deadlock

2023-01-03 Thread Benoit Ganne (bganne) via lists.fd.io
The issue seems to be that the main thread wants to reply to an ARP, but it 
times out on the worker barrier:
 1) why does the worker not yield to the main thread in a timely manner? 
Workers should always complete processing in less than 1s. You can try to use 
elog to identify which nodes take too long (see the sample session below): 
https://s3-docs.fd.io/vpp/23.02/developer/corefeatures/eventviewer.html
 2) why is the main thread receiving ARP to begin with? Can you share the 
output of 'show int rx'?
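
For what it's worth, a hypothetical elog session for point 1 (the exact trace
flags vary slightly between versions; see the event viewer page above):

```
vpp# event-logger trace dispatch barrier
... reproduce the stall ...
vpp# show event-logger 100
```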

Best
ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Mechthild
> Buescher via lists.fd.io
> Sent: Thursday, December 22, 2022 12:25
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP crashes with
> vlib_worker_thread_barrier_syn_int:thread deadlock
> 
> Hi Piotr,
> 
> 
> 
> Thanks for your hint. We could nail down the problem a bit and would like
> to ask for your suggestion on how to solve it.
> 
> 
> 
> The problem occurs when traffic received on a DPDK interface must be
> forwarded to a host interface. Here are the snippets of the
> configuration which we think are relevant (note, it's a different setup
> compared to the previous email; here we have one socket and fewer cpus):
> 
> 
> 
> We configure CPU’s 2,17,18 to be isolated:
> 
> # cat /proc/cmdline
> 
> BOOT_IMAGE=/vmlinuz-5.3.18-150300.59.76-default root=UUID=439b3b24-9c1d-
> 4b6f-b024-539b50cb7480 rootflags=subvol=@ intel_iommu=on iommu=pt
> intel_idle.max_cstate=0 processor.max_cstate=0 idle=poll
> intel_pstate=disable isolcpus=2,3,4,5,6,17,18,19,20,21,22 nohz=on
> nohz_full=2,3,4,5,6,17,18,19,20,21,22
> rcu_nocbs=2,3,4,5,6,17,18,19,20,21,22 rcu_nocb_poll
> irqaffinity=0,1,7,8,9,10,11,12,13,14,15,16,23,24,25,26,27,28,29,30,31
> hugepagesz=2M hugepages=2048 hugepagesz=1G hugepages=4
> default_hugepagesz=2M transparent_hugepage=never nosoftlookup
> nmi_watchdog=0 tsc=reliable hpet=disable clocksource=tsc skew_tick=1
> mce=ignore_ce splash console=ttyS0,115200 psi=1 audit=1 apparmor=1
> security=apparmor
> 
> 
> 
> And we use those isolated CPU’s for the workers and one non-isolated CPU
> for the main-thread:
> 
> cpu {
> 
> main-core 1
> 
> corelist-workers 2,17,18
> 
> }
> 
> 
> 
> The relevant DPDK-interface is Radio-0:
> 
> dpdk {
> 
> dev default {
> 
> num-rx-queues 3
> 
> }
> 
> 
> 
> uio-driver vfio-pci
> 
> 
> 
> dev :17:00.1 {
> 
> name Radio-0
> 
> }
> 
> :
> 
> }
> 
> 
> 
> And then we have the following configuration:
> 
> set interface state Radio-0 up
> 
> create host-interface name Vpp2Host
> 
> set interface state host-Vpp2Host up
> 
> set interface rx-placement host-Vpp2Host main
> 
> create sub-interfaces Radio-0 3092
> 
> set interface state Radio-0.3092 up
> 
> create sub-interfaces host-Vpp2Host 3092
> 
> set interface state host-Vpp2Host.3092 up
> 
> set interface l2 bridge Radio-0.3092 3092
> 
> set interface l2 bridge host-Vpp2Host.3092 3092
> 
> 
> 
> This means we receive traffic on a DPDK interface and try to forward it
> via an L2 bridge to the host. The DPDK interface is on an isolated CPU
> while the host interface is on a non-isolated CPU. My suspicion is that
> this is the problem - do you agree? Do you have any idea how we can solve
> this? The Radio-0 interface is used for OAM via vlan 3092 (this is what
> you see in the above configuration) as well as for traffic (untagged),
> which is why we want to have it on an isolated CPU.
> 
> 
> 
> Thank you for your support,
> 
> 
> 
> BR/Mechthild
> 
> 
> 
> From: vpp-dev@lists.fd.io  on behalf of Bronowski,
> PiotrX via lists.fd.io 
> Date: Wednesday, 21. December 2022 at 17:24
> To: vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] VPP crashes with
> vlib_worker_thread_barrier_syn_int:thread deadlock
> 
> Hi Mechthild,
> 
> 
> 
> Your issue is caused by the main thread waiting too long for a worker to
> finish. You may examine these lines in src/vlib/threads.h:
> 
> 
> 
> 171 /* Check for a barrier sync request every 30ms */
> 
> 172 #define BARRIER_SYNC_DELAY (0.03)
> 
> 173
> 
> 174 #if CLIB_DEBUG > 0
> 
> 175 /* long barrier timeout, for gdb... */
> 
> 176 #define BARRIER_SYNC_TIMEOUT (600.1)
> 
> 177 #else
> 
> 178 #define BARRIER_SYNC_TIMEOUT (1.0)
> 
> 179 #endif
> 
> 
> 
> Your restart is caused by the timeout defined in these lines. You may
> increase it to investigate your issue (of course it is not a fix). After
> increasing timeout and being in interact

Re: [vpp-dev] VPP crashes with vlib_worker_thread_barrier_syn_int:thread deadlock

2022-12-28 Thread Bronowski, PiotrX
Hi Mechthild,
Unfortunately, my knowledge is not sufficient regarding identification of the 
bottleneck in the details of your setup.
Wish you best luck,
Piotr

From: vpp-dev@lists.fd.io  On Behalf Of Mechthild Buescher 
via lists.fd.io
Sent: Thursday, December 22, 2022 12:25 PM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP crashes with 
vlib_worker_thread_barrier_syn_int:thread deadlock

Hi Piotr,

Thanks for your hint. We could nail down the problem a bit and would like to 
ask for your suggestion on how to solve it.
The problem occurs when traffic received on a DPDK interface must be forwarded 
to a host interface. Here are the snippets of the configuration which we think 
are relevant (note, it's a different setup compared to the previous email; 
here we have one socket and fewer cpus):
We configure CPU's 2,17,18 to be isolated:
# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-5.3.18-150300.59.76-default 
root=UUID=439b3b24-9c1d-4b6f-b024-539b50cb7480 rootflags=subvol=@ 
intel_iommu=on iommu=pt intel_idle.max_cstate=0 processor.max_cstate=0 
idle=poll intel_pstate=disable isolcpus=2,3,4,5,6,17,18,19,20,21,22 nohz=on 
nohz_full=2,3,4,5,6,17,18,19,20,21,22 rcu_nocbs=2,3,4,5,6,17,18,19,20,21,22 
rcu_nocb_poll 
irqaffinity=0,1,7,8,9,10,11,12,13,14,15,16,23,24,25,26,27,28,29,30,31 
hugepagesz=2M hugepages=2048 hugepagesz=1G hugepages=4 default_hugepagesz=2M 
transparent_hugepage=never nosoftlookup nmi_watchdog=0 tsc=reliable 
hpet=disable clocksource=tsc skew_tick=1 mce=ignore_ce splash 
console=ttyS0,115200 psi=1 audit=1 apparmor=1 security=apparmor
And we use those isolated CPU's for the workers and one non-isolated CPU for 
the main-thread:
cpu {
main-core 1
corelist-workers 2,17,18
}

The relevant DPDK-interface is Radio-0:
dpdk {
dev default {
num-rx-queues 3
}

uio-driver vfio-pci

dev :17:00.1 {
name Radio-0
}
:
}

And then we have the following configuration:
set interface state Radio-0 up
create host-interface name Vpp2Host
set interface state host-Vpp2Host up
set interface rx-placement host-Vpp2Host main
create sub-interfaces Radio-0 3092
set interface state Radio-0.3092 up
create sub-interfaces host-Vpp2Host 3092
set interface state host-Vpp2Host.3092 up
set interface l2 bridge Radio-0.3092 3092
set interface l2 bridge host-Vpp2Host.3092 3092
This means we receive traffic on a DPDK interface and try to forward it via an 
L2 bridge to the host. The DPDK interface is on an isolated CPU while the host 
interface is on a non-isolated CPU. My suspicion is that this is the problem - 
do you agree? Do you have any idea how we can solve this? The Radio-0 interface 
is used for OAM via vlan 3092 (this is what you see in the above configuration) 
as well as for traffic (untagged), which is why we want to have it on an 
isolated CPU.
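
One knob that may be relevant here (a hedged suggestion, not something from
this thread): rx-queue-to-thread placement can be inspected and changed per
queue, for example:

```
vpp# show interface rx-placement
vpp# set interface rx-placement Radio-0 queue 0 worker 0
```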

Thank you for your support,

BR/Mechthild

From: vpp-dev@lists.fd.io on behalf of Bronowski, PiotrX via lists.fd.io
Date: Wednesday, 21. December 2022 at 17:24
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP crashes with 
vlib_worker_thread_barrier_syn_int:thread deadlock
Hi Mechthild,

Your issue is caused by the main thread waiting too long for a worker to 
finish. You may examine these lines in src/vlib/threads.h:

171 /* Check for a barrier sync request every 30ms */
172 #define BARRIER_SYNC_DELAY (0.03)
173
174 #if CLIB_DEBUG > 0
175 /* long barrier timeout, for gdb... */
176 #define BARRIER_SYNC_TIMEOUT (600.1)
177 #else
178 #define BARRIER_SYNC_TIMEOUT (1.0)
179 #endif

Your restart is caused by the timeout defined in these lines. You may increase 
it to investigate your issue (of course that is not a fix). After increasing 
the timeout, and while in interactive mode, you can issue the command "show 
run"; it will tell you in which node you are spending most of your time and 
potentially identify the source of your problem. Alternatively, you may use 
the perf tool for that task.
BR,
Piotr

From: vpp-dev@lists.fd.io On Behalf Of Mechthild Buescher via lists.fd.io
Sent: Wednesday, December 21, 2022 3:24 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP crashes with vlib_worker_thread_barrier_syn_int:thread 
deadlock

Hi,

We have a severe problem with VPP - it's cyclic restarting due to the following 
error:

Dec 21 09:46:32 843V0N3 vpp[3846]: vlib_worker_thread_barrier_sync_int: worker 
thread deadlock

This happens on both servers of the setup and it cannot recover. Can you please 
help us to debug this issue?


VPP version:
# vppctl show version
vpp v22.02.0-1~g0d1b46707-dirty built by suse on SUSE at 2022-05-02T09:46:05

which is a build of version 22.02-1 on SLES 15 S

Re: [SUSPECTED SPAM] [vpp-dev] VPP crashes on LX2160A platform

2022-12-22 Thread agv100
Hello,

The current build (22.10, cross-compiled via the SolidRun toolchain) crashes 
regardless of the optimization level and, with debug enabled, shows the 
following:

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0xf6d7caac in __GI_abort () at abort.c:79
#2  0x00406fe4 in os_panic () at /work/build/vpp/src/vpp/vnet/main.c:416
#3  0xf6fa6514 in debugger () at /work/build/vpp/src/vppinfra/error.c:84
#4  0xf6fa6874 in _clib_error (how_to_die=2, 
function_name=0xf7173978 <__FUNCTION__.32141> 
"vlib_buffer_validate_alloc_free", line_number=333,
fmt=0xf7173438 "%s %U buffer 0x%x") at 
/work/build/vpp/src/vppinfra/error.c:143
#5  0xf70c1218 in vlib_buffer_validate_alloc_free (vm=0xb6d5c740, 
buffers=0xb4bac810, n_buffers=1, expected_state=VLIB_BUFFER_KNOWN_ALLOCATED)
at /work/build/vpp/src/vlib/buffer.c:332
#6  0xf716afc4 in vlib_buffer_pool_put (vm=0xb6d5c740, 
buffer_pool_index=0 '\000', buffers=0xb4bac810, n_buffers=1)
at /work/build/vpp/src/vlib/buffer_funcs.h:731
#7  0xf716b75c in vlib_buffer_free_inline (vm=0xb6d5c740, 
buffers=0xb88bd1d4, n_buffers=0, maybe_next=1) at 
/work/build/vpp/src/vlib/buffer_funcs.h:917
#8  0xf716b7c8 in vlib_buffer_free (vm=0xb6d5c740, 
buffers=0xb88bd1d0, n_buffers=1) at 
/work/build/vpp/src/vlib/buffer_funcs.h:936
#9  0xf716c424 in process_drop_punt (vm=0xb6d5c740, 
node=0xb7844300, frame=0xb88bd1c0, disposition=ERROR_DISPOSITION_DROP)
at /work/build/vpp/src/vlib/drop.c:235
#10 0xf716c4fc in error_drop_node_fn_cortexa72 (vm=0xb6d5c740, 
node=0xb7844300, frame=0xb88bd1c0) at 
/work/build/vpp/src/vlib/drop.c:251
#11 0xf70f512c in dispatch_node (vm=0xb6d5c740, 
node=0xb7844300, type=VLIB_NODE_TYPE_INTERNAL, 
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0xb88bd1c0,
last_time_stamp=233164692224) at /work/build/vpp/src/vlib/main.c:960
#12 0xf70f585c in dispatch_pending_node (vm=0xb6d5c740, 
pending_frame_index=4, last_time_stamp=233164692224) at 
/work/build/vpp/src/vlib/main.c:1119
#13 0xf70f6be8 in vlib_main_or_worker_loop (vm=0xb6d5c740, 
is_main=1) at /work/build/vpp/src/vlib/main.c:1588
#14 0xf70f71ec in vlib_main_loop (vm=0xb6d5c740) at 
/work/build/vpp/src/vlib/main.c:1716
#15 0xf70f7d1c in vlib_main (vm=0xb6d5c740, input=0xb4badfc8) 
at /work/build/vpp/src/vlib/main.c:2010
#16 0xf7145044 in thread0 (arg=281473749206848) at 
/work/build/vpp/src/vlib/unix/main.c:667
#17 0xf6fb84c0 in clib_calljmp () at 
/work/build/vpp/src/vppinfra/longjmp.S:809
Backtrace stopped: previous frame identical to this frame (corrupt stack?)




[vpp-dev] VPP AF_PACKET (aka host-interface) driver is now a plugin

2022-12-21 Thread Dave Wallace

Folks,

As part of the ongoing effort to move features from vpp/src/vnet to 
plugins (tracked in Jira as VPP-2040 [0]), the gerrit change [1] moving the 
VPP AF_PACKET (aka host-interface) driver to a plugin has just been 
merged into master.


If you use the VPP AF_PACKET driver and have the stanza 'plugin default 
{ disable }' in your startup.conf (if not, you should probably consider 
it), then you'll have to add 'plugin af_packet_plugin.so { enable }' to the 
plugins stanza in your startup.conf for the AF_PACKET plugin to be 
loaded from now on (see the example stanza below).
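
For example, a minimal plugins stanza following that advice could look like
this (assuming you also need the DPDK plugin; enable whatever else your
deployment uses):

```
plugins {
    plugin default { disable }
    plugin dpdk_plugin.so { enable }
    plugin af_packet_plugin.so { enable }
}
```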


Thanks,
-daw-

[0] https://jira.fd.io/browse/VPP-2040
[1] https://gerrit.fd.io/r/c/vpp/+/37830




Re: [vpp-dev] VPP crashes with vlib_worker_thread_barrier_syn_int:thread deadlock

2022-12-21 Thread Bronowski, PiotrX
Hi Mechthild,

Your issue is caused by the main thread waiting too long for a worker to 
finish. You may examine these lines in src/vlib/threads.h:

171 /* Check for a barrier sync request every 30ms */
172 #define BARRIER_SYNC_DELAY (0.03)
173
174 #if CLIB_DEBUG > 0
175 /* long barrier timeout, for gdb... */
176 #define BARRIER_SYNC_TIMEOUT (600.1)
177 #else
178 #define BARRIER_SYNC_TIMEOUT (1.0)
179 #endif

Your restart is caused by the timeout defined in these lines. You may increase 
it to investigate your issue (of course that is not a fix). After increasing 
the timeout, and while in interactive mode, you can issue the command "show 
run"; it will tell you in which node you are spending most of your time and 
potentially identify the source of your problem. Alternatively, you may use 
the perf tool for that task.
BR,
Piotr

From: vpp-dev@lists.fd.io  On Behalf Of Mechthild Buescher 
via lists.fd.io
Sent: Wednesday, December 21, 2022 3:24 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP crashes with vlib_worker_thread_barrier_syn_int:thread 
deadlock

Hi,

We have a severe problem with VPP: it is cyclically restarting due to the 
following error:

Dec 21 09:46:32 843V0N3 vpp[3846]: vlib_worker_thread_barrier_sync_int: worker 
thread deadlock

This happens on both servers of the setup and it cannot recover. Can you please 
help us to debug this issue?


VPP version:
# vppctl show version
vpp v22.02.0-1~g0d1b46707-dirty built by suse on SUSE at 2022-05-02T09:46:05

which is a build of version 22.02-1 on SLES 15 SP3, including the dpdk patch 
0001-add-patch-to-disable-source-pruning-in-i40e-driver.patch

The startup.conf:
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  coredump-size unlimited
  cli-listen /run/vpp/cli.sock
  gid vpp
  startup-config /etc/vpp/vpp-static-config.txt
}

api-trace {
  on
}

api-segment {
  gid vpp
}

socksvr {
  socket-name /var/run/vpp/ic-api.sock
}

memory {
main-heap-page-size 1G
}

cpu {
main-core 2
corelist-workers 4,6,42,44,46
}

buffers {
buffers-per-numa 128000
}

dpdk {
dev default {
num-rx-queues 5
}

uio-driver vfio-pci

dev :3b:00.0 {
name Radio-0
}
dev :3b:00.1 {
name Ext-0
}
dev :5e:02.1 {
name NCIC-1-v1
}
}

plugins {
plugin default  { disable }
plugin dpdk_plugin.so   { enable }
plugin ioam_plugin.so   { enable }
plugin perfmon_plugin.so{ enable }
plugin tracedump_plugin.so  { enable }
plugin l3xc_plugin.so   { enable }
plugin ping_plugin.so   { enable }
plugin avf_plugin.so{ enable }
plugin acl_plugin.so{ enable }
plugin svs_plugin.so{ enable }
plugin vrrp_plugin.so   { enable }
plugin dhcp_plugin.so   { enable }
plugin nat_plugin.so{ enable }
plugin abf_plugin.so{ enable }
plugin lacp_plugin.so   { enable }
plugin flowprobe_plugin.so  { enable }
}

The log gives:
Dec 21 10:33:27 hostname systemd[1]: Starting Vector Packet Processing 
Process...
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + 
dpdk_devbind=/usr/local/bin/dpdk-devbind.py
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + 
VPP_CONF=/etc/vpp/ic-startup.conf
Dec 21 10:33:27 hostname ic-vpp-service.sh[64760]: ++ grep -v '#' 
/etc/vpp/ic-startup.conf
Dec 21 10:33:27 hostname ic-vpp-service.sh[64761]: ++ grep dev
Dec 21 10:33:27 hostname ic-vpp-service.sh[64762]: ++ grep -v default
Dec 21 10:33:27 hostname ic-vpp-service.sh[64763]: ++ sed 's/.*dev //'
Dec 21 10:33:27 hostname ic-vpp-service.sh[64764]: ++ cut '-d ' -f1
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + DEVICES=':3b:00.0
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: :3b:00.1
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: :5e:02.1'
Dec 21 10:33:27 hostname ic-vpp-service.sh[64766]: ++ grep uio-driver 
/etc/vpp/ic-startup.conf
Dec 21 10:33:27 hostname ic-vpp-service.sh[64767]: ++ sed 's/.*uio-driver //'
Dec 21 10:33:27 hostname ic-vpp-service.sh[64768]: ++ cut '-d ' -f1
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + DPDK_DRV=vfio-pci
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + '[' --start == --stop ']'
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + for dev in $DEVICES
Dec 21 10:33:27 hostname ic-vpp-service.sh[64770]: ++ 
/usr/local/bin/dpdk-devbind.py -s
Dec 21 10:33:27 hostname ic-vpp-service.sh[64771]: ++ grep :3b:00.0
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + cdev=':3b:00.0 
'\''Ethernet Controller XXV710 for 25GbE SFP28 158b'\'' if=Radio-0 drv=i40e 
unused=vfio-pci '
Dec 21 10:33:27 hostname ic-vpp-service.sh[64952]: ++ get_drv ':3b:00.0 
'\''Ethernet Controller XXV710 for 25GbE SFP28 158b'\'' if=Radio-0 drv=i40e 
unused=vfio-pci '
Dec 21 10:33:27 hostname ic-vpp-service.sh[64952]: ++ [[ -z :3b:00.0 
'Ethernet Cont

Re: [vpp-dev] VPP crashes with vlib_worker_thread_barrier_syn_int:thread deadlock

2022-12-21 Thread Chinmaya Aggarwal
Hi,

You can try adding the line "path /usr/lib/vpp_plugins/" at the top of the 
plugins{} section in the startup.conf file and do a VPP restart. That should 
fix your problem.

Thanks,
Chinmaya Agarwal




[vpp-dev] VPP crashes with vlib_worker_thread_barrier_syn_int:thread deadlock

2022-12-21 Thread Mechthild Buescher via lists.fd.io
Hi,

We have a severe problem with VPP: it is cyclically restarting due to the 
following error:

Dec 21 09:46:32 843V0N3 vpp[3846]: vlib_worker_thread_barrier_sync_int: worker 
thread deadlock

This happens on both servers of the setup and it cannot recover. Can you please 
help us to debug this issue?


VPP version:
# vppctl show version
vpp v22.02.0-1~g0d1b46707-dirty built by suse on SUSE at 2022-05-02T09:46:05
which is a build of version 22.02-1 on SLES 15 SP3, including the dpdk patch 
0001-add-patch-to-disable-source-pruning-in-i40e-driver.patch

The startup.conf:
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  coredump-size unlimited
  cli-listen /run/vpp/cli.sock
  gid vpp
  startup-config /etc/vpp/vpp-static-config.txt
}

api-trace {
  on
}

api-segment {
  gid vpp
}

socksvr {
  socket-name /var/run/vpp/ic-api.sock
}

memory {
main-heap-page-size 1G
}

cpu {
main-core 2
corelist-workers 4,6,42,44,46
}

buffers {
buffers-per-numa 128000
}

dpdk {
dev default {
num-rx-queues 5
}

uio-driver vfio-pci

dev :3b:00.0 {
name Radio-0
}
dev :3b:00.1 {
name Ext-0
}
dev :5e:02.1 {
name NCIC-1-v1
}
}

plugins {
plugin default  { disable }
plugin dpdk_plugin.so   { enable }
plugin ioam_plugin.so   { enable }
plugin perfmon_plugin.so{ enable }
plugin tracedump_plugin.so  { enable }
plugin l3xc_plugin.so   { enable }
plugin ping_plugin.so   { enable }
plugin avf_plugin.so{ enable }
plugin acl_plugin.so{ enable }
plugin svs_plugin.so{ enable }
plugin vrrp_plugin.so   { enable }
plugin dhcp_plugin.so   { enable }
plugin nat_plugin.so{ enable }
plugin abf_plugin.so{ enable }
plugin lacp_plugin.so   { enable }
plugin flowprobe_plugin.so  { enable }
}

The log gives:
Dec 21 10:33:27 hostname systemd[1]: Starting Vector Packet Processing 
Process...
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + 
dpdk_devbind=/usr/local/bin/dpdk-devbind.py
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + 
VPP_CONF=/etc/vpp/ic-startup.conf
Dec 21 10:33:27 hostname ic-vpp-service.sh[64760]: ++ grep -v '#' 
/etc/vpp/ic-startup.conf
Dec 21 10:33:27 hostname ic-vpp-service.sh[64761]: ++ grep dev
Dec 21 10:33:27 hostname ic-vpp-service.sh[64762]: ++ grep -v default
Dec 21 10:33:27 hostname ic-vpp-service.sh[64763]: ++ sed 's/.*dev //'
Dec 21 10:33:27 hostname ic-vpp-service.sh[64764]: ++ cut '-d ' -f1
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + DEVICES=':3b:00.0
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: :3b:00.1
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: :5e:02.1'
Dec 21 10:33:27 hostname ic-vpp-service.sh[64766]: ++ grep uio-driver 
/etc/vpp/ic-startup.conf
Dec 21 10:33:27 hostname ic-vpp-service.sh[64767]: ++ sed 's/.*uio-driver //'
Dec 21 10:33:27 hostname ic-vpp-service.sh[64768]: ++ cut '-d ' -f1
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + DPDK_DRV=vfio-pci
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + '[' --start == --stop ']'
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + for dev in $DEVICES
Dec 21 10:33:27 hostname ic-vpp-service.sh[64770]: ++ 
/usr/local/bin/dpdk-devbind.py -s
Dec 21 10:33:27 hostname ic-vpp-service.sh[64771]: ++ grep :3b:00.0
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + cdev=':3b:00.0 
'\''Ethernet Controller XXV710 for 25GbE SFP28 158b'\'' if=Radio-0 drv=i40e 
unused=vfio-pci '
Dec 21 10:33:27 hostname ic-vpp-service.sh[64952]: ++ get_drv ':3b:00.0 
'\''Ethernet Controller XXV710 for 25GbE SFP28 158b'\'' if=Radio-0 drv=i40e 
unused=vfio-pci '
Dec 21 10:33:27 hostname ic-vpp-service.sh[64952]: ++ [[ -z :3b:00.0 
'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=Radio-0 drv=i40e 
unused=vfio-pci  ]]
Dec 21 10:33:27 hostname ic-vpp-service.sh[64952]: ++ echo i40e
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + drv=i40e
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + check_intf --start 
:3b:00.0 ''\''Ethernet' Controller XXV710 for 25GbE SFP28 '158b'\''' 
if=Radio-0 drv=i40e unused=vfio-pci
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + action=--start
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + shift
Dec 21 10:33:27 hostname ic-vpp-service.sh[64954]: ++ echo :3b:00.0 
''\''Ethernet' Controller XXV710 for 25GbE SFP28 '158b'\''' if=Radio-0 drv=i40e 
unused=vfio-pci
Dec 21 10:33:27 hostname ic-vpp-service.sh[64955]: ++ sed 's/.* if=//'
Dec 21 10:33:27 hostname ic-vpp-service.sh[64956]: ++ cut '-d ' -f1
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + name=Radio-0
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + [[ Radio-0 == *\-\v* ]]
Dec 21 10:33:27 hostname ic-vpp-service.sh[64758]: + '[' --start == --stop ']'
Dec 21 10:33:27 hostname 
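
From the '+'/'++' trace lines above, the device-extraction logic of
ic-vpp-service.sh can be reconstructed. A hedged shell sketch (the script
itself is not posted in the thread, so structure and variable names are
inferred from the trace):

VPP_CONF=/etc/vpp/ic-startup.conf
# every "dev ..." stanza except "dev default", first token only
DEVICES=$(grep -v '#' "$VPP_CONF" | grep dev | grep -v default \
          | sed 's/.*dev //' | cut -d' ' -f1)
# the configured uio driver (vfio-pci here)
DPDK_DRV=$(grep uio-driver "$VPP_CONF" | sed 's/.*uio-driver //' | cut -d' ' -f1)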

Re: [vpp-dev] vpp+dpdk #dpdk

2022-12-19 Thread zheng jie
I have never seen two net devices, even SR-IOV devices, share the same PCI
address. Will you dump your devices via lspci or /sys/… ? PCI bus addresses
are always unique.

Personally, I think the PCI addresses in your screenshot are inaccurate.
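
For example (generic commands, not taken from the thread), the full
domain:bus:device.function addresses and the netdev-to-PCI mapping can be
listed with:

lspci -D | grep -i ethernet      # -D prints the full PCI domain as well
ls -l /sys/class/net/*/device    # each symlink resolves to a PCI address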


From:  on behalf of "first_se...@163.com" 

Reply-To: "vpp-dev@lists.fd.io" 
Date: Monday, December 12, 2022 at 11:16 PM
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] vpp+dpdk #dpdk

I have an issue where the same buf_info shows up with a different device name,
as in the picture below. What should I do to bind the two devices called
enp6s0f01d and enp6s0f02d? Thanks.
[screenshot attachment: image001.png]




Re: [vpp-dev] vpp+dpdk

2022-12-18 Thread first_semon
Can anyone answer my question? Anyone official? I am using the PCI device
called N10.




Re: [vpp-dev] vpp+dpdk

2022-12-18 Thread first_semon
Can anyone answer my question? Anyone official?




Re: [vpp-dev] vpp core & bound checking

2022-12-14 Thread hemant via lists.fd.io
Also, gcc once maintained a "fat" pointer that stored the length of an array
for bounds checking, but it was later removed due to sanitizers such as
Valgrind.

https://gcc.gnu.org/wiki/MIRO?action=AttachFile&do=get&target=MIRO.pdf

Bounded for loops are being discussed in new programming languages for the
data plane. A bounded for loop is "for (i < k)", where the length of the
vector is stored in its type so that the compiler can unroll the loop and
verify the bound.

Hemant
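
As a minimal C sketch of that idea (illustrative only; this is not VPP code):
when the trip count is part of the type, the bound is known at compile time,
so the loop is fully unrollable and provably in range:

#include <stddef.h>
#include <stdint.h>

typedef struct { uint32_t v[8]; } vec8_t;   /* the length lives in the type */

static inline uint64_t
sum_vec8 (const vec8_t *x)
{
  uint64_t s = 0;
  for (size_t i = 0; i < 8; i++)   /* fixed trip count: unrollable, in bounds */
    s += x->v[i];
  return s;
}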

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of hemant via 
lists.fd.io
Sent: Wednesday, December 14, 2022 9:00 AM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] vpp core & bound checking

Hi Ben,

What kind of bound failure is causing the crashes? Is a for-loop terminator 
exceeding a bound, or networking data exceeding a bound? I can investigate 
changing the gcc compiler to check bounds at compile time.

Hemant

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Benoit Ganne
(bganne) via lists.fd.io
Sent: Wednesday, November 30, 2022 9:30 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp core & bound checking

Hi everyone,

I'd like to get the discussion started on the topic of bound checking in VPP:
some of us recently discussed a patch that added explicit bound checking 
within the VPP core dataplane infrastructure to prevent a crash when a 
function is misused from a plugin. The bug is clearly in the plugin which 
calls the VPP infra function, but it's a hard one to track, and when it 
happens it crashes VPP and breaks networking for users.

I think we do not want to do bound checking in the core VPP dataplane infra 
for performance reasons, and hence callers should be correct or nasty things 
will happen. In case a workaround is really needed, it should be done in the 
caller and probably maintained as a private patch until the bug is properly 
fixed.

What do people think?

Best
ben
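
To make the tradeoff concrete, here is a minimal sketch in C (illustrative
only, not actual VPP infra; the type and function names are invented):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { size_t len; uint32_t data[]; } u32_vec_t;  /* hypothetical */

/* Dataplane style: no check, the caller must be correct. */
static inline uint32_t
elt_unchecked (u32_vec_t *v, size_t i)
{
  return v->data[i];
}

/* Checked variant: one extra branch per access in the hot loop. */
static inline uint32_t
elt_checked (u32_vec_t *v, size_t i)
{
  assert (i < v->len);   /* compiled out with -DNDEBUG (release builds) */
  return v->data[i];
}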






Re: [vpp-dev] vpp core & bound checking

2022-12-14 Thread hemant via lists.fd.io
Hi Ben,

What kind of bound failure is causing the crashes? Is a for-loop terminator 
exceeding a bound, or networking data exceeding a bound? I can investigate 
changing the gcc compiler to check bounds at compile time.

Hemant

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Benoit Ganne 
(bganne) via lists.fd.io
Sent: Wednesday, November 30, 2022 9:30 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp core & bound checking

Hi everyone,

I'd like to get the discussion started on the topic of bound checking in VPP: 
some of us recently discussed a patch that added explicit bound checking 
within the VPP core dataplane infrastructure to prevent a crash when a 
function is misused from a plugin. The bug is clearly in the plugin which 
calls the VPP infra function, but it's a hard one to track, and when it 
happens it crashes VPP and breaks networking for users.

I think we do not want to do bound checking in the core VPP dataplane infra 
for performance reasons, and hence callers should be correct or nasty things 
will happen. In case a workaround is really needed, it should be done in the 
caller and probably maintained as a private patch until the bug is properly 
fixed.

What do people think?

Best
ben






[vpp-dev] vpp+dpdk #dpdk

2022-12-12 Thread first_semon
I have an issue where the same buf_info shows up with a different device name,
as in the attached picture. What should I do to bind the two devices called
enp6s0f01d and enp6s0f02d? Thanks.




[vpp-dev] VPP & HICN docs verify jobs now upload generated docs to a 7-day retention S3 bucket

2022-12-12 Thread Dave Wallace

Folks,

The VPP and HICN docs verify jobs now upload the generated docs for each 
patch to a 7-day retention bucket in Amazon S3 storage. The upload URL 
for the docs is the same as the log URL for the docs job except that the 
domain name is 's3-docs-7day.fd.io' instead of 's3-logs.fd.io'.


For example, the following VPP docs job:

https://s3-logs.fd.io/vex-yul-rot-jenkins-1/vpp-docs-verify-master-ubuntu2204-x86_64/742/

uploaded the VPP documentation generated for the associated patch to:

https://s3-docs-7day.fd.io/vex-yul-rot-jenkins-1/vpp-docs-verify-master-ubuntu2204-x86_64/742/


Similarly, the HICN docs job:

https://s3-logs.fd.io/vex-yul-rot-jenkins-1/hicn-docs-verify-master-ubuntu2004-x86_64/433/

uploaded the HICN documentation generated for the associated patch to:

https://s3-docs-7day.fd.io/vex-yul-rot-jenkins-1/hicn-docs-verify-master-ubuntu2004-x86_64/433/


If you attempt to access a URL for a docs verify job that is more than 7 
days old, then you will get a 404 error:


https://s3-docs-7day.fd.io/vex-yul-rot-jenkins-1/vpp-docs-verify-master-ubuntu2204-x86_64/658

 %< 
404 Not Found

    Code: NoSuchKey
    Message: The specified key does not exist.
    Key: vex-yul-rot-jenkins-1/vpp-docs-verify-master-ubuntu2204-x86_64/658
    RequestId: VZSP8G8QK91WF3AB
    HostId: 
X6+NkeAuRXpVGgDCq27Yqg7vbg6VdlyqS1utS+nPrJjCjIjDdQ+p6INLPuPfjHdMIWUW40ANht4=


An Error Occurred While Attempting to Retrieve a Custom Error Document

    Code: NoSuchKey
    Message: The specified key does not exist.
    Key: error.html
 %< 

Thanks,
-daw-




Re: [vpp-dev] VPP crashes on LX2160A platform

2022-12-10 Thread agv100
Hoi Pim,

VPP runs on this board, but only with some tricks.

1. In the past, VPP had a build-data/platform/dpaa.mk file, which holds the 
DPDK build parameters for dpaa2-based systems. It was removed from the 
fdio/vpp repository a few years ago.
For cross-compiling VPP to that platform, or for native compilation on the 
platform, you should retrieve it and put it into your git clone of a recent 
VPP.

2. Also, you need to find the "LSDK" ported version in the platform code 
repository (it is also quite outdated, 19.something at the latest) and copy 
its src/plugins/dpdk/buffer.c file into your recent clone. Otherwise VPP will 
segfault on start if it sees interfaces.

Then, you need to build vpp with PLATFORM=dpaa2.

After installation and before starting, you should have DPNI interfaces 
connected to the platform DPMACs, either statically from data path layout 
(DPL) files or dynamically via restool scripts. Note that they should not be 
bound to Linux kernel interfaces. You may use scripts from the platform SDK 
DPDK distribution, or assign resources manually with restool.
Then export the DPRC=dprc.X container environment variable and start VPP. It 
will start, it will see the interfaces, and it may even forward packets, but 
the lack of stability makes it unusable.

Which system do you use on the board? I do not know why, but with a SolidRun 
binary from June 2022 or earlier, VPP sees incoming packets and can forward 
them with amazing performance, but it is really unstable, especially if you 
try to use the Linux-CP plugin: it segfaults or hangs every few minutes.
With fresh binaries, as well as the Ubuntu-Core currently built from the 
SolidRun scripts, VPP cannot see any incoming packets.

I have tried different compilers and different approaches to the build, with 
no luck getting it to work well.
VPP will start, but I have not yet managed to make it run more or less stably.
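
A condensed, hedged sketch of the steps described above (illustrative; the
exact make invocation may vary by tree, and dprc.2 is an example container
name, so use whatever restool created for you):

# 1. restore build-data/platform/dpaa.mk and the LSDK buffer.c (see above)
# 2. build with the dpaa2 platform:
make build PLATFORM=dpaa2
# 3. point VPP at the DPAA2 resource container, then start it:
export DPRC=dprc.2
vpp -c /etc/vpp/startup.conf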




Re: [vpp-dev] VPP crash while create ipip tunnel after a certain limit

2022-12-08 Thread Sudhir CR via lists.fd.io
To my knowledge there is no such ratio between heapsize and statseg;
you can tune these values based on your application's needs.
In your case, as the number of ipip tunnels is large, you may need to
increase the statseg size to accommodate the counters for those tunnel
interfaces.

Thanks and Regards,
Sudhir

On Fri, Dec 9, 2022 at 12:05 AM Chinmaya Aggarwal 
wrote:

> On Wed, Dec 7, 2022 at 08:04 PM, Sudhir CR wrote:
>
>  heapsize and statseg
>
> Thanks for your response. What should be the ideal ratio between heapsize
> and statseg?
>
> Thanks and Regards,
> Chinmaya Agarwal.
> 
>
>
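
For illustration, a hedged startup.conf sketch of the tuning suggested above
(the sizes are placeholders rather than recommendations; scale them to your
tunnel count and available memory):

heapsize 4G

statseg {
size 2G
}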






Re: [vpp-dev] VPP crash while create ipip tunnel after a certain limit

2022-12-08 Thread Chinmaya Aggarwal
On Wed, Dec 7, 2022 at 08:04 PM, Sudhir CR wrote:

> 
> heapsize and statseg

Thanks for your response. What should be the ideal ratio between heapsize and 
statseg?

Thanks and Regards,
Chinmaya Agarwal.




Re: [vpp-dev] VPP crash while create ipip tunnel after a certain limit

2022-12-07 Thread Sudhir CR via lists.fd.io
Hi Chinmaya Aggarwal,
I can see the "vec_resize_allocate_memory" API in the above stack, and you
say that the issue is seen after ~7k tunnels.
I suspect this issue could be due to memory exhaustion in the system.

Can you please increase the heapsize and statseg size in the startup.conf
file and check once?

Thanks and regards,
Sudhir



On Thu, Dec 8, 2022 at 4:25 AM Chinmaya Aggarwal 
wrote:

> Hi,
>
> As per our use case, we need to have a large number of ipip tunnels in VPP
> (approx 10000). When we try to configure that many tunnels inside VPP,
> after a certain limit it crashes with below core dump:-
>
> Dec 07 20:01:27 j3norvmstm01 vpp[2053130]: ipipCouldn't create
> /tmp/api_post_mortem.2053130
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: received signal SIGABRT, PC
> 0x7f019f8e537f
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #0  0x7f01a0b3ef0b
> 0x7f01a0b3ef0b
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #1  0x7f01a0478c20
> 0x7f01a0478c20
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #2  0x7f019f8e537f gsignal
> + 0x10f
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #3  0x7f019f8cfdb5 abort +
> 0x127
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #4  0x55ca1a5f60e3
> 0x55ca1a5f60e3
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #5  0x7f01a0006065
> vec_resize_allocate_memory + 0x285
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #6  0x55ca1a5f6cb0
> 0x55ca1a5f6cb0
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #7  0x55ca1a5f97f8
> 0x55ca1a5f97f8
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #8  0x55ca1a5faf09
> 0x55ca1a5faf09
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #9  0x7f01a10adefa
> 0x7f01a10adefa
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #10 0x7f01a10b263d
> vnet_register_interface + 0x6ed
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #11 0x7f01a1415094
> ipip_add_tunnel + 0x2c4
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #12 0x7f01a141a4f0
> 0x7f01a141a4f0
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #13 0x7f01a0acdb82
> 0x7f01a0acdb82
> Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #14 0x7f01a0acdce7
> 0x7f01a0acdce7
> Dec 07 20:01:27 j3norvmstm01 vpp[2053130]: Couldn't create
> /tmp/api_post_mortem.2053130
>
> It is able to create only 7362 tunnels and after that VPP crashes.
>
> What could be the possible reason for this crash? Also, is there any limit
> on the number of ipip tunnels (or interface created corresponding to ipip
> tunnels) in VPP?
>
> Thanks and Regards,
> Chinmaya Agarwal.
>
> 
>
>






[vpp-dev] VPP crash while create ipip tunnel after a certain limit

2022-12-07 Thread Chinmaya Aggarwal
Hi,

As per our use case, we need to have a large number of ipip tunnels in VPP 
(approx 10000). When we try to configure that many tunnels inside VPP, after a 
certain limit it crashes with the below core dump:

Dec 07 20:01:27 j3norvmstm01 vpp[2053130]: ipipCouldn't create 
/tmp/api_post_mortem.2053130
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: received signal SIGABRT, PC 
0x7f019f8e537f
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #0  0x7f01a0b3ef0b 
0x7f01a0b3ef0b
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #1  0x7f01a0478c20 
0x7f01a0478c20
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #2  0x7f019f8e537f gsignal + 
0x10f
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #3  0x7f019f8cfdb5 abort + 0x127
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #4  0x55ca1a5f60e3 
0x55ca1a5f60e3
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #5  0x7f01a0006065 
vec_resize_allocate_memory + 0x285
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #6  0x55ca1a5f6cb0 
0x55ca1a5f6cb0
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #7  0x55ca1a5f97f8 
0x55ca1a5f97f8
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #8  0x55ca1a5faf09 
0x55ca1a5faf09
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #9  0x7f01a10adefa 
0x7f01a10adefa
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #10 0x7f01a10b263d 
vnet_register_interface + 0x6ed
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #11 0x7f01a1415094 
ipip_add_tunnel + 0x2c4
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #12 0x7f01a141a4f0 
0x7f01a141a4f0
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #13 0x7f01a0acdb82 
0x7f01a0acdb82
Dec 07 20:01:27 j3norvmstm01 vnet[2053130]: #14 0x7f01a0acdce7 
0x7f01a0acdce7
Dec 07 20:01:27 j3norvmstm01 vpp[2053130]: Couldn't create 
/tmp/api_post_mortem.2053130

It is able to create only 7362 tunnels and after that VPP crashes.

What could be the possible reason for this crash? Also, is there any limit on 
the number of ipip tunnels (or interfaces created corresponding to ipip 
tunnels) in VPP?

Thanks and Regards,
Chinmaya Agarwal.




Re: [vpp-dev] VPP crashes on LX2160A platform

2022-12-01 Thread Pim van Pelt via lists.fd.io
Hoi,

VPP does not run on that board, because the SoC does not enumerate its
DPDK-enabled interfaces on the PCIe bus, but rather has a custom bus with
which VPP is not integrated.
Incidentally, I did get VPP itself to run, but without DPDK (or AVF, etc.)
interfaces; its basic functionality (i.e. create loop, tunnel, tap) did
seem to work, just no accelerated interfaces.
There were also issues with the 25G and 100G ports on the SerDes; it seemed
to only want to run 8x10G.

Ben also had a few insights on my previous post to the vpp-dev@ list:
https://lists.fd.io/g/vpp-dev/message/21984

groet,
Pim

On Thu, Dec 1, 2022 at 12:59 PM  wrote:

> Dear VPP community,
>
>
> I'm trying to operate VPP on SolidRun LX2160 board, which is based on 16
> cores A72 NXP SoC, unfortunately, with little success. Does anybody have
> any experience with running VPP on such boards?
>
>
> The performance in my tests is quite good (more then 4mpps NDR) , but VPP
> works very unstable and segfaults in time interval from seconds to hours
> after start.
> The events causing segfaults were not identified. It may happen (and
> usually) when you walk through CLI. It may happen (less frequently) when
> just forwarding packets without a touch to vppctl. Applying config longer
> then few lines usually cause that. Second VPPCTL connection usually couses
> that,
>
> I was trying the following versions of VPP with literally same results:
>
> - VPP 21.01 from LSDK distribution, built on the board natively
> - VPP 22.10, from Master branch, crossbuilt using
> https://docs.nxp.com/bundle/GUID-87AD3497-0BD4-4492-8040-3F3BE0F2B087/page/GUID-8A75A4AD-2EB9-4A5A-A784-465B98E67951.html
> - VPP 22.08, built using flexbuild tool (from same link above).
>
> I was trying different settings of main_heap memory pool (size, pagesize),
> different hugepages settings (standard 4k, huge 2M, huge 1G), but there
> were no serious improvement. It looks like 22.08 most stable and may last
> for few hours.
>
> As performance looks promising, I'm really looking forward to make it work
> stable. Can somebody please  advice , where do I need to look at  to fix
> the problem? There are , according to CSIT, good results on other ARM v8
> platforms.
> As for OS, I'm using pre-built Ubuntu Core-based distribution from
> SolidRun.
>
> See below OS information, logs with crash. See in attachement: Platform
> dmesg and GDB trace of 22.10 crash.
> Below are system logs of VPP crashes.
>
> abramov@nc2s5:~$ cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=20.04
> DISTRIB_CODENAME=focal
> DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS"
> abramov@nc2s5:~$ uname -a
> Linux nc2s5 5.10.35-00018-gbb124648d42c #1 SMP PREEMPT Wed May 11 17:07:05
> UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
> abramov@nc2s5:~$
>
>
> Dec 01 10:35:42 nc2s5 vnet[2259]: received signal SIGSEGV, PC unsupported,
> faulting address 0x2d3ba50a885
> Dec 01 10:35:42 nc2s5 vnet[2259]: #0  0xa7df2e2c 0xa7df2e2c
> Dec 01 10:35:42 nc2s5 vnet[2259]: #1  0xa95ad588 0xa95ad588
> Dec 01 10:35:42 nc2s5 vnet[2259]: #2  0xa7da0090
> vlib_node_runtime_sync_stats + 0x0
> Dec 01 10:35:42 nc2s5 vnet[2259]: #3  0xa7da191c
> vlib_node_sync_stats + 0x4c
> Dec 01 10:35:42 nc2s5 vnet[2259]: #4  0xa7dd973c
> vlib_worker_thread_barrier_release + 0x45c
> Dec 01 10:35:42 nc2s5 vnet[2259]: #5  0xa7de6ef4 0xa7de6ef4
> Dec 01 10:35:42 nc2s5 vnet[2259]: #6  0xa7de827c 0xa7de827c
> Dec 01 10:35:42 nc2s5 vnet[2259]: #7  0xa7df00dc 0xa7df00dc
> Dec 01 10:35:42 nc2s5 vnet[2259]: #8  0xa7da5e04 vlib_main + 0x8f4
> Dec 01 10:35:42 nc2s5 vnet[2259]: #9  0xa7df1d8c 0xa7df1d8c
> Dec 01 10:35:42 nc2s5 vnet[2259]: #10 0xa7c36f8c clib_calljmp +
> 0x24
>
> Dec 01 10:26:56 nc2s5 vnet[2232]: received signal SIGSEGV, PC unsupported,
> faulting address 0x208
> Dec 01 10:26:56 nc2s5 vnet[2232]: #0  0xa4bebe2c 0xa4bebe2c
> Dec 01 10:26:56 nc2s5 vnet[2232]: #1  0xa63a6588 0xa63a6588
> Dec 01 10:26:56 nc2s5 vnet[2232]: #2  0xa6340aa8 0xa6340aa8
> Dec 01 10:26:56 nc2s5 vnet[2232]: #3  0xa4b9f150 vlib_main + 0xc40
> Dec 01 10:26:56 nc2s5 vnet[2232]: #4  0xa4bead8c 0xa4bead8c
> Dec 01 10:26:56 nc2s5 vnet[2232]: #5  0xa4a2ff8c clib_calljmp +
> 0x24
> 
>
>

-- 
Pim van Pelt 
PBVP1-RIPE - http://www.ipng.nl/




Re: [SUSPECTED SPAM] [vpp-dev] VPP crashes on LX2160A platform

2022-12-01 Thread Benoit Ganne (bganne) via lists.fd.io
Hi,

I think the first thing to try is whether you can reproduce it with a debug 
build. The backtrace you have is unfortunately not usable.
The easiest approach is to build debug binaries:
~# make rebuild
~# ./build-root/install-vpp_debug-native/vpp/bin/vpp -c /etc/vpp/startup.conf

Best
Ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of
> agv...@gmail.com
> Sent: Thursday, December 1, 2022 12:59
> To: vpp-dev@lists.fd.io
> Subject: [SUSPECTED SPAM] [vpp-dev] VPP crashes on LX2160A platform
> 
> Dear VPP community,
> 
> 
> I'm trying to operate VPP on SolidRun LX2160 board, which is based on 16
> cores A72 NXP SoC, unfortunately, with little success. Does anybody have
> any experience with running VPP on such boards?
> 
> 
> The performance in my tests is quite good (more then 4mpps NDR) , but VPP
> works very unstable and segfaults in time interval from seconds to hours
> after start.
> The events causing segfaults were not identified. It may happen (and
> usually) when you walk through CLI. It may happen (less frequently) when
> just forwarding packets without a touch to vppctl. Applying config longer
> then few lines usually cause that. Second VPPCTL connection usually couses
> that,
> 
> I was trying the following versions of VPP with literally same results:
> 
> - VPP 21.01 from LSDK distribution, built on the board natively
> - VPP 22.10, from Master branch, crossbuilt using
> https://docs.nxp.com/bundle/GUID-87AD3497-0BD4-4492-8040-
> 3F3BE0F2B087/page/GUID-8A75A4AD-2EB9-4A5A-A784-465B98E67951.html
> - VPP 22.08, built using flexbuild tool (from same link above).
> 
> I was trying different settings of main_heap memory pool (size, pagesize),
> different hugepages settings (standard 4k, huge 2M, huge 1G), but there
> were no serious improvement. It looks like 22.08 most stable and may last
> for few hours.
> 
> As performance looks promising, I'm really looking forward to make it work
> stable. Can somebody please  advice , where do I need to look at  to fix
> the problem? There are , according to CSIT, good results on other ARM v8
> platforms.
> As for OS, I'm using pre-built Ubuntu Core-based distribution from
> SolidRun.
> 
> See below OS information, logs with crash. See in attachement: Platform
> dmesg and GDB trace of 22.10 crash.
> Below are system logs of VPP crashes.
> 
> abramov@nc2s5:~$ cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=20.04
> DISTRIB_CODENAME=focal
> DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS"
> abramov@nc2s5:~$ uname -a
> Linux nc2s5 5.10.35-00018-gbb124648d42c #1 SMP PREEMPT Wed May 11 17:07:05
> UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
> abramov@nc2s5:~$
> 
> 
> Dec 01 10:35:42 nc2s5 vnet[2259]: received signal SIGSEGV, PC unsupported,
> faulting address 0x2d3ba50a885
> Dec 01 10:35:42 nc2s5 vnet[2259]: #0  0xa7df2e2c 0xa7df2e2c
> Dec 01 10:35:42 nc2s5 vnet[2259]: #1  0xa95ad588 0xa95ad588
> Dec 01 10:35:42 nc2s5 vnet[2259]: #2  0xa7da0090
> vlib_node_runtime_sync_stats + 0x0
> Dec 01 10:35:42 nc2s5 vnet[2259]: #3  0xa7da191c
> vlib_node_sync_stats + 0x4c
> Dec 01 10:35:42 nc2s5 vnet[2259]: #4  0xa7dd973c
> vlib_worker_thread_barrier_release + 0x45c
> Dec 01 10:35:42 nc2s5 vnet[2259]: #5  0xa7de6ef4 0xa7de6ef4
> Dec 01 10:35:42 nc2s5 vnet[2259]: #6  0xa7de827c 0xa7de827c
> Dec 01 10:35:42 nc2s5 vnet[2259]: #7  0xa7df00dc 0xa7df00dc
> Dec 01 10:35:42 nc2s5 vnet[2259]: #8  0xa7da5e04 vlib_main + 0x8f4
> Dec 01 10:35:42 nc2s5 vnet[2259]: #9  0xa7df1d8c 0xa7df1d8c
> Dec 01 10:35:42 nc2s5 vnet[2259]: #10 0xa7c36f8c clib_calljmp +
> 0x24
> 
> Dec 01 10:26:56 nc2s5 vnet[2232]: received signal SIGSEGV, PC unsupported,
> faulting address 0x208
> Dec 01 10:26:56 nc2s5 vnet[2232]: #0  0xa4bebe2c 0xa4bebe2c
> Dec 01 10:26:56 nc2s5 vnet[2232]: #1  0xa63a6588 0xa63a6588
> Dec 01 10:26:56 nc2s5 vnet[2232]: #2  0xa6340aa8 0xa6340aa8
> Dec 01 10:26:56 nc2s5 vnet[2232]: #3  0xa4b9f150 vlib_main + 0xc40
> Dec 01 10:26:56 nc2s5 vnet[2232]: #4  0xa4bead8c 0xa4bead8c
> Dec 01 10:26:56 nc2s5 vnet[2232]: #5  0xa4a2ff8c clib_calljmp +
> 0x24




[vpp-dev] VPP crashes on LX2160A platform

2022-12-01 Thread agv100
Dear VPP community,

I'm trying to operate VPP on a SolidRun LX2160 board, which is based on a 
16-core A72 NXP SoC, unfortunately with little success. Does anybody have any 
experience with running VPP on such boards?

The performance in my tests is quite good (more than 4 Mpps NDR), but VPP is 
very unstable and segfaults anywhere from seconds to hours after start.
The events causing the segfaults were not identified. It may happen (and 
usually does) when you walk through the CLI. It may happen (less frequently) 
when just forwarding packets without touching vppctl. Applying a config 
longer than a few lines usually causes it. A second vppctl connection usually 
causes it too.

I have tried the following versions of VPP, with literally the same results:

- VPP 21.01 from the LSDK distribution, built natively on the board
- VPP 22.10, from the master branch, cross-built using 
https://docs.nxp.com/bundle/GUID-87AD3497-0BD4-4492-8040-3F3BE0F2B087/page/GUID-8A75A4AD-2EB9-4A5A-A784-465B98E67951.html
- VPP 22.08, built using the flexbuild tool (from the same link above).

I have tried different settings of the main_heap memory pool (size, page 
size) and different hugepage settings (standard 4k, huge 2M, huge 1G), but 
there was no serious improvement. It looks like 22.08 is the most stable and 
may last for a few hours.

As the performance looks promising, I'm really looking forward to making it 
work stably. Can somebody please advise where I need to look to fix the 
problem? According to CSIT, there are good results on other ARM v8 platforms.
As for the OS, I'm using a pre-built Ubuntu Core-based distribution from 
SolidRun.

See the OS information and crash logs below. See the attachments for the 
platform dmesg and a GDB trace of the 22.10 crash.
Below are system logs of the VPP crashes.

abramov@nc2s5:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS"
abramov@nc2s5:~$ uname -a
Linux nc2s5 5.10.35-00018-gbb124648d42c #1 SMP PREEMPT Wed May 11 17:07:05 UTC 
2022 aarch64 aarch64 aarch64 GNU/Linux
abramov@nc2s5:~$

Dec 01 10:35:42 nc2s5 vnet[2259]: received signal SIGSEGV, PC unsupported, 
faulting address 0x2d3ba50a885
Dec 01 10:35:42 nc2s5 vnet[2259]: #0  0xa7df2e2c 0xa7df2e2c
Dec 01 10:35:42 nc2s5 vnet[2259]: #1  0xa95ad588 0xa95ad588
Dec 01 10:35:42 nc2s5 vnet[2259]: #2  0xa7da0090 
vlib_node_runtime_sync_stats + 0x0
Dec 01 10:35:42 nc2s5 vnet[2259]: #3  0xa7da191c vlib_node_sync_stats + 
0x4c
Dec 01 10:35:42 nc2s5 vnet[2259]: #4  0xa7dd973c 
vlib_worker_thread_barrier_release + 0x45c
Dec 01 10:35:42 nc2s5 vnet[2259]: #5  0xa7de6ef4 0xa7de6ef4
Dec 01 10:35:42 nc2s5 vnet[2259]: #6  0xa7de827c 0xa7de827c
Dec 01 10:35:42 nc2s5 vnet[2259]: #7  0xa7df00dc 0xa7df00dc
Dec 01 10:35:42 nc2s5 vnet[2259]: #8  0xa7da5e04 vlib_main + 0x8f4
Dec 01 10:35:42 nc2s5 vnet[2259]: #9  0xa7df1d8c 0xa7df1d8c
Dec 01 10:35:42 nc2s5 vnet[2259]: #10 0xa7c36f8c clib_calljmp + 0x24

Dec 01 10:26:56 nc2s5 vnet[2232]: received signal SIGSEGV, PC unsupported, 
faulting address 0x208
Dec 01 10:26:56 nc2s5 vnet[2232]: #0  0xa4bebe2c 0xa4bebe2c
Dec 01 10:26:56 nc2s5 vnet[2232]: #1  0xa63a6588 0xa63a6588
Dec 01 10:26:56 nc2s5 vnet[2232]: #2  0xa6340aa8 0xa6340aa8
Dec 01 10:26:56 nc2s5 vnet[2232]: #3  0xa4b9f150 vlib_main + 0xc40
Dec 01 10:26:56 nc2s5 vnet[2232]: #4  0xa4bead8c 0xa4bead8c
Dec 01 10:26:56 nc2s5 vnet[2232]: #5  0xa4a2ff8c clib_calljmp + 0x24
Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
0xf67500f4 in vlib_node_runtime_update (next_index=, 
node_index=518, vm=0x363bf700)
at /home/k.lakaev/vpp/vpp-github/src/vlib/node.c:122
122 /home/k.lakaev/vpp/vpp-github/src/vlib/node.c: No such file or 
directory.


(gdb) bt full
#0  0xf67500f4 in vlib_node_runtime_update (next_index=, 
node_index=518, vm=0x363bf700)
at /home/k.lakaev/vpp/vpp-github/src/vlib/node.c:122
nm = 0x363bf8c8
j = 
r = 
node = 
pf = 
s = 
next_node = 
nf = 
i = 1501
n_insert = 
nm = 
r = 
s = 
node = 
next_node = 
nf = 
pf = 
i = 
j = 
n_insert = 
__FUNCTION__ = 
#1  vlib_node_add_next_with_slot (vm=0x363bf700, 
node_index=node_index@entry=518, next_node_index=692, slot=2,
slot@entry=18446744073709551615) at 
/home/k.lakaev/vpp/vpp-github/src/vlib/node.c:217
nm = 0x363bf8c8
node = 0x4a131710
next = 0x58bf3e20
old_next = 
--Type <RET> for more, q to quit, c to continue without paging--
old_next_index = 
p = 
__FUNCTION__ = "vlib_node_add_next_with_slot"
#2  0xf70fe618 in vlib_node_add_next (next_node=, 
node=518, vm=)
at /home/k.lakaev/vpp/vpp-github/src/vlib/node_funcs.h:1273
No 
