Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-26 Thread agv100
Hi Stanislav,

The situation is better now, but still only half of the problem is solved :-)
OSI IS-IS packets are passed from the network to the tap, but not from the tap
to the network.
These are on the TAP interface, where :78 is the VPP-based router and :7a is
the non-VPP peer; both directions are seen:
13:03:22.911887 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0001, length 1497
13:03:23.433773 3c:ec:ef:5f:77:8f > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0001, length 1497
These are on the opposite side of the link (a Linux IS-IS router without VPP);
only outgoing packets are seen:
13:08:54.796588 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0001, length 1497
13:08:57.662629 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0001, length 1497

Also, it looks like lcp auto-subint is broken; VPP aborts on ip link add on the
TAP device instead of creating the subif. I'll provide a backtrace later on.

Jan 26 12:57:04 tn3 vnet[1133419]: unix_signal_handler:191: received signal 
SIGWINCH, PC 0x7fdd59a34f41
Jan 26 12:57:11 tn3 vnet[1133419]: received signal SIGWINCH, PC 0x7fdd59a34f41
Jan 26 12:57:11 tn3 vnet[1133419]: #0  0x7fdd59a95c92 unix_signal_handler + 
0x1f2
Jan 26 12:57:11 tn3 vnet[1133419]: #1  0x7fdd59993420 0x7fdd59993420
Jan 26 12:57:11 tn3 vnet[1133419]: #2  0x7fdd5a5b8f00 
virtio_refill_vring_split + 0x60
Jan 26 12:57:11 tn3 vnet[1133419]: #3  0x7fdd5a5b7f52 
virtio_device_input_inline + 0x2f2
Jan 26 12:57:11 tn3 vnet[1133419]: #4  0x7fdd5a5b7acb 
virtio_input_node_fn_skx + 0x19b
Jan 26 12:57:11 tn3 vnet[1133419]: #5  0x7fdd59a3515d dispatch_node + 0x33d
Jan 26 12:57:11 tn3 vnet[1133419]: #6  0x7fdd59a30c72 
vlib_main_or_worker_loop + 0x632
Jan 26 12:57:11 tn3 vnet[1133419]: #7  0x7fdd59a3277a vlib_main_loop + 0x1a
Jan 26 12:57:11 tn3 vnet[1133419]: #8  0x7fdd59a3229a vlib_main + 0x60a
Jan 26 12:57:11 tn3 vnet[1133419]: #9  0x7fdd59a94a14 thread0 + 0x44
Jan 26 12:57:11 tn3 vnet[1133419]: #10 0x7fdd598e43d8 0x7fdd598e43d8


Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-25 Thread agv100
Hi Stanislav!

Here it is!

00:31:48:504910: dpdk-input
TenGigabitEthernet1c/0/1 rx queue 0
buffer 0x9ad69: current data 0, length 1518, buffer-pool 0, ref-count 1, trace 
handle 0xb
ext-hdr-valid
PKT MBUF: port 0, nb_segs 1, pkt_len 1518
buf_len 2176, data_len 1518, ol_flags 0x180, data_off 128, phys_addr 0x188b5ac0
packet_type 0x1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Offload Flags
PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
PKT_RX_IP_CKSUM_NONE (0x0090) no IP cksum of RX pkt.
PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
PKT_RX_L4_CKSUM_NONE (0x0108) no L4 cksum of RX pkt.
Packet Types
RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
0x05dc: 3c:ec:ef:5f:78:7a -> 09:00:2b:00:00:05 802.1q vlan 1914
00:31:48:504913: ethernet-input
frame: flags 0x3, hw-if-index 1, sw-if-index 1
0x05dc: 3c:ec:ef:5f:78:7a -> 09:00:2b:00:00:05 802.1q vlan 1914
00:31:48:504917: llc-input
LLC osi_layer5 -> osi_layer5
00:31:48:504918: osi-input
OSI isis
00:31:48:504919: error-drop
rx:TenGigabitEthernet1c/0/1.1914
00:31:48:504920: drop
osi-input: unknown osi protocol
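
(For context: as I understand it, osi-input drops a frame with "unknown osi
protocol" when no next node has been registered for its NLPID, even though
0x83 is defined in the enum. A minimal sketch of such a registration, assuming
osi_register_input_protocol() from src/vnet/osi/osi.c; the punt node name is
hypothetical:)

/* Sketch only: steer OSI IS-IS (NLPID 0x83) frames to a punt node
 * instead of the default drop. "my-isis-punt" is a made-up name. */
#include <vlib/vlib.h>
#include <vnet/osi/osi.h>

static clib_error_t *
my_isis_punt_init (vlib_main_t *vm)
{
  vlib_node_t *node = vlib_get_node_by_name (vm, (u8 *) "my-isis-punt");
  if (node)
    osi_register_input_protocol (OSI_PROTOCOL_isis, node->index);
  return 0;
}

VLIB_INIT_FUNCTION (my_isis_punt_init);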


[vpp-dev] VPP Linux-CP/Linux-NL : MPLS?

2023-01-24 Thread agv100
Hello,

I'm trying to populate the MPLS FIB via the Linux-CP plugin.
MPLS records are created via FRR and populated into the Linux kernel routing
table (I use the default netns). Below one can see a "push" operation and a
"swap" operation.
MPLS table 0 was created in VPP by the "mpls table add 0" command, and MPLS was
enabled on all the interfaces, both towards the media and the taps. Still, I do
not see anything in the FIB. Should MPLS table sync work, or did I perhaps
forget to set something up in VPP?

root@tn3:/home/abramov# ip -f mpls route show
40050 as to 41000 via inet6 fd00:200::2 dev Ten0.1914 proto static
root@tn3:/home/abramov# ip -6 route show | grep 4
fd00:100::4 nhid 209  encap mpls  4 via fd00:200::2 dev Ten0.1914 proto 
static metric 20 pref medium
root@tn3:/home/abramov# vppctl

vpp# show mpls fib 0 40050
MPLS-VRF:0, fib_index:1 locks:[interface:4, CLI:1, ]
vpp# show ip6 fib
ipv6-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[adjacency:1, default-route:1, lcp-rt:1, ]
::/0
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:6 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd00:100::4/128
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:17 buckets:1 uRPF:17 to:[0:0]]
[0] [@5]: ipv6 via fd00:200::2 TenGigabitEthernet1c/0/1.1914: mtu:9000 next:5 
flags:[] 2af08d2cf6163cecef5f778f8100077a86dd
fd00:200::/64
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:15 buckets:1 uRPF:14 to:[0:0]]
[0] [@4]: ipv6-glean: [src:fd00:200::/64] TenGigabitEthernet1c/0/1.1914: 
mtu:9000 next:2 flags:[] 3cecef5f778f8100077a86dd
fd00:200::1/128
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:16 buckets:1 uRPF:15 to:[10:848]]
[0] [@20]: dpo-receive: fd00:200::1 on TenGigabitEthernet1c/0/1.1914
fd00:200::2/128
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:18 buckets:1 uRPF:12 to:[0:0]]
[0] [@5]: ipv6 via fd00:200::2 TenGigabitEthernet1c/0/1.1914: mtu:9000 next:5 
flags:[] 2af08d2cf6163cecef5f778f8100077a86dd
fe80::/10
unicast-ip6-chain
[@0]: dpo-load-balance: [proto:ip6 index:7 buckets:1 uRPF:6 to:[8:544]]
[0] [@14]: ip6-link-local
vpp# show mpls fib
MPLS-VRF:0, fib_index:1 locks:[interface:4, CLI:1, ]
ip4-explicit-null:neos/21 fib:1 index:30 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[43] locks:2 flags:exclusive, uPRF-list:31 len:0 itfs:[]
path:[53] pl-index:43 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dst-address,unicast lookup in interface's mpls table

forwarding:   mpls-neos-chain
[@0]: dpo-load-balance: [proto:mpls index:33 buckets:1 uRPF:31 to:[0:0]]
[0] [@4]: dst-address,unicast lookup in interface's mpls table
ip4-explicit-null:eos/21 fib:1 index:29 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[42] locks:2 flags:exclusive, uPRF-list:30 len:0 itfs:[]
path:[52] pl-index:42 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dst-address,unicast lookup in interface's ip4 table

forwarding:   mpls-eos-chain
[@0]: dpo-load-balance: [proto:mpls index:32 buckets:1 uRPF:30 to:[0:0]]
[0] [@3]: dst-address,unicast lookup in interface's ip4 table
router-alert:neos/21 fib:1 index:27 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[40] locks:2 flags:exclusive, uPRF-list:28 len:0 itfs:[]
path:[50] pl-index:40 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dpo-punt

forwarding:   mpls-neos-chain
[@0]: dpo-load-balance: [proto:mpls index:30 buckets:1 uRPF:28 to:[0:0]]
[0] [@2]: dpo-punt
router-alert:eos/21 fib:1 index:28 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[41] locks:2 flags:exclusive, uPRF-list:29 len:0 itfs:[]
path:[51] pl-index:41 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dpo-punt

forwarding:   mpls-eos-chain
[@0]: dpo-load-balance: [proto:mpls index:31 buckets:1 uRPF:29 to:[0:0]]
[0] [@2]: dpo-punt
ipv6-explicit-null:neos/21 fib:1 index:32 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[45] locks:2 flags:exclusive, uPRF-list:33 len:0 itfs:[]
path:[55] pl-index:45 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dst-address,unicast lookup in interface's mpls table

forwarding:   mpls-neos-chain
[@0]: dpo-load-balance: [proto:mpls index:35 buckets:1 uRPF:33 to:[0:0]]
[0] [@4]: dst-address,unicast lookup in interface's mpls table
ipv6-explicit-null:eos/21 fib:1 index:31 locks:2
special refs:1 entry-flags:exclusive, src-flags:added,contributing,active,
path-list:[44] locks:2 flags:exclusive, uPRF-list:32 len:0 itfs:[]
path:[54] pl-index:44 mpls weight=1 pref=0 exclusive:  oper-flags:resolved, 
cfg-flags:exclusive,
[@0]: dst-address,unicast lookup in interface's ip6 table

forwarding:   mpls-eos-chain
[@0]: dpo-load-balance: [proto:mpls in

Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-24 Thread agv100
Hi Stanislav,

Unfortunately, your patch didn't help. VPP builds, but IS-IS packets still 
cannot be passed between the CP and the wire.

Furthermore, it looks like the LCP lcp-auto-subint feature is broken:

root@tn3:/home/abramov/vpp# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/

vpp#
vpp#
vpp#
vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
TenGigabitEthernet1c/0/1          1     down   9000/0/0/0
local0                            0     down   0/0/0/0
vpp# set interface state TenGigabitEthernet1c/0/1 up
vpp# lcp create 1 host-if Ten0
vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
TenGigabitEthernet1c/0/1          1      up    9000/0/0/0             rx packets      2451
                                                                      rx bytes      228627
                                                                      tx packets         7
                                                                      tx bytes         746
                                                                      drops           2451
                                                                      ip4                9
                                                                      ip6                2
local0                            0     down   0/0/0/0
tap1                              2      up    9000/0/0/0             rx packets         7
                                                                      rx bytes         746
                                                                      ip6                7
vpp# quit
root@tn3:/home/abramov/vpp# ip link set Ten0 up
root@tn3:/home/abramov/vpp# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/

vpp# lcp lcp
lcp-auto-subint  lcp-sync
vpp# lcp lcp-auto-subint on
vpp# lcp lcp-sync on
vpp# show lcp
lcp default netns ''
lcp lcp-auto-subint on
lcp lcp-sync on
lcp del-static-on-link-down off
lcp del-dynamic-on-link-down off
itf-pair: [0] TenGigabitEthernet1c/0/1 tap1 Ten0 1248 type tap
vpp# quit
root@tn3:/home/abramov/vpp# ip link add Ten0.1914 link Ten0 type vlan id 1914
root@tn3:/home/abramov/vpp# ip link set Ten0.1914 up
root@tn3:/home/abramov/vpp# vppctl
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/

vpp# show int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
TenGigabitEthernet1c/0/1          1      up    9000/0/0/0             rx packets     16501
                                                                      rx bytes     1519839
                                                                      tx packets         7
                                                                      tx bytes         746
                                                                      drops          16501
                                                                      ip4               39
                                                                      ip6                8
local0                            0     down   0/0/0/0
tap1                              2      up    9000/0/0/0             rx packets        17
                                                                      rx bytes       19710
                                                                      drops             10
                                                                      ip6                7

vpp# show node counters
   Count                    Node                              Reason                   Severity
      10              lldp-input               lldp packets received on disabled i       error
     516              dpdk-input               no error                                  error
      21            arp-disabled               ARP Disabled                              error
      74               osi-input               unknown osi protocol                      error
       5              snap-input               unknown oui/snap protocol                 error
      11          ethernet-input               unknown ethernet type                     error
   74127          ethernet-input               unknown vlan                              error
     145          ethernet-input               subinterface down                         error
vpp#
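
(A possible manual workaround, untested sketch: create the sub-interface and
the LCP pair by hand instead of relying on lcp-auto-subint:)

vpp# create sub-interfaces TenGigabitEthernet1c/0/1 1914
vpp# lcp create TenGigabitEthernet1c/0/1.1914 host-if Ten0.1914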


Re: [vpp-dev] VPP LCP: IS-IS does not work

2023-01-23 Thread agv100
Hoi Pim,

As for distinguishing IS-IS packets, I think that should not be really
difficult: they are simply all the packets with specific DST MACs:
09:00:2b:00:00:05, 09:00:2b:00:00:14, 09:00:2b:00:00:15.
It's hard to imagine a situation where they would need to be processed by the
dataplane.
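
(A minimal sketch of that kind of check — plain C, names are illustrative:)

/* Classify a frame as IS-IS by destination MAC, using the three
 * addresses listed above. Sketch only, not dataplane-grade code. */
#include <stdbool.h>
#include <string.h>

static const unsigned char isis_dst_macs[][6] = {
  { 0x09, 0x00, 0x2b, 0x00, 0x00, 0x05 },
  { 0x09, 0x00, 0x2b, 0x00, 0x00, 0x14 },
  { 0x09, 0x00, 0x2b, 0x00, 0x00, 0x15 },
};

static bool
dst_mac_is_isis (const unsigned char dst[6])
{
  for (size_t i = 0; i < sizeof (isis_dst_macs) / 6; i++)
    if (memcmp (dst, isis_dst_macs[i], 6) == 0)
      return true;
  return false;
}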


[vpp-dev] VPP LCP: IS-IS does not work

2023-01-23 Thread agv100
Dear VPP community,

I'm trying to set up an IS-IS neighborship with a node running VPP 22.10 + the
LCP plugin + FRR as control-plane software, with no results.

From what I can see, it looks like VPP does not pass IIH packets between the
network and the TAP interface, in either direction.
On the node running VPP, tcpdumping the host TAP interface shows outgoing IS-IS
IIHs:
15:12:27.195439 3c:ec:ef:5f:77:8f > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0001, length 1497
They do not appear on the opposite node (it runs frr/isisd without VPP); there,
only that node's own outgoing IIH packets are seen:
15:29:13.192912 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0002, length 1497
15:29:15.942959 3c:ec:ef:5f:78:7a > 09:00:2b:00:00:05, 802.3, length 1500: LLC, 
dsap OSI (0xfe) Individual, ssap OSI (0xfe) Command, ctrl 0x03: OSI NLPID IS-IS 
(0x83): p2p IIH, src-id ..0002, length 1497

Meanwhile, IP connectivity between the nodes exists. Here is the ICMP exchange
as seen on the TAP interface of the VPP host:
15:24:15.169021 3c:ec:ef:5f:77:8f > 3c:ec:ef:5f:78:7a, ethertype IPv4 (0x0800), 
length 98: 10.114.1.1 > 10.114.1.100: ICMP echo request, id 144, seq 12, length 
64
15:24:15.169275 3c:ec:ef:5f:78:7a > 3c:ec:ef:5f:77:8f, ethertype IPv4 (0x0800), 
length 98: 10.114.1.100 > 10.114.1.1: ICMP echo reply, id 144, seq 12, length 64
15:24:15.329025 3c:ec:ef:5f:77:8f > 3c:ec:ef:5f:78:7a, ethertype IPv4 (0x0800), 
length 98: 10.114.1.1 > 10.114.1.100: ICMP echo request, id 122, seq 61503, 
length 64
15:24:15.329304 3c:ec:ef:5f:78:7a > 3c:ec:ef:5f:77:8f, ethertype IPv4 (0x0800), 
length 98: 10.114.1.100 > 10.114.1.1: ICMP echo reply, id 122, seq 61503, 
length 64

An OSPF neighborship can also be established, so the problem is IS-IS specific.
tn3# show ipv6 ospf6 neighbor
Neighbor ID Pri    DeadTime    State/IfState Duration I/F[State]
20.20.20.1    1    00:00:38 Full/DR  00:07:21 Ten0.1914[BDR]
tn3#

What I found: "show node counters" reports "osi-input unknown osi protocol" increasing.

   Count                    Node                              Reason                   Severity
      84              lldp-input               lldp packets received on disabled i       error
    4364              dpdk-input               no error                                  error
      20               arp-reply               ARP replies sent                          info
       9               arp-reply               IP4 source address matches local in       error
      19               arp-reply               ARP request IP4 source address lear       info
      43            arp-disabled               ARP Disabled                              error
    1252               osi-input               unknown osi protocol                      error
       4               ip6-input               ip6 source lookup miss                    error
      19    ip6-local-hop-by-hop               Unknown protocol ip6 local h-b-h pa       error
      10               ip4-local               ip4 source lookup miss                    error
       4          ip6-icmp-input               neighbor solicitations for unknown        error
       4          ip6-icmp-input               neighbor advertisements sent              info
     106          ip6-icmp-input               neighbor discovery not configured         error
      42              snap-input               unknown oui/snap protocol                 error
      49          ethernet-input               unknown ethernet type                     error
  623375          ethernet-input               unknown vlan                              error
       1          ethernet-input               subinterface down                         error

On the other hand, I can see the IS-IS protocol in src/vnet/osi/osi.h:

#define foreach_osi_protocol  \
  _ (null, 0x0)               \
  _ (x_29, 0x01)              \
  _ (x_633, 0x03)             \
  _ (q_931, 0x08)             \
  _ (q_933, 0x08)             \
  _ (q_2931, 0x09)            \
  _ (q_2119, 0x0c)            \
  _ (snap, 0x80)              \
  _ (clnp, 0x81)              \
  _ (esis, 0x82)              \
  _ (isis, 0x83)              \
  _ (idrp, 0x85)              \
  _ (x25_esis, 0x8a)          \
  _ (iso10030, 0x8c)          \
  _ (iso11577, 0x8d)          \
  _ (ip6, 0x8e)               \
  _ (compressed, 0xb0)        \
  _ (sndcf, 0xc1)             \
  _ (ip4, 0xcc)               \
  _ (ppp, 0xcf)

So the protocol should not be "unknown".
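
(My guess at why it is still reported as unknown: the enum only names the
NLPIDs; osi-input forwards a frame only if an input node was registered for
that NLPID, and everything else falls through to the "unknown osi protocol"
drop. A rough sketch of that dispatch shape — illustrative names, not the
actual VPP structures:)

/* Sketch: NLPID-keyed dispatch with drop as the default. */
enum { OSI_NEXT_DROP = 0 };

typedef struct
{
  /* next node per NLPID; anything not registered stays at drop */
  unsigned char next_by_protocol[256];
} osi_dispatch_sketch_t;

static unsigned
osi_next_for (const osi_dispatch_sketch_t *d, unsigned char nlpid)
{
  return d->next_by_protocol[nlpid]; /* 0x83 drops unless registered */
}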

Any ideas where I should look to fix this IS-IS issue?


[vpp-dev] VPP: LX2160a failed to start - dpaa2_dev_rx_queue_setup

2023-01-19 Thread agv100
Hello,

I was trying to check "vanilla" VPP, without the integration of patches from
NXP: it works well on the LX2160A board with PCIe NICs (I'm using IGBs), but it
fails to start with DPAA2 ports enabled (the NXP-patched version starts with
DPAA2, but it is very unstable with both DPAA2 and PCIe Ethernets).

"Vanilla" VPP was built natively on the board, following the plain VPP build
instructions, either directly or in vagrant. Different versions were checked:
- 21.06
- 22.10
- 23.02 RC as-is, and with a patch to bump DPDK to 22.11
All give the same results: the PCIe-NICs-only configuration works well and is
stable; it crashes on initialization of the DPAA2 devices.

So, when DPAA2 ports are enabled and DPRC=dprc.X is set, pointing to the
container with the interfaces, we see the following on start of standard
vanilla VPP (the traces are identical for the versions above). Any ideas where
to look to fix the issue?

(gdb) run -c /etc/vpp/startup.conf
Starting program: 
/home/abramov/vpp-stable/build-root/install-vpp_debug-native/vpp/bin/vpp -c 
/etc/vpp/startup.conf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
[New Thread 0x716e6140 (LWP 1679)]
[New Thread 0x70ee5140 (LWP 1680)]
[New Thread 0x6bfff140 (LWP 1681)]

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
dpaa2_dev_rx_queue_setup (dev=0x765c5c80 , rx_queue_id=0, 
nb_rx_desc=1024, socket_id=0, rx_conf=0x719db3c0, mb_pool=0x170cac180)
at ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c:675
675 ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c: No such file or directory.
(gdb) bt
#0  dpaa2_dev_rx_queue_setup (dev=0x765c5c80 , 
rx_queue_id=0, nb_rx_desc=1024, socket_id=0, rx_conf=0x719db3c0, 
mb_pool=0x170cac180)
at ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c:675
#1  0x74eaf518 in rte_eth_rx_queue_setup (port_id=0, rx_queue_id=0, 
nb_rx_desc=1024, socket_id=0, rx_conf=0x719db478, mp=0x170cac180)
at ../src-dpdk/lib/librte_ethdev/rte_ethdev.c:2115
#2  0x7596b020 in dpdk_device_setup (xd=0x7c171500) at 
/root/vpp-stable2/src/plugins/dpdk/device/common.c:133
#3  0x7598b824 in dpdk_lib_init (dm=0x765b1220 ) at 
/root/vpp-stable2/src/plugins/dpdk/device/init.c:805
#4  0x75989874 in dpdk_process (vm=0x76c00680, rt=0x7a9da280, 
f=0x0) at /root/vpp-stable2/src/plugins/dpdk/device/init.c:1840
#5  0xf727aaf4 in vlib_process_bootstrap (_a=281472624744504) at 
/root/vpp-stable2/src/vlib/main.c:1284
#6  0xf7121348 in clib_calljmp () at 
/root/vpp-stable2/src/vppinfra/longjmp.S:809
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb)

(gdb) run -c /etc/vpp/startup.conf
Starting program: 
/home/abramov/vpp-stable/build-root/install-vpp_debug-native/vpp/bin/vpp -c 
/etc/vpp/startup.conf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
[New Thread 0x716e6140 (LWP 15156)]
[New Thread 0x70ee5140 (LWP 15157)]
[New Thread 0x6bfff140 (LWP 15158)]

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
dpaa2_dev_rx_queue_setup (dev=0x765c5c80 , rx_queue_id=0, 
nb_rx_desc=1024, socket_id=0, rx_conf=0x719db3c0, mb_pool=0x170cac180)
at ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c:675
675 ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c: No such file or directory.
(gdb) bt full
#0  dpaa2_dev_rx_queue_setup (dev=0x765c5c80 , 
rx_queue_id=0, nb_rx_desc=1024, socket_id=0, rx_conf=0x719db3c0, 
mb_pool=0x170cac180)
at ../src-dpdk/drivers/net/dpaa2/dpaa2_ethdev.c:675
priv = 0x2184295180
dpni = 0x2184295080
dpaa2_q = 0x719db3f8
cfg = {destination = {id = 49536, type = DPNI_DEST_DPIO, hold_active = 0 
'\000', priority = 179 '\263'}, user_context = 281472643297432, flc = {value = 
5035480,
stash_control = 128 '\200'}, cgid = 1024}
options = 0 '\000'
flow_id = 113 'q'
bpid = 65535
i = 65535
ret = 1961568588
__func__ = "dpaa2_dev_rx_queue_setup"
#1  0x74eaf518 in rte_eth_rx_queue_setup (port_id=0, rx_queue_id=0, 
nb_rx_desc=1024, socket_id=0, rx_conf=0x719db478, mp=0x170cac180)
at ../src-dpdk/lib/librte_ethdev/rte_ethdev.c:2115
ret = 0
mbp_buf_size = 2176
dev = 0x765c5c80 
dev_info = {device = 0x7edc90, driver_name = 0x75a2c2a8 "net_dpaa2", 
if_index = 0, min_mtu = 68, max_mtu = 65535, dev_flags = 0x2184297e9c,
min_rx_bufsize = 512, max_rx_pktlen = 10240, max_lro_pkt_size = 0, 
max_rx_queues = 128, max_tx_queues = 16, max_mac_addrs = 16, max_hash_mac_addrs 
= 0,
max_vfs = 0, max_vmdq_pools = 16, rx_seg_capa = {multi_pools = 0, 
offset_allowed = 0, offset_align_log2 = 0, max_nseg = 0, reserved = 0},
rx_offload_capa = 944719, tx_offload_capa = 114847, rx_queue_offload_capa = 0, 
tx_queue_offload_capa = 0, reta_size = 0, hash_key_size = 0 '\000',
flow_type_rss_offloads = 8590196732, default_rxconf = {rx_thresh = {pthresh = 0 
'\000', hthresh = 0 '\000', wthresh = 0 '\000'}, rx_free_thresh = 0,
rx_drop_en = 0 '\000', rx

Re: [SUSPECTED SPAM] [vpp-dev] VPP crashes on LX2160A platform

2022-12-22 Thread agv100
Hello,

The current build (22.10, cross-compiled via the SolidRun toolchain) crashes
regardless of the optimization level and, with debug enabled, shows the
following:

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0xf6d7caac in __GI_abort () at abort.c:79
#2  0x00406fe4 in os_panic () at /work/build/vpp/src/vpp/vnet/main.c:416
#3  0xf6fa6514 in debugger () at /work/build/vpp/src/vppinfra/error.c:84
#4  0xf6fa6874 in _clib_error (how_to_die=2, 
function_name=0xf7173978 <__FUNCTION__.32141> 
"vlib_buffer_validate_alloc_free", line_number=333,
fmt=0xf7173438 "%s %U buffer 0x%x") at 
/work/build/vpp/src/vppinfra/error.c:143
#5  0xf70c1218 in vlib_buffer_validate_alloc_free (vm=0xb6d5c740, 
buffers=0xb4bac810, n_buffers=1, expected_state=VLIB_BUFFER_KNOWN_ALLOCATED)
at /work/build/vpp/src/vlib/buffer.c:332
#6  0xf716afc4 in vlib_buffer_pool_put (vm=0xb6d5c740, 
buffer_pool_index=0 '\000', buffers=0xb4bac810, n_buffers=1)
at /work/build/vpp/src/vlib/buffer_funcs.h:731
#7  0xf716b75c in vlib_buffer_free_inline (vm=0xb6d5c740, 
buffers=0xb88bd1d4, n_buffers=0, maybe_next=1) at 
/work/build/vpp/src/vlib/buffer_funcs.h:917
#8  0xf716b7c8 in vlib_buffer_free (vm=0xb6d5c740, 
buffers=0xb88bd1d0, n_buffers=1) at 
/work/build/vpp/src/vlib/buffer_funcs.h:936
#9  0xf716c424 in process_drop_punt (vm=0xb6d5c740, 
node=0xb7844300, frame=0xb88bd1c0, disposition=ERROR_DISPOSITION_DROP)
at /work/build/vpp/src/vlib/drop.c:235
#10 0xf716c4fc in error_drop_node_fn_cortexa72 (vm=0xb6d5c740, 
node=0xb7844300, frame=0xb88bd1c0) at 
/work/build/vpp/src/vlib/drop.c:251
#11 0xf70f512c in dispatch_node (vm=0xb6d5c740, 
node=0xb7844300, type=VLIB_NODE_TYPE_INTERNAL, 
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0xb88bd1c0,
last_time_stamp=233164692224) at /work/build/vpp/src/vlib/main.c:960
#12 0xf70f585c in dispatch_pending_node (vm=0xb6d5c740, 
pending_frame_index=4, last_time_stamp=233164692224) at 
/work/build/vpp/src/vlib/main.c:1119
#13 0xf70f6be8 in vlib_main_or_worker_loop (vm=0xb6d5c740, 
is_main=1) at /work/build/vpp/src/vlib/main.c:1588
#14 0xf70f71ec in vlib_main_loop (vm=0xb6d5c740) at 
/work/build/vpp/src/vlib/main.c:1716
#15 0xf70f7d1c in vlib_main (vm=0xb6d5c740, input=0xb4badfc8) 
at /work/build/vpp/src/vlib/main.c:2010
#16 0xf7145044 in thread0 (arg=281473749206848) at 
/work/build/vpp/src/vlib/unix/main.c:667
#17 0xf6fb84c0 in clib_calljmp () at 
/work/build/vpp/src/vppinfra/longjmp.S:809
Backtrace stopped: previous frame identical to this frame (corrupt stack?)


Re: [vpp-dev] VPP crashes on LX2160A platform

2022-12-10 Thread agv100
Hoi Pim,

VPP runs on this board, but getting there involves some tricky steps.

1. In the past, VPP had a build-data/platforms/dpaa.mk file, which holds the
DPDK build parameters for DPAA2-based systems. It was removed from the fdio/vpp
repository a few years ago.
For cross-compiling VPP for that platform, or for native compilation on the
platform, you should retrieve it and put it into your git clone of recent VPP.

2. Also, you need to find "LSDK", the version of the code repository ported to
the platform (it is also quite outdated, 19.x at the latest), and copy its
src/plugins/dpdk/buffer.c file into your recent clone. Otherwise VPP will
segfault on start if it sees interfaces.

Then you need to build VPP with PLATFORM=dpaa2.

After installation and before starting, you should have DPNI interfaces
connected to the platform DPMACs, either statically from device-tree files or
dynamically via restool/scripts. Note that they must not be bound to Linux
kernel interfaces. You may use the scripts from the platform SDK DPDK
distribution, or assign the resources manually with restool.
Then export the DPRC=dprc.X container environment variable and start VPP. It
will start, it will see the interfaces, and it may even forward packets, but
the lack of stability makes it unusable. The overall sequence is sketched
below.

Which system do you use on the board? I do not know why, but for some reason,
with the SolidRun binary from June 2022 and earlier, VPP sees incoming packets
and can forward them with amazing performance, but it is really unstable,
especially if you try to use the Linux-CP plugin: it segfaults or hangs every
few minutes.
On fresh binaries, as well as on Ubuntu Core currently built from the SolidRun
scripts, VPP cannot see any incoming packets.

I tried different compilers and different approaches to the build, with no luck
in getting it to work well. VPP will start, but I have not managed to get it to
work more or less stably yet.


[vpp-dev] VPP crashes on LX2160A platform

2022-12-01 Thread agv100
Dear VPP community,

I'm trying to operate VPP on a SolidRun LX2160A board, which is based on a
16-core A72 NXP SoC, unfortunately with little success. Does anybody have
experience with running VPP on such boards?

The performance in my tests is quite good (more than 4 Mpps NDR), but VPP is
very unstable and segfaults anywhere from seconds to hours after start.
The events causing the segfaults were not identified. It may happen (and
usually does) when you walk through the CLI. It may happen (less frequently)
when just forwarding packets without touching vppctl. Applying a config longer
than a few lines usually causes it; a second vppctl connection usually causes
it too.

I was trying the following versions of VPP, with literally the same results:

- VPP 21.01 from the LSDK distribution, built natively on the board
- VPP 22.10, from the master branch, cross-built using
https://docs.nxp.com/bundle/GUID-87AD3497-0BD4-4492-8040-3F3BE0F2B087/page/GUID-8A75A4AD-2EB9-4A5A-A784-465B98E67951.html
- VPP 22.08, built using the flexbuild tool (from the same link above).

I was trying different settings of the main_heap memory pool (size, page size)
and different hugepage settings (standard 4k, huge 2M, huge 1G), but there was
no serious improvement. 22.08 looks the most stable and may last for a few
hours.
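
(For reference, these are the startup.conf knobs I mean — the values here are
just examples:)

memory {
  main-heap-size 2G
  main-heap-page-size 1G
}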

As the performance looks promising, I'm really looking forward to making it
work stably. Can somebody please advise where I need to look to fix the
problem? According to CSIT, there are good results on other ARMv8 platforms.
As for the OS, I'm using a pre-built Ubuntu Core-based distribution from
SolidRun.

See below for OS information and logs with the crash. See the attachment for
the platform dmesg and a GDB trace of the 22.10 crash.
Below are the system logs of the VPP crashes.

abramov@nc2s5:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS"
abramov@nc2s5:~$ uname -a
Linux nc2s5 5.10.35-00018-gbb124648d42c #1 SMP PREEMPT Wed May 11 17:07:05 UTC 
2022 aarch64 aarch64 aarch64 GNU/Linux
abramov@nc2s5:~$

Dec 01 10:35:42 nc2s5 vnet[2259]: received signal SIGSEGV, PC unsupported, 
faulting address 0x2d3ba50a885
Dec 01 10:35:42 nc2s5 vnet[2259]: #0  0xa7df2e2c 0xa7df2e2c
Dec 01 10:35:42 nc2s5 vnet[2259]: #1  0xa95ad588 0xa95ad588
Dec 01 10:35:42 nc2s5 vnet[2259]: #2  0xa7da0090 
vlib_node_runtime_sync_stats + 0x0
Dec 01 10:35:42 nc2s5 vnet[2259]: #3  0xa7da191c vlib_node_sync_stats + 
0x4c
Dec 01 10:35:42 nc2s5 vnet[2259]: #4  0xa7dd973c 
vlib_worker_thread_barrier_release + 0x45c
Dec 01 10:35:42 nc2s5 vnet[2259]: #5  0xa7de6ef4 0xa7de6ef4
Dec 01 10:35:42 nc2s5 vnet[2259]: #6  0xa7de827c 0xa7de827c
Dec 01 10:35:42 nc2s5 vnet[2259]: #7  0xa7df00dc 0xa7df00dc
Dec 01 10:35:42 nc2s5 vnet[2259]: #8  0xa7da5e04 vlib_main + 0x8f4
Dec 01 10:35:42 nc2s5 vnet[2259]: #9  0xa7df1d8c 0xa7df1d8c
Dec 01 10:35:42 nc2s5 vnet[2259]: #10 0xa7c36f8c clib_calljmp + 0x24

Dec 01 10:26:56 nc2s5 vnet[2232]: received signal SIGSEGV, PC unsupported, 
faulting address 0x208
Dec 01 10:26:56 nc2s5 vnet[2232]: #0  0xa4bebe2c 0xa4bebe2c
Dec 01 10:26:56 nc2s5 vnet[2232]: #1  0xa63a6588 0xa63a6588
Dec 01 10:26:56 nc2s5 vnet[2232]: #2  0xa6340aa8 0xa6340aa8
Dec 01 10:26:56 nc2s5 vnet[2232]: #3  0xa4b9f150 vlib_main + 0xc40
Dec 01 10:26:56 nc2s5 vnet[2232]: #4  0xa4bead8c 0xa4bead8c
Dec 01 10:26:56 nc2s5 vnet[2232]: #5  0xa4a2ff8c clib_calljmp + 0x24
Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
0xf67500f4 in vlib_node_runtime_update (next_index=, 
node_index=518, vm=0x363bf700)
at /home/k.lakaev/vpp/vpp-github/src/vlib/node.c:122
122 /home/k.lakaev/vpp/vpp-github/src/vlib/node.c: No such file or 
directory.


(gdb) bt full
#0  0xf67500f4 in vlib_node_runtime_update (next_index=, 
node_index=518, vm=0x363bf700)
at /home/k.lakaev/vpp/vpp-github/src/vlib/node.c:122
nm = 0x363bf8c8
j = 
r = 
node = 
pf = 
s = 
next_node = 
nf = 
i = 1501
n_insert = 
nm = 
r = 
s = 
node = 
next_node = 
nf = 
pf = 
i = 
j = 
n_insert = 
__FUNCTION__ = 
#1  vlib_node_add_next_with_slot (vm=0x363bf700, 
node_index=node_index@entry=518, next_node_index=692, slot=2,
slot@entry=18446744073709551615) at 
/home/k.lakaev/vpp/vpp-github/src/vlib/node.c:217
nm = 0x363bf8c8
node = 0x4a131710
next = 0x58bf3e20
old_next = 
--Type  for more, q to quit, c to continue without paging--
old_next_index = 
p = 
__FUNCTION__ = "vlib_node_add_next_with_slot"
#2  0xf70fe618 in vlib_node_add_next (next_node=, 
node=518, vm=)
at /home/k.lakaev/vpp/vpp-github/src/vlib/node_funcs.h:1273
No