[vpp-dev] VPP on a Bluefield-2 smartNIC

2021-06-30 Thread Pierre Louis Aublin

Dear VPP developers

I would like to run VPP on the BlueField-2 SmartNIC, but even though I 
managed to compile it, the interface doesn't show up in the CLI. By 
any chance, would you know how to compile and configure vpp for this device?


I am using VPP v21.06-rc2 and made the following modifications so that it 
compiles (enabling the MLX5 PMDs and dropping the x86-only -msse4.2 flag, 
since the BlueField-2 is an aarch64 platform):

```
diff --git a/build/external/packages/dpdk.mk b/build/external/packages/dpdk.mk
index c7eb0fc3f..31a5c764e 100644
--- a/build/external/packages/dpdk.mk
+++ b/build/external/packages/dpdk.mk
@@ -15,8 +15,8 @@ DPDK_PKTMBUF_HEADROOM    ?= 128
 DPDK_USE_LIBBSD  ?= n
 DPDK_DEBUG   ?= n
 DPDK_MLX4_PMD    ?= n
-DPDK_MLX5_PMD    ?= n
-DPDK_MLX5_COMMON_PMD ?= n
+DPDK_MLX5_PMD    ?= y
+DPDK_MLX5_COMMON_PMD ?= y
 DPDK_TAP_PMD ?= n
 DPDK_FAILSAFE_PMD    ?= n
 DPDK_MACHINE ?= default
diff --git a/build/external/packages/ipsec-mb.mk b/build/external/packages/ipsec-mb.mk
index d0bd2af19..119eb5219 100644
--- a/build/external/packages/ipsec-mb.mk
+++ b/build/external/packages/ipsec-mb.mk
@@ -34,7 +34,7 @@ define  ipsec-mb_build_cmds
  SAFE_DATA=n \
  PREFIX=$(ipsec-mb_install_dir) \
  NASM=$(ipsec-mb_install_dir)/bin/nasm \
- EXTRA_CFLAGS="-g -msse4.2" > $(ipsec-mb_build_log)
+ EXTRA_CFLAGS="-g" > $(ipsec-mb_build_log)
 endef

 define  ipsec-mb_install_cmds
```


However, when running the VPP CLI, the network interface does not show up:
```
$ sudo -E make run
clib_sysfs_prealloc_hugepages:261: pre-allocating 6 additional 2048K 
hugepages on numa node 0
dpdk   [warn  ]: Unsupported PCI device 0x15b3:0xa2d6 found 
at PCI address :03:00.0


dpdk/cryptodev [warn  ]: dpdk_cryptodev_init: Failed to configure 
cryptodev
vat-plug/load  [error ]: vat_plugin_register: oddbuf plugin not 
loaded...

    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /____(_)_/\___/   |___/_/  /_/

DBGvpp# show int
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
local0                            0     down          0/0/0/0
DBGvpp# sh hard
  Name    Idx   Link  Hardware
local0 0    down  local0
  Link speed: unknown
  local
```


The dpdk-testpmd application seems to start correctly though:
```
$ sudo ./build-root/install-vpp_debug-native/external/bin/dpdk-testpmd 
-l 0-2 -a :03:00.00 -- -i --nb-cores=2 --nb-ports=1 
--total-num-mbufs=2048

EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 32768 kB hugepages reported
EAL: No available 64 kB hugepages reported
EAL: No available 1048576 kB hugepages reported
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: mlx5_pci (15b3:a2d6) device: :03:00.0 (socket 0)
mlx5_pci: Failed to allocate Tx DevX UAR (BF)
mlx5_pci: Failed to allocate Rx DevX UAR (BF)
mlx5_pci: Size 0x is not power of 2, will be aligned to 0x1.
Interactive-mode selected
testpmd: create a new mbuf pool : n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last 
port will pair with itself.


Configuring Port 0 (socket 0)
Port 0: 0C:42:A1:A4:89:B4
Checking link statuses...
Done
testpmd>
```

Is the problem related to the failure to allocate Tx and Rx DevX UAR? 
How can I fix this?
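For reference, handing the port to VPP's dpdk plugin would normally look something like the startup.conf stanza below. This is only a sketch: the 0000 PCI domain and the interface name are assumptions, and whether the plugin accepts the 15b3:a2d6 device ID at all is exactly what the "Unsupported PCI device" warning above calls into question.

```
dpdk {
  # explicitly list the ConnectX/BlueField port; address taken from the log
  # above, PCI domain assumed to be 0000
  dev 0000:03:00.0 {
    name bf2-p0        # hypothetical interface name
  }
  # the mlx5 PMD uses the kernel bifurcated driver, so no uio/vfio binding
  # is expected for this device
}
```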



I've also tried to set the Bluefield configuration parameters from dpdk 
(https://github.com/DPDK/dpdk/blob/e2a234488854fdeee267a2aa582aa082fce01d6e/config/defconfig_arm64-bluefield-linuxapp-gcc) 
as follows:

```
diff --git a/build-data/packages/vpp.mk b/build-data/packages/vpp.mk
index 7db450e05..91017dda0 100644
--- a/build-data/packages/vpp.mk
+++ b/build-data/packages/vpp.mk
@@ -32,7 +32,8 @@ vpp_cmake_args += -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON
 endif
 ifeq (,$(TARGET_PLATFORM))
 ifeq ($(MACHINE),aarch64)
-vpp_cmake_args += -DVPP_LOG2_CACHE_LINE_SIZE=7
+vpp_cmake_args += -DVPP_LOG2_CACHE_LINE_SIZE=6
 endif
 endif

diff --git a/build/external/packages/dpdk.mk b/build/external/packages/dpdk.mk
index 70ff5c90e..e2a64e67c 100644
--- a/build/external/packages/dpdk.mk
+++ b/build/external/packages/dpdk.mk
@@ -15,13 +15,20 @@ DPDK_PKTMBUF_HEADROOM    ?= 128
 DPDK_USE_LIBBSD  ?= n
 DPDK_DEBUG   ?= n
 DPDK_MLX4_PMD    ?= n
-DPDK_MLX5_PMD    ?= n
-DPDK_MLX5_COMMON_PMD ?= n
+DPDK_MLX5_PMD    ?= y
+DPDK_MLX5_COMMON_PMD ?= y
 DPDK_TAP_PMD ?= n
 DPDK_FAILSAFE_PMD    ?= n
 DPDK_MACHINE ?= default
 DPDK_MLX_IBV_LINK    ?= static

+# bluefield specific
```

Re: [vpp-dev] VPP release 21.06 is complete!

2021-06-30 Thread Dave Wallace
Congratulations to the FD.io community and all who contributed to yet 
another on-time VPP release!


Special thanks to Andrew for his work in automating and streamlining the 
release process.


Thanks,
-daw-

On 6/30/2021 3:18 PM, Andrew Yourtchenko wrote:

Hi all,

VPP release 21.06 is complete and is available from the usual
packagecloud.io/fdio/release location!

I have verified using the scripts [0] that the new release installs
and runs on the Centos8, Debian 10 (Buster) as well as Ubuntu 18.04
and 20.04.

Special shout-out goes to the pnat plugin, which has a hardcoded version of 0.0.1;
hopefully it will be rectified for the 21.10 release :-)

The release notes are visible at
https://docs.fd.io/vpp/21.06/d5/d36/release_notes_2106.html

A small remark: if you are installing on Debian or Ubuntu 20.04 in a container,
you might need to "export VPP_INSTALL_SKIP_SYSCTL=1" before
installation so that it skips calling the sysctl command, which would otherwise fail.

Please let me know if you experience any issues.

Thanks a lot to Dave Wallace, Florin Coras and Vanessa Valderrama for their help
during the release preparations and release process!

[0] https://github.com/ayourtch/vpp-relops/tree/master/docker-tests

--a /* your friendly 21.06 release manager */









Re: [**EXTERNAL**] RE: [vpp-dev] #vpp #vnet os_panic for failed barrier timeout

2021-06-30 Thread Bly, Mike via lists.fd.io
Agreed that upgrading to a newer code base is prudent, and yes, we are in 
the midst of doing so. However, I did not see any obvious changes in this area, 
so I am a bit pessimistic about the upgrade being the fix. Perhaps I missed a subtle 
improvement in this area that folks could point me at to ease my concerns?

Regarding the comments/questions in the second paragraph: yes, it is a single worker, 
and I too would like to know why the main thread did not just move on instead of 
throwing the os_panic().

-Mike

From: v...@barachs.net 
Sent: Thursday, June 24, 2021 5:46 AM
To: Bly, Mike ; vpp-dev@lists.fd.io
Subject: [**EXTERNAL**] RE: [vpp-dev] #vpp #vnet os_panic for failed barrier 
timeout

Given the reported MTBF of 9 months and nearly 2-year-old software, switching 
to 21.01 [and then to 21.06 when released] seems like the only sensible next 
step.

From the gdb info provided, it looks like there is one worker thread. Is that 
correct? If so, the "workers_at_barrier" count seems correct, so why wouldn't 
the main thread have moved on instead of spinning waiting for something which 
already happened?

D.


From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Bly, Mike via lists.fd.io
Sent: Wednesday, June 23, 2021 10:59 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] #vpp #vnet os_panic for failed barrier timeout

We are looking for advice on whether anyone is looking at this os_panic() for a 
barrier timeout. We see many instances of this type of main-thread back-trace in 
the forum. For this incident: referencing the sw_interface_dump API, we created 
a lighter oper-get call that simply fetches link state instead of all of the extensive 
information the dump command fetches for each interface. At the time we added 
our new oper-get function, we overlooked the "is_mp_safe" enablement that exists for dump 
and as such did NOT set it for our new oper-get. The end result is a fairly 
light API that requires barrier support. When this issue occurred, the 
configuration was using a single separate worker thread, so the API waits 
for a barrier count of 1. Interestingly, the BT analysis shows the count value 
was met, which implies some deeper issue. Why did a single worker, with a workload 
of at most tens of packets per second at the time, fail to stop at the barrier 
within the allotted one-second timeout? And, even more fun to answer: why did 
we even reach the os_panic call at all, when the BT shows the worker was stalled at 
the barrier? Please refer to the GDB analysis at the bottom of this email.

This code is based on 19.08. We are in the process of upgrading to 21.01, but 
from a review of the forum posts, this type of BT is seen across many versions. 
This is an extremely rare event. We had one occurrence in September of last 
year that we could not reproduce, and then just had a second occurrence this 
week. As such, we are not able to reproduce this on demand, let alone in stock 
VPP code, given this is a new API.

While we could simply enable is_mp_safe as done for sw_interface_dump to avoid 
the issue, we are troubled by not being able to explain why the os_panic 
occurred in the first place. As such, we are hoping someone might be able to 
provide guidance here on next steps. What additional details from the core-file 
can we provide?
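
For context, marking a message as mp-safe is a one-line change at API hookup time, so the handler runs without taking the worker barrier. A minimal sketch follows; VL_API_MY_OPER_GET is a hypothetical id standing in for the custom oper-get message (a plugin-defined message would also add its msg_id_base), so treat the names as assumptions rather than the actual change.

```
#include <vlib/vlib.h>
#include <vlibapi/api.h>

static clib_error_t *
my_api_hookup (vlib_main_t * vm)
{
  api_main_t *am = &api_main;

  /* this is how the core API marks sw_interface_dump mp-safe */
  am->is_mp_safe[VL_API_SW_INTERFACE_DUMP] = 1;

  /* hypothetical custom message: without this, its handler runs under
   * vlib_worker_thread_barrier_sync(), which is the path in the BT below */
  am->is_mp_safe[VL_API_MY_OPER_GET] = 1;

  return 0;
}
```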


Thread 1 backtrace

#0 __GI_raise (sig=sig@entry=6) at 
/usr/src/debug/glibc/2.30-r0/git/sysdeps/unix/sysv/linux/raise.c:50
#1 0x003cb8425548 in __GI_abort () at 
/usr/src/debug/glibc/2.30-r0/git/stdlib/abort.c:79
#2 0x004075da in os_exit () at 
/usr/src/debug/vpp/19.08+gitAUTOINC+6641eb3e8f-r0/git/src/vpp/vnet/main.c:379
#3 0x7ff1f5740794 in unix_signal_handler (signum=, 
si=, uc=)
at 
/usr/src/debug/vpp/19.08+gitAUTOINC+6641eb3e8f-r0/git/src/vlib/unix/main.c:183
#4 
#5 __GI_raise (sig=sig@entry=6) at 
/usr/src/debug/glibc/2.30-r0/git/sysdeps/unix/sysv/linux/raise.c:50
#6 0x003cb8425548 in __GI_abort () at 
/usr/src/debug/glibc/2.30-r0/git/stdlib/abort.c:79
#7 0x00407583 in os_panic () at 
/usr/src/debug/vpp/19.08+gitAUTOINC+6641eb3e8f-r0/git/src/vpp/vnet/main.c:355
#8 0x7ff1f5728643 in vlib_worker_thread_barrier_sync_int (vm=0x7ff1f575ba40 
, func_name=)
at /usr/src/debug/vpp/19.08+gitAUTOINC+6641eb3e8f-r0/git/src/vlib/threads.c:1476
#9 0x7ff1f62c6d56 in vl_msg_api_handler_with_vm_node 
(am=am@entry=0x7ff1f62d8d40 , the_msg=0x1300ba738,
vm=vm@entry=0x7ff1f575ba40 , node=node@entry=0x7ff1b588c000)
at 
/usr/src/debug/vpp/19.08+gitAUTOINC+6641eb3e8f-r0/git/src/vlibapi/api_shared.c:583
#10 0x7ff1f62b1237 in void_mem_api_handle_msg_i (am=, 
q=, node=0x7ff1b588c000,
vm=0x7ff1f575ba40 ) at 
/usr/src/debug/vpp/19.08+gitAUTOINC+6641eb3e8f-r0/git/src/vlibmemory/memory_api.c:712
#11 vl_mem_api_handle_msg_main (vm=vm@entry=0x7ff1f575ba40 , 
node=node@entry=0x7ff1b588c000)
at 
/usr/src/debug/vpp/19.08+gitAUTOINC+6641eb3e8f-r0/git/src/vlibmemory/memory_api.c:722
#12 0x7ff1f62be713 in vl_api_clnt_process (f=, 
node=, 

[vpp-dev] VPP release 21.06 is complete!

2021-06-30 Thread Andrew Yourtchenko
Hi all,

VPP release 21.06 is complete and is available from the usual
packagecloud.io/fdio/release location!

I have verified using the scripts [0] that the new release installs
and runs on the Centos8, Debian 10 (Buster) as well as Ubuntu 18.04
and 20.04.

Special shout-out goes to the pnat plugin, which has a hardcoded version of 0.0.1;
hopefully it will be rectified for the 21.10 release :-)

The release notes are visible at
https://docs.fd.io/vpp/21.06/d5/d36/release_notes_2106.html

A small remark: if you are installing on Debian or Ubuntu 20.04 in a container,
you might need to "export VPP_INSTALL_SKIP_SYSCTL=1" before
installation so that it skips calling the sysctl command, which would otherwise fail.

Please let me know if you experience any issues.

Thanks a lot to Dave Wallace, Florin Coras and Vanessa Valderrama for their help
during the release preparations and release process!

[0] https://github.com/ayourtch/vpp-relops/tree/master/docker-tests

--a /* your friendly 21.06 release manager */




Re: [vpp-dev] next-hop-table between two FIB tables results in punt and 'unknown ip protocol'

2021-06-30 Thread Mechthild Buescher via lists.fd.io
Hi Ben,

Thanks for your fast reply. Here is the requested output (I skipped the config for 
other interfaces and VLANs):

vppctl show int addr
NCIC-1-v1 (up):
NCIC-1-v1.1 (up):
  L3 10.10.203.1/29 ip4 table-id 1 fib-idx 4
host-Vpp2Host (up):
host-Vpp2Host.4093 (up):
  L3 198.19.255.249/29 ip4 table-id 4093 fib-idx 3
local0 (dn):

and:
vppctl sh ip fib 198.19.255.253
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[default-route:1, ]
0.0.0.0/0 fib:0 index:0 locks:2
  default-route refs:1 entry-flags:drop, src-flags:added,contributing,active,
path-list:[0] locks:2 flags:drop, uPRF-list:0 len:0 itfs:[]
  path:[0] pl-index:0 ip4 weight=1 pref=0 special:  cfg-flags:drop,
[@0]: dpo-drop ip4

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
[0] [@0]: dpo-drop ip4
ipv4-VRF:4093, fib_index:3, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[CLI:2, adjacency:1, recursive-resolution:1, ]
198.19.255.253/32 fib:3 index:189 locks:2
  adjacency refs:1 entry-flags:attached, src-flags:added,contributing,active, 
cover:53
path-list:[206] locks:2 uPRF-list:210 len:1 itfs:[14, ]
  path:[241] pl-index:206 ip4 weight=1 pref=0 attached-nexthop:  
oper-flags:resolved,
198.19.255.253 host-Vpp2Host.4093
  [@0]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4 
flags:[] e2de891fcccb02fe33d4dc0d81000ffd0800
Extensions:
 path:241 adj-flags:[refines-cover]
 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:190 buckets:1 uRPF:210 to:[2:192] 
via:[6:504]]
[0] [@5]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4 
flags:[] e2de891fcccb02fe33d4dc0d81000ffd0800
ipv4-VRF:1, fib_index:4, flow hash:[src dst sport dport proto flowlabel ] 
epoch:0 flags:none locks:[API:1, CLI:2, adjacency:1, recursive-resolution:1, ]
198.19.255.253/32 fib:4 index:190 locks:2
  CLI refs:1 entry-flags:attached, src-flags:added,contributing,active,
path-list:[207] locks:2 flags:shared, uPRF-list:211 len:1 itfs:[14, ]
  path:[243] pl-index:207 ip4 weight=1 pref=0 attached-nexthop:  
oper-flags:resolved, cfg-flags:attached,
198.19.255.253 host-Vpp2Host.4093
  [@0]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4 
flags:[] e2de891fcccb02fe33d4dc0d81000ffd0800

 forwarding:   unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:191 buckets:1 uRPF:211 to:[5:480]]
[0] [@5]: ipv4 via 198.19.255.253 host-Vpp2Host.4093: mtu:9000 next:4 
flags:[] e2de891fcccb02fe33d4dc0d81000ffd0800

BR/Mechthild

-Original Message-
From: Benoit Ganne (bganne)  
Sent: Wednesday, 30 June 2021 18:16
To: Mechthild Buescher ; vpp-dev@lists.fd.io
Subject: RE: next-hop-table between two FIB tables results in punt and 'unknown 
ip protocol'

From the trace output, it looks like VPP thinks 198.19.255.253 is one of its 
interface addresses, and hence tries to deliver it locally. As there is no 
configured listener for TCP packets, it defaults to punting, and as there is no 
punt rule it drops.

Can you share the output of 'show int addr' and 'sh ip fib 198.19.255.253'?

Best
ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Mechthild 
> Buescher via lists.fd.io
> Sent: mercredi 30 juin 2021 18:06
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] next-hop-table between two FIB tables results in 
> punt and 'unknown ip protocol'
> 
> Hi all,
> 
> 
> 
> we are using VPP with several FIB tables and when we use 'next-hop-table'
> the ip4-lookup results somehow in 'unknown ip protocol'. Can you 
> please help?
> 
> 
> 
> Our setup:
> 
> * 1 (out of 2) with VPP and a DPDK interface
> 
> 
> 
> The VPP version is (both nodes):
> 
> 
> vpp# show version verbose
> 
> Version:  v21.06-rc2~6-gf377e9545
> 
> Compiled by:  suse
> 
> Compile host: SUSE
> 
> Compile date: 2021-06-24T14:02:01
> 
> Compile location: /root/vpp-32298/vpp
> 
> Compiler: GCC 7.5.0
> 
> Current PID:  22527
> 
> 
> 
> The VPP config uses the DPDK interface (both nodes):
> 
> 
> vpp# show hardware-interfaces NCIC-1-v1
> 
>   NameIdx   Link  Hardware
> 
> NCIC-1-v1  3 up   NCIC-1-v1
> 
>   Link speed: 40 Gbps
> 
>   RX Queues:
> 
> queue thread mode
> 
> 0 vpp_wk_0 (1)   polling
> 
>   Ethernet address 72:a6:1e:ae:cd:f1
> 
>   Intel iAVF
> 
> carrier up full duplex mtu 9206
> 
> flags: admin-up pmd maybe-multiseg subif tx-offload 
> intel-phdr-cksum rx-ip4-cksum int-unmaskable
> 
> Devargs:
> 
> rx: queues 1 (max 256), desc 1024 (min 64 max 4096 align 32)
> 
> tx: queues 3 (max 256), desc 1024 (min 64 max 4096 align 32)
> 
> pci: device 8086:154c subsystem 1028: address :17:0e.01 
> numa 0
> 
> max rx packet len: 9728
> 
> promiscuous: 

Re: [vpp-dev] next-hop-table between two FIB tables results in punt and 'unknown ip protocol'

2021-06-30 Thread Neale Ranns
Hi Mechthild,

What Benoit said about punting. You might also find this useful:
https://github.com/FDio/vpp/blob/master/src/plugins/linux-cp/FEATURE.yaml

plus inline …

From: vpp-dev@lists.fd.io  on behalf of Mechthild Buescher 
via lists.fd.io 
Date: Wednesday, 30 June 2021 at 18:06
To: vpp-dev@lists.fd.io 
Subject: [vpp-dev] next-hop-table between two FIB tables results in punt and 
'unknown ip protocol'
Hi all,

we are using VPP with several FIB tables and when we use ‘next-hop-table’ the 
ip4-lookup results somehow in ‘unknown ip protocol’. Can you please help?

Our setup:

· 1 (out of 2) with VPP and a DPDK interface

The VPP version is (both nodes):

vpp# show version verbose
Version:  v21.06-rc2~6-gf377e9545
Compiled by:  suse
Compile host: SUSE
Compile date: 2021-06-24T14:02:01
Compile location: /root/vpp-32298/vpp
Compiler: GCC 7.5.0
Current PID:  22527

The VPP config uses the DPDK interface (both nodes):

vpp# show hardware-interfaces NCIC-1-v1
  NameIdx   Link  Hardware
NCIC-1-v1  3 up   NCIC-1-v1
  Link speed: 40 Gbps
  RX Queues:
queue thread mode
0 vpp_wk_0 (1)   polling
  Ethernet address 72:a6:1e:ae:cd:f1
  Intel iAVF
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg subif tx-offload intel-phdr-cksum 
rx-ip4-cksum int-unmaskable
Devargs:
rx: queues 1 (max 256), desc 1024 (min 64 max 4096 align 32)
tx: queues 3 (max 256), desc 1024 (min 64 max 4096 align 32)
pci: device 8086:154c subsystem 1028: address :17:0e.01 numa 0
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter jumbo-frame scatter rss-hash
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
   mbuf-fast-free
tx offload active: udp-cksum tcp-cksum multi-segs
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other ipv4
   ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other ipv6
rss active:none
tx burst function: iavf_xmit_pkts
rx burst function: iavf_recv_scattered_pkts_vec_avx2

The VPP config is (there is a veth-pair configured on the host):

create host-interface name Vpp2Host
set interface state host-Vpp2Host up

ip table add 4093
create sub-interfaces host-Vpp2Host 4093
set interface state host-Vpp2Host.4093 up
set interface ip table host-Vpp2Host.4093 4093
set interface ip address host-Vpp2Host.4093 198.19.255.249/29

198.19.255.249 is configured on this interface

set interface state NCIC-1-v1 up
ip table add 1
create sub-interfaces NCIC-1-v1 1
set interface state NCIC-1-v1.1 up
set interface ip table NCIC-1-v1.1 1
set interface ip address NCIC-1-v1.1 10.10.203.3/29
ip route add 198.19.255.248/29 table 1 via 198.19.255.249 next-hop-table 4093

but it’s used as a next-hop here. It works, as the trace demonstrates, but it’s 
not the most efficient way to do things. next-hops are meant to specify the 
remote host to send to/via. Are you trying to receive packets sent to 
host-Vpp2Host.4093’s subnet if they are received on NCIC-1v1.1? If so maybe you 
need an extranet; you leak the routes of one VRF into the other. The 
interface’s connected prefixes require special attention, see: 
https://github.com/FDio/vpp/blob/master/docs/gettingstarted/developers/fib20/attachedexport.rst

tl;dr. do this:
  ip route add 198.19.255.248/29 table 1 via host-Vpp2Host.4093

/neale


When a packet is received (curl -k -vv https://198.19.255.253:8443/... ) we see 
the following trace on dpdk-input:
  NCIC-1-v1 rx queue 0
  buffer 0x14296: current data 0, length 78, buffer-pool 0, ref-count 1, 
totlen-nifb 0, trace handle 0x116
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct
  PKT MBUF: port 2, nb_segs 1, pkt_len 78
buf_len 2176, data_len 78, ol_flags 0x180, data_off 128, phys_addr 0x50a600
packet_type 0x191 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without 
extension headers
  RTE_PTYPE_L4_TCP (0x0100) TCP packet
  IP4: 86:ca:f3:a1:20:fc -> 1e:a0:ab:00:2a:ea 802.1q vlan 1
  TCP: 172.17.41.8 -> 198.19.255.253
tos 0x00, ttl 64, length 60, checksum 0x7bc3 dscp 

Re: [vpp-dev] next-hop-table between two FIB tables results in punt and 'unknown ip protocol'

2021-06-30 Thread Benoit Ganne (bganne) via lists.fd.io
From the trace output, it looks like VPP thinks 198.19.255.253 is one of its 
interface addresses, and hence tries to deliver it locally. As there is no 
configured listener for TCP packets, it defaults to punting, and as there is no 
punt rule it drops.

Can you share the output of 'show int addr' and 'sh ip fib 198.19.255.253'?

Best
ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Mechthild
> Buescher via lists.fd.io
> Sent: mercredi 30 juin 2021 18:06
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] next-hop-table between two FIB tables results in punt
> and 'unknown ip protocol'
> 
> Hi all,
> 
> 
> 
> we are using VPP with several FIB tables and when we use 'next-hop-table'
> the ip4-lookup results somehow in 'unknown ip protocol'. Can you please
> help?
> 
> 
> 
> Our setup:
> 
> * 1 (out of 2) with VPP and a DPDK interface
> 
> 
> 
> The VPP version is (both nodes):
> 
> 
> vpp# show version verbose
> 
> Version:  v21.06-rc2~6-gf377e9545
> 
> Compiled by:  suse
> 
> Compile host: SUSE
> 
> Compile date: 2021-06-24T14:02:01
> 
> Compile location: /root/vpp-32298/vpp
> 
> Compiler: GCC 7.5.0
> 
> Current PID:  22527
> 
> 
> 
> The VPP config uses the DPDK interface (both nodes):
> 
> 
> vpp# show hardware-interfaces NCIC-1-v1
> 
>   NameIdx   Link  Hardware
> 
> NCIC-1-v1  3 up   NCIC-1-v1
> 
>   Link speed: 40 Gbps
> 
>   RX Queues:
> 
> queue thread mode
> 
> 0 vpp_wk_0 (1)   polling
> 
>   Ethernet address 72:a6:1e:ae:cd:f1
> 
>   Intel iAVF
> 
> carrier up full duplex mtu 9206
> 
> flags: admin-up pmd maybe-multiseg subif tx-offload intel-phdr-cksum
> rx-ip4-cksum int-unmaskable
> 
> Devargs:
> 
> rx: queues 1 (max 256), desc 1024 (min 64 max 4096 align 32)
> 
> tx: queues 3 (max 256), desc 1024 (min 64 max 4096 align 32)
> 
> pci: device 8086:154c subsystem 1028: address :17:0e.01 numa 0
> 
> max rx packet len: 9728
> 
> promiscuous: unicast off all-multicast on
> 
> vlan offload: strip off filter off qinq off
> 
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-
> strip
> 
>outer-ipv4-cksum vlan-filter jumbo-frame scatter
> rss-hash
> 
> rx offload active: ipv4-cksum jumbo-frame scatter
> 
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-
> cksum
> 
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
> 
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
> 
>mbuf-fast-free
> 
> tx offload active: udp-cksum tcp-cksum multi-segs
> 
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
> ipv4
> 
>ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other
> ipv6
> 
> rss active:none
> 
> tx burst function: iavf_xmit_pkts
> 
> rx burst function: iavf_recv_scattered_pkts_vec_avx2
> 
> 
> 
> The VPP config is (there is a veth-pair configured on the host):
> 
> 
> 
> create host-interface name Vpp2Host
> 
> set interface state host-Vpp2Host up
> 
> 
> 
> ip table add 4093
> 
> create sub-interfaces host-Vpp2Host 4093
> 
> set interface state host-Vpp2Host.4093 up
> 
> set interface ip table host-Vpp2Host.4093 4093
> 
> set interface ip address host-Vpp2Host.4093 198.19.255.249/29
> 
> 
> 
> set interface state NCIC-1-v1 up
> 
> ip table add 1
> 
> create sub-interfaces NCIC-1-v1 1
> 
> set interface state NCIC-1-v1.1 up
> 
> set interface ip table NCIC-1-v1.1 1
> 
> set interface ip address NCIC-1-v1.1 10.10.203.3/29
> 
> ip route add 198.19.255.248/29 table 1 via 198.19.255.249 next-hop-table
> 4093
> 
> 
> 
> When a packet is received (curl -k -vv https://198.19.255.253:8443/... )
> we see the following trace on dpdk-input:
> 
>   NCIC-1-v1 rx queue 0
> 
>   buffer 0x14296: current data 0, length 78, buffer-pool 0, ref-count 1,
> totlen-nifb 0, trace handle 0x116
> 
>   ext-hdr-valid
> 
>   l4-cksum-computed l4-cksum-correct
> 
>   PKT MBUF: port 2, nb_segs 1, pkt_len 78
> 
> buf_len 2176, data_len 78, ol_flags 0x180, data_off 128, phys_addr
> 0x50a600
> 
> packet_type 0x191 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> 
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> 
> Packet Offload Flags
> 
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
> 
>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
> 
> Packet Types
> 
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
> 
>   RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without
> extension headers
> 
>   RTE_PTYPE_L4_TCP (0x0100) TCP packet
> 
>   IP4: 86:ca:f3:a1:20:fc -> 1e:a0:ab:00:2a:ea 802.1q vlan 1
> 
>   TCP: 172.17.41.8 -> 198.19.255.253
> 
> tos 0x00, ttl 64, length 60, checksum 0x7bc3 dscp CS0 ecn 

[vpp-dev] next-hop-table between two FIB tables results in punt and 'unknown ip protocol'

2021-06-30 Thread Mechthild Buescher via lists.fd.io
Hi all,

we are using VPP with several FIB tables and when we use 'next-hop-table' the 
ip4-lookup results somehow in 'unknown ip protocol'. Can you please help?

Our setup:

  *   1 (out of 2) with VPP and a DPDK interface

The VPP version is (both nodes):

vpp# show version verbose
Version:  v21.06-rc2~6-gf377e9545
Compiled by:  suse
Compile host: SUSE
Compile date: 2021-06-24T14:02:01
Compile location: /root/vpp-32298/vpp
Compiler: GCC 7.5.0
Current PID:  22527

The VPP config uses the DPDK interface (both nodes):

vpp# show hardware-interfaces NCIC-1-v1
  NameIdx   Link  Hardware
NCIC-1-v1  3 up   NCIC-1-v1
  Link speed: 40 Gbps
  RX Queues:
queue thread mode
0 vpp_wk_0 (1)   polling
  Ethernet address 72:a6:1e:ae:cd:f1
  Intel iAVF
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg subif tx-offload intel-phdr-cksum 
rx-ip4-cksum int-unmaskable
Devargs:
rx: queues 1 (max 256), desc 1024 (min 64 max 4096 align 32)
tx: queues 3 (max 256), desc 1024 (min 64 max 4096 align 32)
pci: device 8086:154c subsystem 1028: address :17:0e.01 numa 0
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter jumbo-frame scatter rss-hash
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
   mbuf-fast-free
tx offload active: udp-cksum tcp-cksum multi-segs
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other ipv4
   ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other ipv6
rss active:none
tx burst function: iavf_xmit_pkts
rx burst function: iavf_recv_scattered_pkts_vec_avx2

The VPP config is (there is a veth-pair configured on the host):

create host-interface name Vpp2Host
set interface state host-Vpp2Host up

ip table add 4093
create sub-interfaces host-Vpp2Host 4093
set interface state host-Vpp2Host.4093 up
set interface ip table host-Vpp2Host.4093 4093
set interface ip address host-Vpp2Host.4093 198.19.255.249/29

set interface state NCIC-1-v1 up
ip table add 1
create sub-interfaces NCIC-1-v1 1
set interface state NCIC-1-v1.1 up
set interface ip table NCIC-1-v1.1 1
set interface ip address NCIC-1-v1.1 10.10.203.3/29
ip route add 198.19.255.248/29 table 1 via 198.19.255.249 next-hop-table 4093

When a packet is received (curl -k -vv https://198.19.255.253:8443/... ) we see 
the following trace on dpdk-input:
  NCIC-1-v1 rx queue 0
  buffer 0x14296: current data 0, length 78, buffer-pool 0, ref-count 1, 
totlen-nifb 0, trace handle 0x116
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct
  PKT MBUF: port 2, nb_segs 1, pkt_len 78
buf_len 2176, data_len 78, ol_flags 0x180, data_off 128, phys_addr 0x50a600
packet_type 0x191 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without 
extension headers
  RTE_PTYPE_L4_TCP (0x0100) TCP packet
  IP4: 86:ca:f3:a1:20:fc -> 1e:a0:ab:00:2a:ea 802.1q vlan 1
  TCP: 172.17.41.8 -> 198.19.255.253
tos 0x00, ttl 64, length 60, checksum 0x7bc3 dscp CS0 ecn NON_ECN
fragment id 0x23ce, flags DONT_FRAGMENT
  TCP: 41834 -> 8443
seq. 0xbd8844b4 ack 0x
flags 0x02 SYN, tcp header: 40 bytes
window 29200, checksum 0xa704
18:41:55:850222: ethernet-input
  frame: flags 0x3, hw-if-index 3, sw-if-index 3
  IP4: 86:ca:f3:a1:20:fc -> 1e:a0:ab:00:2a:ea 802.1q vlan 1
18:41:55:850224: ip4-input
  TCP: 172.17.41.8 -> 198.19.255.253
tos 0x00, ttl 64, length 60, checksum 0x7bc3 dscp CS0 ecn NON_ECN
fragment id 0x23ce, flags DONT_FRAGMENT
  TCP: 41834 -> 8443
seq. 0xbd8844b4 ack 0x
flags 0x02 SYN, tcp header: 40 bytes
window 29200, checksum 0xa704
18:41:55:850225: ip4-lookup
  fib 4 dpo-idx 57 flow hash: 0x
  TCP: 172.17.41.8 -> 198.19.255.253
tos 0x00, ttl 64, length 60, checksum 0x7bc3 dscp CS0 ecn NON_ECN
fragment id 0x23ce, flags DONT_FRAGMENT
  TCP: 41834 -> 8443
seq. 0xbd8844b4 ack 0x
flags 0x02 SYN, tcp header: 40 bytes
window 29200, checksum 0xa704
18:41:55:850226: ip4-load-balance
  fib 4 dpo-idx 18 flow hash: 0x
  TCP: 172.17.41.8 -> 

Re: [vpp-dev] VRRP issue when using interface in a table

2021-06-30 Thread Mechthild Buescher via lists.fd.io
Hi Neale,

Thanks for your reply. The bugfix partly solved the issue - VRRP goes into 
master/backup and stays stable for a while. Unfortunately, it changes back to 
master/master after some time (15 minutes to 1 hour). We are currently trying to 
get more details and will come back to you.

But thanks for your support so far,

BR/Mechthild

From: Neale Ranns 
Sent: Thursday, 24 June 2021 12:33
To: Mechthild Buescher ; vpp-dev@lists.fd.io
Subject: Re: VRRP issue when using interface in a table

Hi Mechthild,

You'll need to include:
  
https://gerrit.fd.io/r/c/vpp/+/32298

/neale

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of Mechthild Buescher via lists.fd.io <mechthild.buescher=ericsson@lists.fd.io>
Date: Thursday, 24 June 2021 at 10:49
To: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io>
Subject: [vpp-dev] VRRP issue when using interface in a table
Hi all,

we are using VPP on two nodes where we would like to run VRRP. This works fine 
if the VRRP VR interface is in fib 0, but if we put the interface into FIB table 
1 instead, VRRP no longer works correctly. Can you please help?

Our setup:

* 2 nodes with VPP on each node and one DPDK interface (we reduced the 
config to isolate the issue) connected to each VPP

* a switch between the nodes which just forwards the traffic, so that 
it's like a peer-2-peer connection

The VPP version is (both nodes):

vpp# show version
vpp v21.01.0-6~gf70123b2c built by suse on SUSE at 2021-05-06T12:18:31
vpp# show version verbose
Version:  v21.01.0-6~gf70123b2c
Compiled by:  suse
Compile host: SUSE
Compile date: 2021-05-06T12:18:31
Compile location: /root/vpp-sp/vpp
Compiler: GCC 7.5.0
Current PID:  6677

The VPP config uses the DPDK interface (both nodes):

vpp# show hardware-interfaces
  NameIdx   Link  Hardware
Ext-0  1 up   Ext-0
  Link speed: 10 Gbps
  Ethernet address e4:43:4b:ed:59:10
  Intel X710/XL710 Family
carrier up full duplex mtu 9206
flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum
Devargs:
rx: queues 1 (max 192), desc 1024 (min 64 max 4096 align 32)
tx: queues 3 (max 192), desc 1024 (min 64 max 4096 align 32)
pci: device 8086:1572 subsystem 1028:1f9c address :17:00.00 numa 0
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
   scatter keep-crc rss-hash
rx offload active: ipv4-cksum jumbo-frame scatter
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
   mbuf-fast-free
tx offload active: udp-cksum tcp-cksum multi-segs
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other 
ipv6-frag
   ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
rss active:none
tx burst mode: Scalar
rx burst mode: Vector AVX2 Scattered

The VRRP configs are (MASTER):

set interface state Ext-0 up
set interface ip address Ext-0 192.168.61.52/25
vrrp vr add Ext-0 vr_id 61 priority 200 no_preempt accept_mode 192.168.61.50

and on the system under test (SUT):

ip table add 1
set interface ip table Ext-0 1
set interface state Ext-0 up
set interface ip address Ext-0 192.168.61.51/25
vrrp vr add Ext-0 vr_id 61 priority 100 no_preempt accept_mode 192.168.61.50

On the MASTER, we started VRRP with:
vrrp proto start Ext-0 vr_id 61

so that it has:
vpp# show vrrp vr
[0] sw_if_index 1 VR ID 61 IPv4
   state Master flags: preempt no accept yes unicast no
   priority: configured 200 adjusted 200
   timers: adv interval 100 master adv 100 skew 21 master down 321
   virtual MAC 00:00:5e:00:01:3d
   addresses 192.168.61.50
   peer addresses
   tracked interfaces

On the SUT, we did not yet start VRRP, so we see:
vpp# show vrrp vr
[0] sw_if_index 1 VR ID 61 IPv4
   state Initialize flags: preempt no accept yes unicast no
   priority: configured 100 adjusted 100
   timers: adv interval 100 master adv 0 skew 0 master down 0
   virtual MAC 00:00:5e:00:01:3d
   addresses 192.168.61.50
   peer addresses
   tracked interfaces

Here I can already see that something is going wrong, as the VRRP packets are not 
reaching vrrp4-input:
vpp# show errors
   Count  Node  Reason   

Re: [vpp-dev] why vlib_buffer_push_ip4 not support fragmentation and set df flag for tcp?

2021-06-30 Thread Florin Coras
Fragmentation is expensive. Therefore, because tcp originates the packets 
locally, we do not want it to exceed the interface’s mtu. If you want to force 
larger bursts from tcp, try enabling tso if the egress interface supports it. 

Any particular reason why you’d like tcp to use such a large mss? 

Regards,
Florin

> On Jun 30, 2021, at 2:23 AM, jiangxiaom...@outlook.com wrote:
> 
> Hi guys,
> In my project, I set the tcp mtu to 9000 in the vpp startup file (tcp { mtu 9000 }) 
> and set the interface mtu to 1500. When sending a tcp packet, vlib_buffer_push_ip4 
> adds the 'don't frag' flag to the packet; if the tcp data is more than 1500 bytes, 
> I get an "ip4 MTU exceeded and DF set" error.
> 
> always_inline void *
> vlib_buffer_push_ip4 (vlib_main_t * vm, vlib_buffer_t * b,
>ip4_address_t * src, ip4_address_t * dst, int proto,
>u8 csum_offload)
> {
>   return vlib_buffer_push_ip4_custom (vm, b, src, dst, proto, csum_offload,
>1 /* is_df */ ); <- why add df flag ?
> }
> 
> If the is_df flag is set to 0 in vlib_buffer_push_ip4, sending tcp data of more than 
> 1500 bytes succeeds.
> 
> Does anyone know why vlib_buffer_push_ip4 does not support fragmentation and always 
> sets the df flag for tcp?
> 
> 
> 





Re: [vpp-dev] heap sizes

2021-06-30 Thread Matthew Smith via lists.fd.io
On Wed, Jun 30, 2021 at 3:01 AM Benoit Ganne (bganne) 
wrote:

> > What I'm trying to figure out is this: do I need to try and determine a
> > formula for the sizes that should be used for main heap and stat segment
> > based on X number of routes and Y number of worker threads? Or is there a
> > downside to just setting the main heap size to 32G (which seems like a
> > number that is unlikely to ever be exhausted sans memory leaks)?
>
> I do not think it would be a good idea:
>  - it depends upon overcommit configuration:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/vm/overcommit-accounting.rst
>  - under the default overcommit setting ("heuristic") this would prevent
> small configs from running VPP by default: think developer VMs or smaller cloud
> instances (e.g. AWS C5n.large are 4GB) which are pretty common
>
> Maybe having an (optional) dynamically growing heap could be a better
> option?
>
> ben
>

Hi Ben,

Ah, thanks for the pointer!

Yes, allowing dynamic heap growth sounds like it could be better.
Alternatively... if memory allocations could fail and something more
graceful than VPP exiting could occur, that may also be better. E.g. if I'm
adding a route and try to allocate a counter for it and that fails, it
would be better to refuse to add the route than to exit and take the
network down.

I realize that neither of those options is easy to do btw. I'm just trying
to figure out how to make it easier and more forgiving for users to set up
their configuration without making them learn about various memory
parameters.

Thanks,
-Matt
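
For concreteness, the knobs under discussion live in startup.conf; a minimal sketch is below. The values are placeholders and the defaults differ between releases, so treat the exact numbers as assumptions.

```
memory {
  # main heap used for FIB entries, counters, etc.
  main-heap-size 2G
}
statseg {
  # stat segment is sized independently of the main heap
  size 256M
}
```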




[vpp-dev] Issue in VPP v21.06 compilation

2021-06-30 Thread Chinmaya Aggarwal
Hi,

We are trying to compile VPP v21.06 from the stable/2106 branch with MLX5 support. 
We enabled MLX5 support in vpp by making the changes below:

vi build/external/packages/dpdk.mk

DPDK_MLX5_PMD                ?= y
DPDK_MLX5_COMMON_PMD         ?= y

We then executed

# make install-dep

This executes successfully. But on executing "make install-ext-deps", we see 
the error below:

[root@localhost vpp]# make install-ext-deps
make -C build/external install-rpm
make[1]: Entering directory '/opt/vpp/build/external'
make[2]: Entering directory '/opt/vpp/build/external'
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.TMHM5j
+ umask 022
+ cd /opt/vpp/build/external/rpm/BUILD
+ '[' /opt/vpp/build/external/rpm/BUILDROOT/vpp-ext-deps-21.06-10.x86_64 '!=' / 
']'
+ rm -rf /opt/vpp/build/external/rpm/BUILDROOT/vpp-ext-deps-21.06-10.x86_64
++ dirname /opt/vpp/build/external/rpm/BUILDROOT/vpp-ext-deps-21.06-10.x86_64
+ mkdir -p /opt/vpp/build/external/rpm/BUILDROOT
+ mkdir /opt/vpp/build/external/rpm/BUILDROOT/vpp-ext-deps-21.06-10.x86_64
+ make -C ../.. BUILD_DIR=/opt/vpp/build/external/rpm/tmp 
INSTALL_DIR=/opt/vpp/build/external/rpm/BUILDROOT/vpp-ext-deps-21.06-10.x86_64/opt/vpp/external/x86_64
 config
make[3]: Entering directory '/opt/vpp/build/external'
mkdir -p downloads
Downloading 
https://ftp.osuosl.org/pub/blfs/conglomeration/nasm/nasm-2.14.02.tar.xz
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed
100  808k  100  808k    0     0  91258      0  0:00:09  0:00:09 --:--:--  127k
--- validating nasm 2.14.02 checksum
--- extracting nasm 2.14.02
--- patching nasm 2.14.02
--- configuring nasm 2.14.02 - log: 
/opt/vpp/build/external/rpm/tmp/nasm.config.log
mkdir -p downloads
Downloading http://github.com/01org/intel-ipsec-mb/archive/v1.0.tar.gz
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed
0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   125  100   125    0     0    141      0 --:--:-- --:--:-- --:--:--   141
100   126  100   126    0     0    100      0  0:00:01  0:00:01 --:--:--  123k
100 1094k    0 1094k    0     0   416k      0 --:--:--  0:00:02 --:--:-- 2070k
--- validating ipsec-mb 1.0 checksum
--- extracting ipsec-mb 1.0
--- patching ipsec-mb 1.0
--- building nasm 2.14.02 - log: /opt/vpp/build/external/rpm/tmp/nasm.build.log
Can't open perl script "tools/mkdep.pl": No such file or directory
--- installing nasm 2.14.02 - log: 
/opt/vpp/build/external/rpm/tmp/nasm.install.log
Can't open perl script "tools/mkdep.pl": No such file or directory
--- configuring ipsec-mb 1.0 - log: 
/opt/vpp/build/external/rpm/tmp/ipsec-mb.config.log
mkdir -p downloads
Downloading http://fast.dpdk.org/rel/dpdk-21.02.tar.xz
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed
100  1720  100  1720    0     0   2118      0 --:--:-- --:--:-- --:--:--  2118
--- validating dpdk 21.02 checksum
==
Bad Checksum!
Expected:   2c3e4800b04495ad7fa8656a7e1a3ec1
Calculated: 18aa4b211fa5578b201fdc6197bb8ae8
Please remove downloads/dpdk-21.02.tar.xz and retry
==
make[3]: *** [packages/dpdk.mk:203: 
/opt/vpp/build/external/rpm/tmp/.dpdk.download.ok] Error 1
make[3]: Leaving directory '/opt/vpp/build/external'
error: Bad exit status from /var/tmp/rpm-tmp.TMHM5j (%install)

RPM build errors:
Bad exit status from /var/tmp/rpm-tmp.TMHM5j (%install)
make[2]: *** [Makefile:114: vpp-ext-deps-21.06-10.x86_64.rpm] Error 1
make[2]: Leaving directory '/opt/vpp/build/external'
make[1]: *** [Makefile:126: install-rpm] Error 2
make[1]: Leaving directory '/opt/vpp/build/external'
make: *** [Makefile:596: install-ext-deps] Error 2
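
For what it's worth, a checksum mismatch like the one above can usually be confirmed and cleared along these lines (paths and the expected md5 are taken from the log; the exact sequence is an assumption):

```
md5sum build/external/downloads/dpdk-21.02.tar.xz   # expect 2c3e4800b04495ad7fa8656a7e1a3ec1
rm build/external/downloads/dpdk-21.02.tar.xz
make install-ext-deps                               # re-downloads and re-verifies the tarball
```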

We found that the dpdk-21.02.tar.xz file in /opt/vpp/build/external/downloads/ is 
somehow corrupted. We then downloaded this file manually and replaced it at 
this location. On executing "make install-ext-deps", we now see a different 
error:

[root@localhost vpp]# make install-ext-deps
make -C build/external install-rpm
make[1]: Entering directory '/opt/vpp/build/external'
make[2]: Entering directory '/opt/vpp/build/external'
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.uu9cu1
+ umask 022
+ cd /opt/vpp/build/external/rpm/BUILD
+ '[' /opt/vpp/build/external/rpm/BUILDROOT/vpp-ext-deps-21.06-10.x86_64 '!=' / 
']'
+ rm -rf /opt/vpp/build/external/rpm/BUILDROOT/vpp-ext-deps-21.06-10.x86_64
++ dirname /opt/vpp/build/external/rpm/BUILDROOT/vpp-ext-deps-21.06-10.x86_64
+ mkdir -p /opt/vpp/build/external/rpm/BUILDROOT
+ mkdir /opt/vpp/build/external/rpm/BUILDROOT/vpp-ext-deps-21.06-10.x86_64
+ make -C ../.. BUILD_DIR=/opt/vpp/build/external/rpm/tmp 
INSTALL_DIR=/opt/vpp/build/external/rpm/BUILDROOT/vpp-ext-deps-21.06-10.x86_64/opt/vpp/external/x86_64
 config
make[3]: Entering directory 

Re: [vpp-dev] VPP 2005 crash : How to get DPDK lib source code/symbol info in back-trace

2021-06-30 Thread Mrityunjay Kumar
You can build vpp in debug mode, and also add full-coredump to the unix
section in startup.conf.
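
A minimal sketch of both pieces of that advice, assuming `make build` produces the debug image and that the startup.conf keywords are as shown:

```
# build the debug variant (release images are built with "make build-release")
make build

# startup.conf
unix {
  full-coredump
  coredump-size unlimited
}
```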

MJ

On Wed, 30 Jun, 2021, 4:02 pm chetan bhasin, 
wrote:

> Hi,
>
> I am using VPP 2005 with DPDK enabled. If the application crashes inside a
> dpdk library API, we don't get proper symbol information in the
> rte_* API frames using gdb.
>
> Will dynamically linking dpdk lib resolve this ?
> Any other suggestions?
>
> Thanks,
> Chetan




[vpp-dev] VPP 2005 crash : How to get DPDK lib source code/symbol info in back-trace

2021-06-30 Thread chetan bhasin
Hi,

I am using VPP 2005 with DPDK enabled. If the application crashes inside a
dpdk library API, we don't get proper symbol information in the
rte_* API frames using gdb.

Will dynamically linking the dpdk lib resolve this?
Any other suggestions?

Thanks,
Chetan




[vpp-dev] why vlib_buffer_push_ip4 not support fragmentation and set df flag for tcp?

2021-06-30 Thread jiangxiaoming
Hi guys,
In my project, I set the tcp mtu to 9000 in the vpp startup file (tcp { mtu 9000 }) and 
set the interface mtu to 1500. When sending tcp packets, vlib_buffer_push_ip4 adds the 
'don't frag' flag to the packet, so if the tcp data is more than 1500 bytes I get an 
"ip4 MTU exceeded and DF set" error.

always_inline void *
vlib_buffer_push_ip4 (vlib_main_t * vm, vlib_buffer_t * b,
                      ip4_address_t * src, ip4_address_t * dst, int proto,
                      u8 csum_offload)
{
  return vlib_buffer_push_ip4_custom (vm, b, src, dst, proto, csum_offload,
                                      1 /* is_df */ ); /* <- why add the df flag? */
}

If the is_df flag is set to 0 in vlib_buffer_push_ip4, sending tcp data of more than 
1500 bytes succeeds.

Does anyone know why vlib_buffer_push_ip4 does not support fragmentation and always 
sets the df flag for tcp?
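
A sketch of the workaround being described, clearing is_df via the _custom variant quoted above (my_push_ip4_no_df is a hypothetical helper; as the reply above cautions, this trades the DF drop for fragmentation cost downstream):

```
#include <vnet/ip/ip.h>

always_inline void *
my_push_ip4_no_df (vlib_main_t * vm, vlib_buffer_t * b,
                   ip4_address_t * src, ip4_address_t * dst, int proto,
                   u8 csum_offload)
{
  /* same as vlib_buffer_push_ip4, but without the don't-fragment bit,
   * so oversized segments can be fragmented instead of dropped */
  return vlib_buffer_push_ip4_custom (vm, b, src, dst, proto, csum_offload,
                                      0 /* is_df */ );
}
```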




Re: [vpp-dev] heap sizes

2021-06-30 Thread Benoit Ganne (bganne) via lists.fd.io
> What I'm trying to figure out is this: do I need to try and determine a
> formula for the sizes that should be used for main heap and stat segment
> based on X number of routes and Y number of worker threads? Or is there a
> downside to just setting the main heap size to 32G (which seems like a
> number that is unlikely to ever be exhausted sans memory leaks)?

I do not think it would be a good idea:
 - it depends upon overcommit configuration: 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/vm/overcommit-accounting.rst
 - under the default overcommit setting ("heuristic") this would prevent small 
configs from running VPP by default: think developer VMs or smaller cloud instances 
(e.g. AWS C5n.large are 4GB) which are pretty common

Maybe having an (optional) dynamically growing heap could be a better option?

ben
