Dear VPP developers,
I would like to run VPP on the BlueField-2 SmartNIC, but even though I
managed to compile it, the interface doesn't show up inside the CLI. By
any chance, would you know how to compile and configure VPP for this device?
I am using VPP v21.06-rc2 and did the following
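In case it helps: the BlueField-2 data-path ports are ConnectX-6 Dx based and need the DPDK mlx5 PMD, which is not enabled in a default VPP build. A rough sketch of what is usually required (the PCI address below is hypothetical; check yours with 'lspci' or vppctl 'show pci'):

```
# build/external/packages/dpdk.mk
DPDK_MLX5_PMD ?= y
DPDK_MLX5_COMMON_PMD ?= y

# startup.conf
dpdk {
  dev 0000:03:00.0   # hypothetical PCI address of the ConnectX port
}
```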
Congratulations to the FD.io community and all who contributed to yet
another on time VPP release!
Special thanks to Andrew for his work in automating and streamlining the
release process.
Thanks,
-daw-
On 6/30/2021 3:18 PM, Andrew Yourtchenko wrote:
Hi all,
VPP release 21.06 is complete
Agreed that upgrading to a newer code base is prudent, and yes, we are in
the midst of doing so. However, I did not see any obvious changes in this area,
so I am a bit pessimistic about the upgrade being the fix. Perhaps I missed a subtle
improvement in this area that folks could point me at to ease
Hi all,
VPP release 21.06 is complete and is available from the usual
packagecloud.io/fdio/release location!
I have verified using the scripts [0] that the new release installs
and runs on CentOS 8, Debian 10 (Buster), as well as Ubuntu 18.04
and 20.04.
Special shout-out goes to the pnat plugin
Hi Ben,
Thanks for your fast reply. Here is the requested output (I skipped the config for
other interfaces and VLANs):
vppctl show int addr
NCIC-1-v1 (up):
NCIC-1-v1.1 (up):
L3 10.10.203.1/29 ip4 table-id 1 fib-idx 4
host-Vpp2Host (up):
host-Vpp2Host.4093 (up):
L3 198.19.255.249/29 ip4 table-id
Hi Mechthild,
What Benoit said about punting. You might also find this useful:
https://github.com/FDio/vpp/blob/master/src/plugins/linux-cp/FEATURE.yaml
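As a minimal illustration of the linux-cp pairing described in that FEATURE.yaml (the interface and tap names here are hypothetical):

```
vppctl lcp create GigabitEthernet0/8/0 host-if vpp0
vppctl show lcp
```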
plus inline …
From: vpp-dev@lists.fd.io on behalf of Mechthild Buescher
via lists.fd.io
Date: Wednesday, 30 June 2021 at 18:06
To:
>From the trace output, it looks like VPP thinks 198.19.255.253 is one of its
>interface addresses, and hence tries to deliver it locally. As there is no
>configured listener for TCP packets, it defaults to punting, and as there is no
>punt rule, it drops.
Can you share the output of 'show int addr'?
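For illustration, registering a punt rule so that such locally-delivered TCP packets reach a listener instead of being dropped could look like this (the port number is hypothetical; double-check the exact syntax on your build):

```
vppctl set punt ipv4 tcp 8080
```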
Hi all,
we are using VPP with several FIB tables, and when we use 'next-hop-table' the
ip4-lookup somehow results in 'unknown ip protocol'. Can you please help?
Our setup:
* 1 (out of 2) with VPP and a DPDK interface
The VPP version is (both nodes):
vpp# show version verbose
Version:
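For context, the kind of inter-table route we are configuring looks roughly like this (the prefix and table ids are placeholders, and the exact CLI form should be double-checked against your VPP version):

```
vppctl ip route add 10.10.203.0/29 table 0 via next-hop-table 1
```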
Hi Neale,
Thanks for your reply. The bugfix partly solved the issue - VRRP goes into
master/backup and stays stable for a while. Unfortunately, it changes back to
master/master after some time (15 minutes to 1 hour). We are currently trying to
get more details and will come back to you.
But
Fragmentation is expensive. Therefore, because tcp originates the packets
locally, we do not want them to exceed the interface’s mtu. If you want to force
larger bursts from tcp, try enabling tso if the egress interface supports it.
Any particular reason why you’d like tcp to use such a large
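For reference, with the DPDK plugin TSO is a per-device option in startup.conf and also needs checksum offload enabled (the PCI address is hypothetical):

```
dpdk {
  enable-tcp-udp-checksum
  dev 0000:03:00.0 { tso on }
}
```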
On Wed, Jun 30, 2021 at 3:01 AM Benoit Ganne (bganne)
wrote:
> > What I'm trying to figure out is this: do I need to try and determine a
> > formula for the sizes that should be used for main heap and stat segment
> > based on X number of routes and Y number of worker threads? Or is there a
> >
Hi,
We are trying to compile VPP v21.06 from stable/2106 branch with MLX5 support.
We enabled MLX5 support in VPP by making the changes below:
vi build/external/packages/dpdk.mk
DPDK_MLX5_PMD ?= y
DPDK_MLX5_COMMON_PMD ?= y
We then executed
# make install-dep
This
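For completeness, the rest of the sequence we follow after editing dpdk.mk (a sketch, run from the top of the vpp source tree):

```
make install-dep       # OS build dependencies
make install-ext-deps  # rebuilds DPDK, picking up DPDK_MLX5_PMD=y
make build-release     # or 'make build' for a debug image
```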
You can build vpp in debug mode, and also add 'full-coredump' to the unix
section of startup.conf.
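Putting the suggestion together, a sketch of the relevant pieces:

```
# debug image with symbols
make build

# startup.conf
unix {
  full-coredump
}
```

You may also need to raise the core size limit (ulimit -c unlimited) in the environment that starts vpp.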
MJ
On Wed, 30 Jun, 2021, 4:02 pm chetan bhasin,
wrote:
> Hi,
>
> I am using VPP 20.05 with DPDK enabled. If the application crashes inside a
> DPDK library API, we won't get proper symbols
Hi,
I am using VPP 20.05 with DPDK enabled. If the application crashes inside a
DPDK library API, we don't get proper symbol information in the
rte_* frames under gdb.
Will dynamically linking the DPDK libraries resolve this?
Any other suggestions?
Thanks,
Chetan
Hi guys,
In my project, I set the tcp mtu to 9000 in the VPP startup file: tcp { mtu 9000 }, and
set the interface mtu to 1500. When sending a tcp packet, vlib_buffer_push_ip4 adds the
'dont frag' flag to the packet, so if the tcp data is more than 1500 bytes, I get an
"ip4 MTU exceeded and DF set" error.
always_inline void *
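If jumbo frames are really wanted end to end, one option is to raise the interface MTU to match the tcp mtu instead of relying on fragmentation (the interface name is hypothetical):

```
vppctl set interface mtu packet 9000 GigabitEthernet0/8/0
```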
> What I'm trying to figure out is this: do I need to try and determine a
> formula for the sizes that should be used for main heap and stat segment
> based on X number of routes and Y number of worker threads? Or is there a
> downside to just setting the main heap size to 32G (which seems like a