I installed vpp-plugin-core and the crash is gone.
However, when I reserve 10 cores for the 2 nics, htop shows that some cores
are not polling.
cpu {
  main-core 2
  corelist-workers 4,6,8,10,12,14,16,18,20,22
}

dpdk {
  socket-mem 2048,0
  log-level debug
  no-tx-checksum-offload
  dev default {
    num-tx-desc 512
    num-rx-desc 512
  }
  dev 0000:1a:00.0 {
    workers 4,6,8,10,12
    name eth0
  }
  dev 0000:19:00.1 {
    workers 14,16,18,20,22
    name eth1
  }
}
Also, the core allocation vpp reports is not the same as what is specified in
the startup conf file:
vpp# sh threads
ID     Name            Type       LWP     Sched Policy (Priority)  lcore  Core  Socket State
0      vpp_main                   7099    other (0)                2      0     0
1      vpp_wk_0        workers    7101    other (0)                4      1     0
2      vpp_wk_1        workers    7102    other (0)                6      4     0
3      vpp_wk_2        workers    7103    other (0)                8      2     0
4      vpp_wk_3        workers    7104    other (0)                10     3     0
5      vpp_wk_4        workers    7105    other (0)                12     8     0
6      vpp_wk_5        workers    7106    other (0)                14     13    0
7      vpp_wk_6        workers    7107    other (0)                16     9     0
8      vpp_wk_7        workers    7108    other (0)                18     12    0     <== not polling
9      vpp_wk_8        workers    7109    other (0)                20     10    0
10     vpp_wk_9        workers    7110    other (0)                22     11    0     <== not polling
vpp# sh interface rx-placement
Thread 1 (vpp_wk_0):
node dpdk-input:
eth1 queue 0 (polling)
Thread 2 (vpp_wk_1):
node dpdk-input:
eth1 queue 1 (polling)
Thread 3 (vpp_wk_2):
node dpdk-input:
eth1 queue 2 (polling)
Thread 4 (vpp_wk_3):
node dpdk-input:
eth1 queue 3 (polling)
Thread 5 (vpp_wk_4):          <=== Why is this thread handling two queues?
node dpdk-input:
eth1 queue 4 (polling)
eth0 queue 0 (polling)
Thread 6 (vpp_wk_5):
node dpdk-input:
eth0 queue 3 (polling)
Thread 7 (vpp_wk_6):          <=== Same here.
node dpdk-input:
eth0 queue 1 (polling)
eth0 queue 4 (polling)
Thread 9 (vpp_wk_8):
node dpdk-input:
eth0 queue 2 (polling)
vpp#
htop shows that cores 18 and 22 are not polling. They correspond to workers 7
and 9, which are missing from the "sh int rx-placement" output above.
Any clue why? Is there more misconfiguration here, or is this a bug?
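
As a workaround I will probably try pinning the orphan queues to the idle
workers by hand with "set interface rx-placement". A minimal sketch, assuming
the worker argument takes the vpp_wk_<n> index (I have not verified the exact
indexing, so the numbers below are guesses based on the "sh threads" output):

vpp# set interface rx-placement eth0 queue 0 worker 7
vpp# set interface rx-placement eth0 queue 4 worker 9
vpp# sh interface rx-placement

But I would still like to understand why the default placement skips those
two workers in the first place.
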
On Wed, Nov 6, 2019 at 9:57 AM Chuan Han <[email protected]> wrote:
> Yes. You got it!
>
> There are two machines I am testing.
>
> The crashed one does not have any crypto engine. The other one did not
> crash, but its core allocation is messed up, e.g., some cores are not polling.
> I am not sure why. I attached the debug output from both machines.
>
> The weird thing is that when I reduced the number of workers, everything
> worked fine. I did send 8.5Gbps of udp/tcp traffic over the two machines and
> saw encryption/decryption happening. How could this be possible without a
> crypto engine?
>
>
> On Wed, Nov 6, 2019 at 4:30 AM Dave Barach (dbarach) <[email protected]>
> wrote:
>
>> "show plugin" might come in handy...
>>
>> -----Original Message-----
>> From: [email protected] <[email protected]> On Behalf Of Benoit
>> Ganne (bganne) via Lists.Fd.Io
>> Sent: Wednesday, November 6, 2019 3:42 AM
>> To: Chuan Han <[email protected]>; Arivudainambi Appachi gounder <
>> [email protected]>; Jerry Cen <[email protected]>
>> Cc: [email protected]
>> Subject: Re: [vpp-dev] Is there a limit when assigning corelist-workers
>> in vpp?
>>
>> Hi Chuan,
>>
>> Thanks for the quality info!
>> My best guess is that you did not install the package vpp-plugin-core, which
>> contains the crypto plugins.
>>
>> The issue is that you do not have any crypto handler registered to
>> process crypto. You can check what is available with the commands "show
>> crypto engines" and "show crypto handlers".
>>
>> #5 0x00007feb3e571d8d in vnet_crypto_is_set_handler
>> (alg=VNET_CRYPTO_ALG_AES_256_GCM) at
>> /w/workspace/vpp-beta-merge-1908-ubuntu1804/src/vnet/crypto/crypto.c:137
>> 137 return (NULL != cm->ops_handlers[alg]);
>> >>> list
>> 132 int
>> 133 vnet_crypto_is_set_handler (vnet_crypto_alg_t alg)
>> 134 {
>> 135 vnet_crypto_main_t *cm = &crypto_main;
>> 136
>> 137 return (NULL != cm->ops_handlers[alg]);
>> 138 }
>> 139
>> 140 void
>> 141 vnet_crypto_register_ops_handler (vlib_main_t * vm, u32
>> engine_index,
>> >>> p cm->ops_handlers
>> $1 = (vnet_crypto_ops_handler_t **) 0x0
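>>
>> FWIW, a rough sketch of one way that check could be hardened against the
>> no-engine case (untested, and not a substitute for installing
>> vpp-plugin-core so an engine actually registers its handlers):
>>
>>   int
>>   vnet_crypto_is_set_handler (vnet_crypto_alg_t alg)
>>   {
>>     vnet_crypto_main_t *cm = &crypto_main;
>>     /* also bail out if no crypto engine ever registered anything */
>>     return (cm->ops_handlers && NULL != cm->ops_handlers[alg]);
>>   }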
>>
>> Best
>> ben
>>
>> > -----Original Message-----
>> > From: Chuan Han <[email protected]>
>> > Sent: mardi 5 novembre 2019 20:04
>> > To: Benoit Ganne (bganne) <[email protected]>; Arivudainambi Appachi
>> > gounder <[email protected]>; Jerry Cen <[email protected]>
>> > Cc: [email protected]
>> > Subject: Re: [vpp-dev] Is there a limit when assigning
>> > corelist-workers in vpp?
>> >
>> > Thanks Ben for looking into this.
>> >
>> > Yes. It seems the symbols are mixed up with some other code files.
>> >
>> > https://github.com/river8/gnxi/tree/master/perf_testing/r740_2/vpp/cores
>> >
>> >
>> > The above is the compressed core file.
>> >
>> > It is not a private build. It is installed using this command:
>> > curl -s https://packagecloud.io/install/repositories/fdio/release/script.deb.sh | sudo bash
>> >
>> > vpp# sh dpdk version
>> > DPDK Version: DPDK 19.05.0
>> > DPDK EAL init args: -c 4 -n 4 --in-memory --log-level debug --vdev
>> > crypto_aesni_mb0,socket_id=0 --file-prefix vpp -w 0000:1a:00.0 -w
>> > 0000:19:00.1 --master-lcore 2
>> > vpp# sh ver
>> > vpp v19.08.1-release built by root on 7df7f6f22869 at Wed Sep 18 18:12:02 UTC 2019
>> > vpp#
>> >
>> >
>> > os version:
>> >
>> > root@esdn-lab2:~/gnxi/perf_testing/r740_2/vpp/cores# uname -a
>> > Linux esdn-lab2 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
>> > root@esdn-lab2:~/gnxi/perf_testing/r740_2/vpp/cores# cat /etc/lsb-release
>> > DISTRIB_ID=Ubuntu
>> > DISTRIB_RELEASE=18.04
>> > DISTRIB_CODENAME=bionic
>> > DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
>> > root@esdn-lab2:~/gnxi/perf_testing/r740_2/vpp/cores#
>> >
>> >
>> > On Tue, Nov 5, 2019 at 1:16 AM Benoit Ganne (bganne) <[email protected]> wrote:
>> >
>> >
>> > Hi Chuan,
>> >
>> > > I downloaded the glibc source file. Now, I can see the symbols.
>> > See
>> > > attachment for all details. Hope they help fix the issue.
>> >
>> > Glancing through the backtrace and code, I think the backtrace info
>> > is seriously wrong.
>> > Can you share the packages (deb/rpm) you use alongside the
>> > (compressed) corefile, as described here:
>> > https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html#core-files
>> >
>> > Thanks
>> > ben
>> >
>> >
>> > > On Mon, Nov 4, 2019 at 11:00 AM Chuan Han <[email protected]> wrote:
>> > >
>> > >
>> > > More details.
>> > >
>> > > warning: core file may not match specified executable
>> file.
>> > > [New LWP 10851]
>> > > [New LWP 10855]
>> > > [New LWP 10853]
>> > > [New LWP 10854]
>> > > [New LWP 10856]
>> > > [New LWP 10857]
>> > > [New LWP 10858]
>> > > [New LWP 10852]
>> > > [New LWP 10859]
>> > > [New LWP 10860]
>> > > [New LWP 10861]
>> > > [Thread debugging using libthread_db enabled]
>> > > Using host libthread_db library "/lib/x86_64-linux-
>> > > gnu/libthread_db.so.1".
>> > > Core was generated by `vpp -c vpp_startup/startup.conf'.
>> > > Program terminated with signal SIGABRT, Aborted.
>> > > #0 __GI_raise (sig=sig@entry=6) at
>> > > ../sysdeps/unix/sysv/linux/raise.c:51
>> > > 51 ../sysdeps/unix/sysv/linux/raise.c: No such file
>> or
>> > > directory.
>> > > [Current thread is 1 (Thread 0x7f8120c00780 (LWP 10851))]
>> > > (gdb)
>> > >
>> > >
>> > > On Mon, Nov 4, 2019 at 10:46 AM Chuan Han <[email protected]> wrote:
>> > >
>> > >
>> > > Not sure how to get all symbols. Can someone share
>> > some steps?
>> > >
>> > > (gdb) list
>> > > 46 in ../sysdeps/unix/sysv/linux/raise.c
>> > > (gdb) bt full
>> > > #0 __GI_raise (sig=sig@entry=6) at
>> > > ../sysdeps/unix/sysv/linux/raise.c:51
>> > > set = {__val = {1024, 140192549297216,
>> > > 140191458925760, 94479452800528, 140191458925744,
>> > 14738652586040953856,
>> > > 12, 94479452800528, 0, 14738652586040953856, 12, 1,
>> 94479452800528,
>> > 1, 12,
>> > > 140192554943764}}
>> > > pid = <optimized out>
>> > > tid = <optimized out>
>> > > ret = <optimized out>
>> > > #1 0x00007f811edf2801 in __GI_abort () at
>> > abort.c:79
>> > > save_stage = 1
>> > > act = {__sigaction_handler = {sa_handler =
>> > 0x0,
>> > > sa_sigaction = 0x0}, sa_mask = {__val = {140192549484596,
>> > 140192549745376,
>> > > 140192549484596, 140192554638615, 140192554894358,
>> 140192549484596,
>> > > 140191451682880, 4294967295, 4, 1024, 140191451683548,
>> > 140191451691708,
>> > > 14738652586040953856,
>> > > 140191436101232, 8, 8}}, sa_flags =
>> > 853347328,
>> > > sa_restorer = 0x7f80de1c1ef0}
>> > > sigs = {__val = {32, 0 <repeats 15
>> times>}}
>> > > __cnt = <optimized out>
>> > > __set = <optimized out>
>> > > __cnt = <optimized out>
>> > > __set = <optimized out>
>> > > #2 0x000055edb4f0ad44 in os_exit ()
>> > > No symbol table info available.
>> > > #3 0x00007f811f6f53d9 in ?? () from
>> > /usr/lib/x86_64-linux-
>> > > gnu/libvlib.so.19.08.1
>> > > No symbol table info available.
>> > > #4 <signal handler called>
>> > > No locals.
>> > > #5 0x00007f811f6db793 in ?? () from
>> > /usr/lib/x86_64-linux-
>> > > gnu/libvlib.so.19.08.1
>> > > No symbol table info available.
>> > > #6 0x00007f811f6df4d9 in ?? () from
>> > /usr/lib/x86_64-linux-
>> > > gnu/libvlib.so.19.08.1
>> > > No symbol table info available.
>> > > #7 0x00007f811f6a4eee in
>> > vlib_call_init_exit_functions ()
>> > > from /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
>> > > No symbol table info available.
>> > > #8 0x00007f811f6b5d17 in vlib_main () from
>> > /usr/lib/x86_64-
>> > > linux-gnu/libvlib.so.19.08.1
>> > > No symbol table info available.
>> > > #9 0x00007f811f6f4416 in ?? () from
>> > /usr/lib/x86_64-linux-
>> > > gnu/libvlib.so.19.08.1
>> > > No symbol table info available.
>> > > #10 0x00007f811f1cb834 in clib_calljmp () from
>> > > /usr/lib/x86_64-linux-gnu/libvppinfra.so.19.08.1
>> > > No symbol table info available.
>> > > #11 0x00007ffe3cb01770 in ?? ()
>> > > No symbol table info available.
>> > > #12 0x00007f811f6f586f in vlib_unix_main () from
>> > > /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
>> > >
>> > >
>> > > On Mon, Nov 4, 2019 at 10:45 AM Chuan Han via Lists.Fd.Io <[email protected]> wrote:
>> > >
>> > >
>> > > Here is the gdb vpp core output
>> > >
>> > > (gdb) f
>> > > #0 __GI_raise (sig=sig@entry=6) at
>> > > ../sysdeps/unix/sysv/linux/raise.c:51
>> > > 51 in
>> > ../sysdeps/unix/sysv/linux/raise.c
>> > > (gdb) bt
>> > > #0 __GI_raise (sig=sig@entry=6) at
>> > > ../sysdeps/unix/sysv/linux/raise.c:51
>> > > #1 0x00007f811edf2801 in __GI_abort () at
>> > abort.c:79
>> > > #2 0x000055edb4f0ad44 in os_exit ()
>> > > #3 0x00007f811f6f53d9 in ?? () from
>> > /usr/lib/x86_64-
>> > > linux-gnu/libvlib.so.19.08.1
>> > > #4 <signal handler called>
>> > > #5 0x00007f811f6db793 in ?? () from
>> > /usr/lib/x86_64-
>> > > linux-gnu/libvlib.so.19.08.1
>> > > #6 0x00007f811f6df4d9 in ?? () from
>> > /usr/lib/x86_64-
>> > > linux-gnu/libvlib.so.19.08.1
>> > > #7 0x00007f811f6a4eee in
>> > vlib_call_init_exit_functions
>> > > () from /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
>> > > #8 0x00007f811f6b5d17 in vlib_main ()
>> from
>> > > /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
>> > > #9 0x00007f811f6f4416 in ?? () from
>> > /usr/lib/x86_64-
>> > > linux-gnu/libvlib.so.19.08.1
>> > > #10 0x00007f811f1cb834 in clib_calljmp ()
>> > from
>> > > /usr/lib/x86_64-linux-gnu/libvppinfra.so.19.08.1
>> > > #11 0x00007ffe3cb01770 in ?? ()
>> > > #12 0x00007f811f6f586f in vlib_unix_main
>> ()
>> > from
>> > > /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
>> > > #13 0x3de8f63100000400 in ?? ()
>> > > #14 0x8100458b49ffd49d in ?? ()
>> > > #15 0x8b4800000100f868 in ?? ()
>> > > #16 0x6d8141fffffb6085 in ?? ()
>> > > #17 0x408b480000010008 in ?? ()
>> > > #18 0x480000000000c740 in ?? ()
>> > >
>> > >
>> > > Let me know more specific steps to pinpoint the issues.
>> > >
>> > > On Mon, Nov 4, 2019 at 10:13 AM Damjan Marion <[email protected]> wrote:
>> > >
>> > >
>> > > I remember doing corelist-workers with > 50 cores…
>> > >
>> > > If you paste the traceback we may have a better clue what is wrong…
>> > >
>> > > —
>> > > Damjan
>> > >
>> > >
>> > >
>> > >
>> > > On 4 Nov 2019, at 18:51, Chuan Han via Lists.Fd.Io <[email protected]> wrote:
>> > >
>> > > All even-numbered cores are on numa 0, which also hosts all the nics.
>> > >
>> > > It seems corelist-workers can only take a maximum of 8 cores.
>> > >
>> > > On Mon, Nov 4, 2019 at 9:45 AM Tkachuk, Georgii <[email protected]> wrote:
>> > >
>> > >
>> > > Hi Chuan, are cores 20 and 22 on socket0 or socket1? If they are on
>> > > socket1, the application is crashing because the aesni_mb driver is
>> > > pointing to socket0: vdev crypto_aesni_mb0,socket_id=0.
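>> > >
>> > > (For illustration only: if those cores do turn out to be on socket1,
>> > > the vdev line would presumably need to become
>> > > "vdev crypto_aesni_mb0,socket_id=1".)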
>> > >
>> > >
>> > >
>> > > George
>> > >
>> > >
>> > >
>> > > From: [email protected] On Behalf Of Chuan Han via Lists.Fd.Io
>> > > Sent: Monday, November 04, 2019 10:27 AM
>> > > To: vpp-dev <[email protected]>
>> > > Cc: [email protected]
>> > > Subject: [vpp-dev] Is there a limit when assigning corelist-workers in vpp?
>> > >
>> > >
>> > >
>> > > Hi, vpp experts,
>> > >
>> > >
>> > >
>> > > I am trying to allocate more cores to a phy nic. I want to allocate
>> > > cores 4,6,8,10 to eth0, and cores 12,14,16,18 to eth1.
>> > >
>> > >
>> > >
>> > > cpu {
>> > >   main-core 2
>> > >   # corelist-workers 4,6,8,10,12,14,16,18,20,22  <== This does not work. vpp crashes when starting.
>> > >   corelist-workers 4,6,8,10,12,14,16,18
>> > > }
>> > >
>> > > dpdk {
>> > >   socket-mem 2048,0
>> > >   log-level debug
>> > >   no-tx-checksum-offload
>> > >   dev default {
>> > >     num-tx-desc 512
>> > >     num-rx-desc 512
>> > >   }
>> > >   dev 0000:1a:00.0 {
>> > >     # workers 4,6,8,10,12
>> > >     workers 4,6,8,10
>> > >     name eth0
>> > >   }
>> > >   dev 0000:19:00.1 {
>> > >     # workers 14,16,18,20,22
>> > >     workers 12,14,16,18
>> > >     name eth1
>> > >   }
>> > >   # Use aesni mb lib.
>> > >   vdev crypto_aesni_mb0,socket_id=0
>> > >   # Use qat VF pcie addresses.
>> > >   # dev 0000:3d:01.0
>> > >   no-multi-seg
>> > > }
>> > >
>> > >
>> > >
>> > > After vpp starts, I can see eth1 got 4 cores but eth0 only got 3 cores.
>> > >
>> > >
>> > >
>> > > vpp# sh thread
>> > > ID     Name            Type       LWP     Sched Policy (Priority)  lcore  Core  Socket State
>> > > 0      vpp_main                   10653   other (0)                2      0     0
>> > > 1      vpp_wk_0        workers    10655   other (0)                4      1     0
>> > > 2      vpp_wk_1        workers    10656   other (0)                6      4     0
>> > > 3      vpp_wk_2        workers    10657   other (0)                8      2     0
>> > > 4      vpp_wk_3        workers    10658   other (0)                10     3     0
>> > > 5      vpp_wk_4        workers    10659   other (0)                12     8     0
>> > > 6      vpp_wk_5        workers    10660   other (0)                14     13    0
>> > > 7      vpp_wk_6        workers    10661   other (0)                16     9     0
>> > > 8      vpp_wk_7        workers    10662   other (0)                18     12    0     <=== core 18 is not used by eth0.
>> > > vpp# sh interface rx-placement
>> > > Thread 1 (vpp_wk_0):
>> > > node dpdk-input:
>> > > eth1 queue 0 (polling)
>> > > Thread 2 (vpp_wk_1):
>> > > node dpdk-input:
>> > > eth1 queue 1 (polling)
>> > > Thread 3 (vpp_wk_2):
>> > > node dpdk-input:
>> > > eth1 queue 2 (polling)
>> > > Thread 4 (vpp_wk_3):
>> > > node dpdk-input:
>> > > eth1 queue 3 (polling)
>> > > Thread 5 (vpp_wk_4):
>> > > node dpdk-input:
>> > > eth0 queue 0 (polling)
>> > > eth0 queue 2 (polling)
>> > > Thread 6 (vpp_wk_5):
>> > > node dpdk-input:
>> > > eth0 queue 3 (polling)
>> > > Thread 7 (vpp_wk_6):
>> > > node dpdk-input:
>> > > eth0 queue 1 (polling)
>> > > vpp#
>> > >
>> > >
>> > >
>> > > It seems there is a limitation on assigning cores to a nic.
>> > >
>> > > 1. I cannot allocate cores after 20 to corelist-workers.
>> > >
>> > > 2. Cores after 18 cannot be allocated to a nic.
>> > >
>> > >
>> > >
>> > > Is this a bug? Or some undocumented limitation?
>> > >
>> > >
>> > >
>> > > Thanks.
>> > >
>> > > Chuan
>> > >
>> > >
>> >
>> >
>>
>>