[vpp-dev] master branch build failed #vpp-dev

2021-08-25 Thread jiangxiaoming
Hi all,
The VPP master branch build failed for me; does anyone have the same issue?

> @@@ Configuring vpp in
> /home/dev/code/vpp/build-root/build-vpp_debug-native/vpp 
> -- The C compiler identification is GNU 4.8.5
> -- Check for working C compiler: /usr/lib64/ccache/cc
> -- Check for working C compiler: /usr/lib64/ccache/cc - works
> -- Detecting C compiler ABI info
> -- Detecting C compiler ABI info - done
> -- Detecting C compile features
> -- Detecting C compile features - done
> -- Performing Test compiler_flag_march_haswell
> -- Performing Test compiler_flag_march_haswell - Failed
> -- Performing Test compiler_flag_mtune_haswell
> -- Performing Test compiler_flag_mtune_haswell - Failed
> -- Performing Test compiler_flag_march_tremont
> -- Performing Test compiler_flag_march_tremont - Failed
> -- Performing Test compiler_flag_mtune_tremont
> -- Performing Test compiler_flag_mtune_tremont - Failed
> -- Performing Test compiler_flag_march_skylake_avx512
> -- Performing Test compiler_flag_march_skylake_avx512 - Failed
> -- Performing Test compiler_flag_mtune_skylake_avx512
> -- Performing Test compiler_flag_mtune_skylake_avx512 - Failed
> -- Performing Test compiler_flag_mprefer_vector_width_256
> -- Performing Test compiler_flag_mprefer_vector_width_256 - Failed
> -- Performing Test compiler_flag_march_icelake_client
> -- Performing Test compiler_flag_march_icelake_client - Failed
> -- Performing Test compiler_flag_mtune_icelake_client
> -- Performing Test compiler_flag_mtune_icelake_client - Failed
> -- Performing Test compiler_flag_mprefer_vector_width_512
> -- Performing Test compiler_flag_mprefer_vector_width_512 - Failed
> -- Looking for ccache
> -- Looking for ccache - found
> -- Performing Test compiler_flag_no_address_of_packed_member
> -- Performing Test compiler_flag_no_address_of_packed_member - Success
> -- Looking for pthread.h
> -- Looking for pthread.h - found
> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
> -- Check if compiler accepts -pthread
> -- Check if compiler accepts -pthread - yes
> -- Found Threads: TRUE
> -- Performing Test HAVE_FCNTL64
> -- Performing Test HAVE_FCNTL64 - Failed
> -- Found OpenSSL: /usr/lib64/libcrypto.so (found version "1.1.1i")
> -- The ASM compiler identification is GNU
> -- Found assembler: /usr/lib64/ccache/cc
> -- Looking for libuuid
> -- Found uuid in /usr/include
> -- libbpf headers not found - af_xdp plugin disabled
> -- Intel IPSecMB found:
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include
> -- dpdk plugin needs libdpdk.a library - found at
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libdpdk.a
> -- Found DPDK 21.5.0 in
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include
> -- dpdk plugin needs numa library - found at /usr/lib64/libnuma.so
> -- linux-cp plugin needs libnl-3.so library - found at
> /usr/lib64/libnl-3.so
> -- linux-cp plugin needs libnl-route-3.so.200 library - found at
> /usr/lib64/libnl-route-3.so.200
> -- Found quicly 0.1.3-vpp in
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include
> -- rdma plugin needs libibverbs.a library - found at
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libibverbs.a
> -- rdma plugin needs librdma_util.a library - found at
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/librdma_util.a
> -- rdma plugin needs libmlx5.a library - found at
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libmlx5.a
> -- Performing Test IBVERBS_COMPILES_CHECK
> -- Performing Test IBVERBS_COMPILES_CHECK - Success
> -- -- libdaq headers not found - snort3 DAQ disabled
> -- -- libsrtp2.a library not found - srtp plugin disabled
> -- tlsmbedtls plugin needs mbedtls library - found at
> /usr/lib64/libmbedtls.so
> -- tlsmbedtls plugin needs mbedx509 library - found at
> /usr/lib64/libmbedx509.so
> -- tlsmbedtls plugin needs mbedcrypto library - found at
> /usr/lib64/libmbedcrypto.so
> -- Looking for SSL_set_async_callback
> -- Looking for SSL_set_async_callback - not found
> -- Found picotls in
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/include
> and
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external/lib/libpicotls-core.a
> -- subunit library not found - vapi tests disabled
> -- Found Python3: /usr/bin/python3.6 (found version "3.6.8") found
> components: Interpreter
> -- Configuration:
> VPP version         : 21.10-rc0~274-gee04de5
> VPP library version : 21.10
> GIT toplevel dir    : /home/dev/code/vpp
> Build type          : debug
> C flags             :
> Linker flags (apps) :
> Linker flags (libs) :
> Host processor      : x86_64
> Target processor    : x86_64
> Prefix path         : /opt/vpp/external/x86_64
> /home/dev/code/vpp/build-root/install-vpp_debug-native/external
> Install prefix      :
> /home/dev/code/vpp/build-root/install-vpp_debug-native/vpp
> -- Configuring 

Re: [vpp-dev] Packet processing time.

2021-08-25 Thread Venumadhav Josyula
Hi Mohammed / Dave,

How would you measure the latency of a packet? For example, can we measure
clocks and vectors/call for each node?

Thanks,
Regards,
Venu
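
For reference, here is a minimal sketch (plain C, illustrative only, not
VPP code) of the two-timestamps-per-packet measurement Dave describes in
the quoted mail below; in VPP the equivalent stamp would come from
clib_cpu_time_now():

  #include <stdint.h>
  #include <stdio.h>
  #include <x86intrin.h> /* __rdtsc() on x86_64 */

  int
  main (void)
  {
    uint64_t t_in = __rdtsc ();  /* stamp at ingress */
    /* ... stand-in for per-packet forwarding work ... */
    uint64_t t_out = __rdtsc (); /* stamp at egress */
    printf ("lap time: %llu cycles\n",
            (unsigned long long) (t_out - t_in));
    return 0;
  }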

On Tue, 21 Apr 2020 at 16:54, Mohammed Hawari  wrote:

> Hi Chris,
>
> Evaluating packet processing time in software is a very challenging issue;
> as mentioned by Dave, the measurement itself is likely to impact the
> performance we are trying to evaluate. I worked on that issue and have an
> unpublished, under-review academic paper proposing a solution using the
> NetFPGA-SUME platform. Basically, I built a custom FPGA design mimicking a
> NIC capable of timestamping every packet at ingress and egress (immediately
> after a packet arrives from the wire, and immediately before it departs on
> the wire). I also wrote a DPDK driver for that NIC and made it work with
> VPP, so that the latency introduced by (VPP+PCI-based DMA) can be
> evaluated. I played with this design and VPP in various configurations
> (l2-patch, l2 crossconnect and l3 forward), and I think it could be an
> interesting tool to diagnose latency issues on a “per-packet” basis. The
> downside is, of course, that from the perspective of VPP this is a custom
> NIC with a custom driver (not necessarily super-optimised), and the
> evaluated packet forwarding latency takes the driver’s performance into
> account.
>
> If you are interested in discussing this work, I can give you more details
> and resources in unicast, don’t hesitate to contact me :)
>
> Cheers,
>
> Mohammed Hawari
> Software Engineer & PhD student
> Cisco Systems
>
>
> On 18 Apr 2020, at 22:14, Dave Barach via lists.fd.io <
> dbarach=cisco@lists.fd.io> wrote:
>
> If you turn on the main loop dispatch event logs and look at the results
> in the g2 viewer [or dump them in ascii] you can make pretty accurate lap
> time estimates for any workload. Roughly speaking, packets take 1 lap time
> to arrive and then leave.
>
> The “circuit-node <node-name>” game produces one elog event per frame, so
> you can look at several million frame circuit times.
>
> Individually timestamping packets would be more precise, but calling
> clib_cpu_time_now(...) (rdtsc instrs on x86_64) twice per packet would
> almost certainly affect forwarding performance.
>
> See
> https://fd.io/docs/vpp/master/gettingstarted/developers/eventviewer.html
>
> /*?
> * Control event logging of api, cli, and thread barrier events
> * With no arguments, displays the current trace status.
> * Name the event groups you wish to trace or stop tracing.
> *
> * @cliexpar
> * @clistart
> * elog trace api cli barrier
> * elog trace api cli barrier disable
> * elog trace dispatch
> * elog trace circuit-node ethernet-input
> * elog trace
> * @cliend
> * @cliexcmd{elog trace [api][cli][barrier][disable]}
> ?*/
> /* *INDENT-OFF* */
>
> From: vpp-dev@lists.fd.io On Behalf Of Christian Hopps
> Sent: Saturday, April 18, 2020 3:14 PM
> To: vpp-dev
> Cc: Christian Hopps
> Subject: [vpp-dev] Packet processing time.
>
>
> The recent discussion on reference counting and barrier timing has got me
> interested in packet processing time. I realize there's a way to use "show
> runtime" along with knowledge of the arc a packet follows, but I'm curious
> whether something more straightforward has been attempted, where packets
> are timestamped on ingress (or creation) and stats are collected on egress
> (transmission)?
>
> I also have an unrelated interest in hooking into the graph immediately
> post-transmission -- I'd like to adjust an input queue size only when the
> packet that was enqueued on it is actually transmitted on the wire, and not
> just handed off downstream on the arc -- this would likely be the same
> place packet stat collection might occur. :)
>
> Thanks,
> Chris.




Re: [vpp-dev] Issues adding ACL with binary_api

2021-08-25 Thread satish amara
I was able to run the same command on the CentOS platform when I built the
image. I am seeing this issue when I install the VPP package following the
"Downloading and Installing VPP" guide (The Vector Packet Processor 21.06
documentation, https://fd.io/docs/vpp/latest/gettingstarted/installing/#installing-on-centos ).




Re: [vpp-dev] Linux CP: crash in lcp_node.c

2021-08-25 Thread Matthew Smith via lists.fd.io
Hi Pim,

Responses are inline...

On Tue, Aug 24, 2021 at 4:47 AM Pim van Pelt  wrote:

> Hoi,
>
> I've noticed that when a linuxcp-enabled VPP 21.06 with multiple threads
> receives many ARP requests, it eventually crashes in lcp_arp_phy_node in
> lcp_node.c:675 and :775, because we do a vlib_buffer_copy() which returns
> NULL, after which we try to dereference the result. How to repro:
> 1) create a few interfaces/subints and give them IP addresses in Linux and
> VPP. I made 5 phy subints and 5 subints on a bondethernet.
> 2) rapidly fping the Linux CP and at the same time continuously flush the
> neighbor cache on the Linux namespace:
> On the vpp machine in 'dataplane' namespace:
>   while :; do ip nei flush all; done
> On a Linux machine connected to VPP:
>   while :; do fping -c 1 -B 1 -p 10 10.1.1.2 10.1.2.2 10.1.3.2
> 10.1.4.2 10.1.5.2 10.0.1.2 10.0.2.2 10.0.3.2 10.0.4.2 10.0.5.2
> 2001:db8:1:1::2 2001:db8:1:2::2 2001:db8:1:3::2 2001:db8:1:4::2
> 2001:db8:1:5::2 2001:db8:0:1::2 2001:db8:0:2::2 2001:db8:0:3::2
> 2001:db8:0:4::2 2001:db8:0:5::2; done
>
> VPP will now be seeing lots of ARP traffic to and from the host. After a
> while, c0 or c1 from lcp_node.c:675 and lcp_node.c:775 will be NULL and
> cause a crash.
> I temporarily worked around this by simply adding:
>
> @@ -675,6 +675,10 @@ VLIB_NODE_FN (lcp_arp_phy_node)
>
>       c0 = vlib_buffer_copy (vm, b0);
>       vlib_buffer_advance (b0, len0);
>
> +     // pim(2021-08-24) -- address SIGSEGV when copy returns NULL
> +     if (!c0)
> +       continue;
> +
>       /* Send to the host */
>       vnet_buffer (c0)->sw_if_index[VLIB_TX] =
>         lip0->lip_host_sw_if_index;
>
> but I'm not very comfortable in this part of VPP, and I'm sure there's a
> better way to catch the buffer copy failing?
>

No, checking whether the return value is null is the correct way to detect
failure.



> I haven't quite understood this code yet, but shouldn't we free c0 and c1
> in these functions?
>

No, c0 and c1 are enqueued to another node (interface-output). The buffers
are freed after being transmitted or dropped by subsequent nodes. Freeing
them in this node while also enqueuing them would result in problems.



> It seems that when I'm doing my rapid ping/arp/flush exercise above, VPP
> is slowly consuming more memory (as seen by show memory main-heap; all 4
> threads are monotonously growing by a few hundred kB per minute of runtime).
>

I made a quick attempt to reproduce the issue and was unsuccessful, though I
used only physical interfaces, not a bond interface or subinterfaces.

How many buffers are being allocated (vppctl show buffers)? Does the issue
occur if you only send IPv4 packets instead of both IPv4 and IPv6? Are
other packets besides the ICMP and ARP being forwarded while you're running
this test? Is there any other control plane activity occurring during the
test (e.g. BGP adding routes)?



> If somebody could help me take a look, I'd appreciate it.
>

It would be better to make your patch like this:

  if (c0)
    {
      /* Send to the host */
      vnet_buffer (c0)->sw_if_index[VLIB_TX] =
        lip0->lip_host_sw_if_index;
      reply_copies[n_copies++] = vlib_get_buffer_index (vm, c0);
    }

When you do the opposite ('if (!c0) continue;'), you skip the call to
vlib_validate_buffer_enqueue_x2() at the end of the loop body, which would
enqueue the original buffers to the next node. So those buffers will leak,
and the issue will be exacerbated.
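
For context, here is a heavily simplified sketch of the dual-loop shape
involved. This is illustrative only, not the actual lcp_node.c body; the
variable names follow the fragments quoted above, and the surrounding node
boilerplate is elided:

  while (n_left_from >= 2 && n_left_to_next >= 2)
    {
      /* ... fetch buffers b0/b1 (indices bi0/bi1), resolve lip0/lip1,
         pick next0/next1 ... */

      c0 = vlib_buffer_copy (vm, b0); /* may return NULL under buffer pressure */
      if (c0)
        {
          /* Use and enqueue the copy only if the copy succeeded. */
          vnet_buffer (c0)->sw_if_index[VLIB_TX] = lip0->lip_host_sw_if_index;
          reply_copies[n_copies++] = vlib_get_buffer_index (vm, c0);
        }

      /* This must run on every iteration, copy or no copy, so that the
         original buffers b0/b1 are always handed to the next node. */
      vlib_validate_buffer_enqueue_x2 (vm, node, next_index, to_next,
                                       n_left_to_next, bi0, bi1,
                                       next0, next1);
    }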

Thanks,
-Matt




[vpp-dev] linux-cp: completing the netlink listener

2021-08-25 Thread Pim van Pelt
Hoi list,

After completing tasks 1-3 of the Linux CP work (see my mail from Aug 12 to
the list), I've now turned my attention to the Netlink listener. I've
documented that journey in
https://ipng.ch/s/articles/2021/08/25/vpp-4.html which explains my test
setup and the results with the new plugin, which is based on an older patch
in https://gerrit.fd.io/r/c/vpp/+/31122.

Mirroring Linux changes into VPP is complete, with the exception of routes
(although that's already done by Matt/Neale in #31122, so not much work for
me to do). As a teaser, here's a screencast showing me committing a few
commands to VPP [1], with all further interaction done solely with `ip` in
the Linux namespace [2].

Here's a teaser screencast for those following along:
https://asciinema.org/a/432243

I'll be offering my contributions in this order:
1) review/commit Part 1 (Linux interface plumbing) in #33481 <- we are here :)
2) mirroring VPP changes into Linux
3) auto-creating sub-interfaces from VPP in Linux
4) this change, mirroring Linux changes into VPP

I look forward to your review! I hope to make the 21.10 cutoff date for
these changes, as I think they'll make the Linux CP plugin much more
appealing for users!

groet,
Pim

[1] VPP configuration that creates two LCP devices:
create bond mode lacp load-balance l34
bond add BondEthernet0 TenGigabitEthernet3/0/2
bond add BondEthernet0 TenGigabitEthernet3/0/3
set interface state TenGigabitEthernet3/0/2 up
set interface state TenGigabitEthernet3/0/3 up
lcp default netns dataplane
lcp lcp-sync on
lcp lcp-auto-subint on
lcp create TenGigabitEthernet3/0/0 host-if e0
lcp create BondEthernet0 host-if be0

[2] Linux CP side configuration, copied into VPP:
IP="sudo ip netns exec dataplane ip"
$IP link add link e0 name e0.1234 type vlan id 1234
$IP link add link e0.1234 name e0.1235 type vlan id 1000
$IP link add link e0 name e0.1236 type vlan id 2345 proto 802.1ad
$IP link add link e0.1236 name e0.1237 type vlan id 1000
$IP link set e0 up mtu 9000

$IP addr add 10.0.1.1/30 dev e0
$IP addr add 2001:db8:0:1::1/64 dev e0
$IP addr add 10.0.2.1/30 dev e0.1234
$IP addr add 2001:db8:0:2::1/64 dev e0.1234
$IP addr add 10.0.3.1/30 dev e0.1235
$IP addr add 2001:db8:0:3::1/64 dev e0.1235
$IP addr add 10.0.4.1/30 dev e0.1236
$IP addr add 2001:db8:0:4::1/64 dev e0.1236
$IP addr add 10.0.5.1/30 dev e0.1237
$IP addr add 2001:db8:0:5::1/64 dev e0.1237
$IP link add link be0 name be0.1234 type vlan id 1234
$IP link add link be0.1234 name be0.1235 type vlan id 1000
$IP link add link be0 name be0.1236 type vlan id 2345 proto 802.1ad
$IP link add link be0.1236 name be0.1237 type vlan id 1000
$IP link set be0 up mtu 9000

$IP addr add 10.1.1.1/30 dev be0
$IP addr add 2001:db8:1:1::1/64 dev be0
$IP addr add 10.1.2.1/30 dev be0.1234
$IP addr add 2001:db8:1:2::1/64 dev be0.1234
$IP addr add 10.1.3.1/30 dev be0.1235
$IP addr add 2001:db8:1:3::1/64 dev be0.1235
$IP addr add 10.1.4.1/30 dev be0.1236
$IP addr add 2001:db8:1:4::1/64 dev be0.1236
$IP addr add 10.1.5.1/30 dev be0.1237
$IP addr add 2001:db8:1:5::1/64 dev be0.1237

-- 
Pim van Pelt 
PBVP1-RIPE - http://www.ipng.nl/




[vpp-dev] ARP learning via workers and thread barrier locking

2021-08-25 Thread Satya Murthy
Hi All,

As per my current understanding of the code for ARP learning:

1. ARP is learned on a VPP worker
2. Send to main thread via rpc, by adding to the rcp queue
3. main thread picks up this from rpc queue
4. main thread takes thread barrier lock and updates the ARP table
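
For illustration, a hedged sketch in C of steps 1-3 above, loosely modeled
on VPP's ip-neighbor learning path. The argument struct and handler names
here are hypothetical; vl_api_rpc_call_main_thread is the vlibmemory
worker-to-main RPC entry point:

  /* Hypothetical argument block marshalled through the RPC queue. */
  typedef struct
  {
    u32 sw_if_index;   /* interface the ARP entry was learned on */
    ip4_address_t ip;  /* sender protocol address */
    mac_address_t mac; /* sender hardware address */
  } arp_learn_args_t;

  /* Steps 3-4: runs on the main thread, which takes the thread barrier
     before updating the (non-thread-safe) ARP/neighbor table. */
  static void
  arp_learn_on_main (arp_learn_args_t * a)
  {
    /* update the neighbor table here */
  }

  /* Step 2, on a worker: the args are copied into the RPC queue, so a
     stack variable is fine. */
  arp_learn_args_t args = { .sw_if_index = sw_if_index0 };
  vl_api_rpc_call_main_thread ((void *) arp_learn_on_main,
                               (u8 *) &args, sizeof (args));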

In step 4, we are taking the thread barrier lock for ARP learning (perhaps
because the ARP module is not thread-safe).
We are observing that this results in worker threads being locked up for a
few milli/microseconds, causing some tail drops.

Since ARP learning is quite a common activity in the network, isn't this an
issue for VPP in general? We are seeing tail drops in VPP workers during
ARP spikes.
Any ideas on how to handle this, or on making ARP learning lock-less so
that it can be done from any worker?

--
Thanks & Regards,
Murthy




Re: [vpp-dev] Issues adding ACL with binary_api

2021-08-25 Thread Satya Murthy
vl_api_acl_add_replace_reply_t_handler: *73* : ACL index: 0

The *73* points to the error code VNET_API_ERROR_INVALID_ARGUMENT, which
gives some clue.

--
Thanks & Regards,
Murthy




[vpp-dev] Tx and Rx Queue placement for memif

2021-08-25 Thread Swarup Sengupta via lists.fd.io

We are trying to build an application which communicates with VPP over a
memif interface. VPP has 8 worker threads, and IP addresses have been
pinned to each of these threads (via rx-placement). Our application
connects to VPP using memif (VPP master and app slave), with 8 queues.

In our application, we have 8 worker threads, each reading from one of the
8 memif queues, such that:

- rx-queue-1 is on thread 1
- rx-queue-2 is on thread 2, and so on.

Each of these threads writes to the tx queue with the same queue-id as its
rx queue, e.g.:

- thread-1 writes on tx-queue-1
- thread-2 writes on tx-queue-2

From another post on this mailing list, we understand that the tx-queues
are placed in sequence of the worker threads: tx-0 on the main thread, tx-1
on wk_1, tx-2 on wk_2, and so on. So is it safe to assume that if we place
our memif rx-queues in the same sequence, then packets belonging to an IP
address will land on the same thread, e.g.

IP_1 --> vpp_wk_1 --> memif_tx_1 : memif_rx_queue_1 --> app_thread_1 -->
memif_tx_queue_1 : memif_rx_1 --> vpp_wk_1

Is this understanding correct? If I am missing something, or there is a
better way to achieve this, please suggest.
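
As a hedged illustration of the rx-queue pinning described above: the VPP
CLI can place individual rx queues on specific workers (the interface name
below is illustrative, and VPP's queue/worker numbering is 0-based):

  set interface rx-placement memif0/0 queue 0 worker 0
  set interface rx-placement memif0/0 queue 1 worker 1
  show interface rx-placement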

Thanks,

Swarup.
