Looping in JohnD, who may be able to help... :)

Ed

On Thu, Feb 1, 2018 at 7:43 AM adarsh m via vpp-dev <vpp-dev@lists.fd.io>
wrote:

> Hi,
>
> After removing socket-mem, VPP is now stable, but when we try to connect
> through the CLI, the connection is refused.
>
>
> ubuntu@vasily:~$ sudo service vpp status
> ● vpp.service - vector packet processing engine
>    Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor
> preset: enabled)
>    Active: active (running) since Thu 2018-02-01 21:42:23 CST; 1s ago
>   Process: 49787 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>   Process: 49793 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited,
> status=0/SUCCESS)
>   Process: 49790 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>  Main PID: 49796 (vpp)
>    CGroup: /system.slice/vpp.service
>            └─49796 /usr/bin/vpp -c /etc/vpp/startup.conf
>
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
> Feb 01 19:37:16 vasily vpp[43454]: /usr/bin/vpp[43454]: dpdk_config:1240:
> EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w
> 0002:f9:00.0 --master-lcore 0 --socket-mem 1024,
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: dpdk_config:1240: EAL init
> args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w
> 0002:f9:00.0 --master-lcore 0 --socket-mem 1024,0,0,0
> Feb 01 19:37:16 vasily vpp[43454]: EAL: VFIO support initialized
> Feb 01 19:37:16 vasily vnet[43454]: EAL: VFIO support initialized
>
>
> ubuntu@vasily:~$ sudo vppctl show int
> clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'): Connection refused
> ubuntu@vasily:~$
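> When vppctl reports "Connection refused" on a unix socket, it usually means
> nothing is listening: either VPP never created the socket, or it died and
> left a stale one. A quick diagnostic sketch (the socket path and process
> name come from the logs in this thread):

```shell
# "Connection refused" on a unix socket means no listener: either VPP
# never created /run/vpp/cli.sock, or it crashed and left a stale socket.
ls -l /run/vpp/cli.sock 2>/dev/null || echo "cli.sock missing"

# Confirm whether a vpp process is actually still alive.
pgrep -a vpp || echo "no vpp process running"
```

> Note also that startup.conf below sets "gid vpp", so a non-root user would
> need to be in the vpp group to use vppctl without sudo.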
>
>
> Startup.conf:
> unix {
>   nodaemon
>   log /tmp/vpp.log
>   full-coredump
>   cli-listen /run/vpp/cli.sock
>   gid vpp
> }
>
> api-trace {
> ## This stanza controls binary API tracing. Unless there is a very strong
> reason,
> ## please leave this feature enabled.
>   on
> ## Additional parameters:
> ##
> ## To set the number of binary API trace records in the circular buffer,
> configure nitems
> ##
> ## nitems <nnn>
> ##
> ## To save the api message table decode tables, configure a filename.
> Results in /tmp/<filename>
> ## Very handy for understanding api message changes between versions,
> identifying missing
> ## plugins, and so forth.
> ##
> ## save-api-table <filename>
> }
>
> api-segment {
>   gid vpp
> }
>
> cpu {
>     ## In VPP there is one main thread and optionally the user can
> create worker(s)
>     ## The main thread and worker thread(s) can be pinned to CPU core(s)
> manually or automatically
>
>     ## Manual pinning of thread(s) to CPU core(s)
>
>     ## Set logical CPU core where main thread runs
>     # main-core 1
>
>     ## Set logical CPU core(s) where worker threads are running
>     # corelist-workers 2-3,18-19
>
>     ## Automatic pinning of thread(s) to CPU core(s)
>
>     ## Sets number of CPU core(s) to be skipped (1 ... N-1)
>     ## Skipped CPU core(s) are not used for pinning the main thread and
>     ## worker thread(s).
>     ## The main thread is automatically pinned to the first available CPU
>     ## core, and worker(s) are pinned to the next free CPU core(s) after
>     ## the core assigned to the main thread
>     # skip-cores 4
>
>     ## Specify a number of workers to be created
>     ## Workers are pinned to N consecutive CPU cores while skipping
> "skip-cores" CPU core(s)
>     ## and main thread's CPU core
>     # workers 2
>
>     ## Set scheduling policy and priority of main and worker threads
>
>     ## Scheduling policy options are: other (SCHED_OTHER), batch
> (SCHED_BATCH)
>     ## idle (SCHED_IDLE), fifo (SCHED_FIFO), rr (SCHED_RR)
>     # scheduler-policy fifo
>
>     ## Scheduling priority is used only for "real-time" policies (fifo
>     ## and rr), and has to be in the range of priorities supported for a
>     ## particular policy
>     # scheduler-priority 50
> }
>
>
>
> dpdk {
>     dev 0002:f9:00.0
>     ## Change default settings for all interfaces
>     # dev default {
>         ## Number of receive queues, enables RSS
>         ## Default is 1
>         # num-rx-queues 3
>
>         ## Number of transmit queues. Default is equal to the number of
>         ## worker threads, or 1 if there are no worker threads
>         # num-tx-queues 3
>
>         ## Number of descriptors in transmit and receive rings;
>         ## increasing or reducing the number can impact performance.
>         ## Default is 1024 for both rx and tx
>         # num-rx-desc 512
>         # num-tx-desc 512
>
>         ## VLAN strip offload mode for interface
>         ## Default is off
>         # vlan-strip-offload on
>     # }
>
>     ## Whitelist specific interface by specifying PCI address
>     # dev 0000:02:00.0
>
>     ## Whitelist specific interface by specifying PCI address and in
>     ## addition specify custom parameters for this interface
>     # dev 0000:02:00.1 {
>     #    num-rx-queues 2
>     # }
>
>     ## Specify bonded interface and its slaves via PCI addresses
>     ##
>         ## Bonded interface in XOR load balance mode (mode 2) with L3 and
> L4 headers
>     # vdev
> eth_bond0,mode=2,slave=0000:02:00.0,slave=0000:03:00.0,xmit_policy=l34
>     # vdev
> eth_bond1,mode=2,slave=0000:02:00.1,slave=0000:03:00.1,xmit_policy=l34
>     ##
>     ## Bonded interface in Active-Backup mode (mode 1)
>     # vdev eth_bond0,mode=1,slave=0000:02:00.0,slave=0000:03:00.0
>     # vdev eth_bond1,mode=1,slave=0000:02:00.1,slave=0000:03:00.1
>
>     ## Change UIO driver used by VPP. Options are: igb_uio, vfio-pci,
>     ## uio_pci_generic or auto (default)
>     # uio-driver vfio-pci
>
>     ## Disable multi-segment buffers; improves performance but
>     ## disables jumbo MTU support
>     # no-multi-seg
>
>     ## Increase number of buffers allocated; needed only in scenarios with
>     ## a large number of interfaces and worker threads. Value is per CPU
>     ## socket.
>     ## Default is 16384
>     # num-mbufs 128000
>         num-mbufs 4095
>     ## Change hugepages allocation per-socket; needed only if there is a
>     ## need for a larger number of mbufs. Default is 256M on each detected
>     ## CPU socket
>     # socket-mem 2048,2048
>
>     ## Disables UDP / TCP TX checksum offload. Typically needed to use
>     ## faster vector PMDs (together with no-multi-seg)
>     # no-tx-checksum-offload
>  }
>
> # Adjusting the plugin path depending on where the VPP plugins are:
> #plugins
> #{
> #    path /home/bms/vpp/build-root/install-vpp-native/vpp/lib64/vpp_plugins
> #}
>
> # Alternate syntax to choose plugin path
> #plugin_path
> /home/bms/vpp/build-root/install-vpp-native/vpp/lib64/vpp_plugins
>
>
> On Thursday 1 February 2018, 7:00:18 PM IST, adarsh m via vpp-dev <
> vpp-dev@lists.fd.io> wrote:
>
>
> It's a 2-channel, 32-core ARM64 processor.
>
>
>
> On Thursday 1 February 2018, 6:47:31 PM IST, Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>
> You also added socket-mem, which is a pretty bad idea; try without it.
> If that doesn't help, then you will need to run VPP from the console and
> possibly use gdb to collect more details.
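> Running it from the console could look something like this (a sketch; it
> assumes gdb is installed, and "nodaemon" is already set in startup.conf so
> VPP stays in the foreground):

```shell
# Run VPP in the foreground so any error lands on the terminal; guarded
# so the command is only attempted where the VPP binary is installed.
VPP_BIN=/usr/bin/vpp
VPP_CONF=/etc/vpp/startup.conf
if [ -x "$VPP_BIN" ]; then
    sudo "$VPP_BIN" -c "$VPP_CONF"
else
    echo "vpp binary not found at $VPP_BIN"
fi

# To catch the crash in gdb instead, run:
#   sudo gdb --args /usr/bin/vpp -c /etc/vpp/startup.conf
# then "run", and "bt" after it dies to get a backtrace.
```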
>
> Which ARM board is that?
>
>
> On 1 Feb 2018, at 14:15, adarsh m <addi.ada...@yahoo.in> wrote:
>
> Hi,
>
> This is on an ARM board, and yes, I have added the specific PCIe address
> in startup.conf:
>
>  dpdk {
>        socket-mem 1024
>        dev 0002:f9:00.0
>
> }
>
>
> On Thursday 1 February 2018, 5:20:59 PM IST, Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>
> Please keep mailing list in CC.
>
> Those lines don't show that anything is wrong...
>
> Is this a 4-socket computer? Have you modified startup.conf?
>
>
> On 1 Feb 2018, at 12:40, adarsh m <addi.ada...@yahoo.in> wrote:
>
> Hi,
>
> Very sorry, please check the complete one:
>
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
> Feb 01 19:37:16 vasily vpp[43454]: /usr/bin/vpp[43454]: dpdk_config:1240:
> EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w
> 0002:f9:00.0 --master-lcore 0 --socket-mem 1024,
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: dpdk_config:1240: EAL init
> args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w
> 0002:f9:00.0 --master-lcore 0 --socket-mem 1024,0,0,0
> Feb 01 19:37:16 vasily vpp[43454]: EAL: VFIO support initialized
> Feb 01 19:37:16 vasily vnet[43454]: EAL: VFIO support initialized
>
>
>
>
> On Thursday 1 February 2018, 4:48:35 PM IST, Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>
>
> Unfortunately, the log you provided is incomplete and truncated, so I
> cannot help much...
>
> On 1 Feb 2018, at 11:59, adarsh m <addi.ada...@yahoo.in> wrote:
>
> Hi,
>
> I checked hugepages and the count was 0, so I increased it to 5120:
>
> ubuntu@vasily:~$ sudo -i
> root@vasily:~# echo 5120 > /proc/sys/vm/nr_hugepages
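> The echo into /proc only lasts until reboot; to make the hugepage
> reservation persistent, the usual approach is a sysctl setting (a sketch;
> the drop-in file name is hypothetical, and 5120 matches the value above):

```
# /etc/sysctl.d/80-vpp.conf  (hypothetical file name)
vm.nr_hugepages = 5120
```

> applied with "sudo sysctl --system" or a reboot.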
>
>
> Now the previous error does not occur at startup, but VPP is not stable;
> it dies a few seconds after starting.
>
> Logs :
> ubuntu@vasily:~$ grep HugePages_Free /proc/meminfo
> HugePages_Free:     5120
> ubuntu@vasily:~$
> ubuntu@vasily:~$
> ubuntu@vasily:~$
> ubuntu@vasily:~$ sudo service vpp status
> ● vpp.service - vector packet processing engine
>    Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor
> preset: enabled)
>    Active: inactive (dead) since Thu 2018-02-01 18:50:46 CST; 5min ago
>   Process: 42736 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>   Process: 42731 ExecStart=/usr/bin/vpp -c /etc/vpp/startup.conf
> (code=exited, status=0/SUCCESS)
>   Process: 42728 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited,
> status=0/SUCCESS)
>   Process: 42726 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>  Main PID: 42731 (code=exited, status=0/SUCCESS)
>
> Feb 01 18:50:46 vasily systemd[1]: vpp.service: Service hold-off time
> over, scheduling restart.
> Feb 01 18:50:46 vasily systemd[1]: Stopped vector packet processing engine.
> Feb 01 18:50:46 vasily systemd[1]: vpp.service: Start request repeated too
> quickly.
> Feb 01 18:50:46 vasily systemd[1]: Failed to start vector packet
> processing engine.
> Feb 01 18:56:12 vasily systemd[1]: Stopped vector packet processing engine.
> ubuntu@vasily:~$
> ubuntu@vasily:~$
> ubuntu@vasily:~$
> ubuntu@vasily:~$ sudo service vpp start
> ubuntu@vasily:~$
> ubuntu@vasily:~$
> ubuntu@vasily:~$ sudo service vpp status
> ● vpp.service - vector packet processing engine
>    Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor
> preset: enabled)
>    Active: active (running) since Thu 2018-02-01 18:56:49 CST; 298ms ago
>   Process: 42857 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>   Process: 42863 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited,
> status=0/SUCCESS)
>   Process: 42860 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>  Main PID: 42866 (vpp)
>    CGroup: /system.slice/vpp.service
>            └─42866 /usr/bin/vpp -c /etc/vpp/startup.conf
>
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.s
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plu
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
> Feb 01 18:56:49 vasily vpp[42866]: /usr/bin/vpp[42866]: dpdk_config:1240:
> EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --f
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: dpdk_config:1240: EAL init
> args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix v
> Feb 01 18:56:49 vasily vpp[42866]: EAL: VFIO support initialized
> Feb 01 18:56:49 vasily vnet[42866]: EAL: VFIO support initialized
>
> ubuntu@vasily:~$ grep HugePages_Free /proc/meminfo
> HugePages_Free:     2335
> ubuntu@vasily:~$ sudo service vpp status
> ● vpp.service - vector packet processing engine
>    Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor
> preset: enabled)
>    Active: inactive (dead) since Thu 2018-02-01 18:56:56 CST; 63ms ago
>   Process: 42917 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>   Process: 42914 ExecStart=/usr/bin/vpp -c /etc/vpp/startup.conf
> (code=exited, status=0/SUCCESS)
>   Process: 42911 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited,
> status=0/SUCCESS)
>   Process: 42908 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>  Main PID: 42914 (code=exited, status=0/SUCCESS)
>
> Feb 01 18:56:56 vasily systemd[1]: vpp.service: Service hold-off time
> over, scheduling restart.
> Feb 01 18:56:56 vasily systemd[1]: Stopped vector packet processing engine.
> Feb 01 18:56:56 vasily systemd[1]: vpp.service: Start request repeated too
> quickly.
> Feb 01 18:56:56 vasily systemd[1]: Failed to start vector packet
> processing engine.
> ubuntu@vasily:~$ grep HugePages_Free /proc/meminfo
> HugePages_Free:     5120
> ubuntu@vasily:~$
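> HugePages_Free dropping to 2335 while VPP was briefly running and returning
> to 5120 after it died shows the pages being allocated and then released;
> the counters are easy to watch (a sketch):

```shell
# Hugepage counters: comparing Total and Free across a VPP start/stop
# shows whether pages are being allocated and then released again.
grep -E 'HugePages_(Total|Free)' /proc/meminfo
```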
>
>
>
> On Wednesday 31 January 2018, 9:30:19 PM IST, Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>
>
> On 31 Jan 2018, at 10:34, adarsh m via vpp-dev <vpp-dev@lists.fd.io>
> wrote:
>
> Hi,
>
> I am trying to bring up VPP with an interface on an ARM server, but I am
> facing an issue while doing so.
>
> Please let me know if there is an existing issue or a way to correct it.
>
>
> ubuntu@vasily:~$ sudo service vpp status
> [sudo] password for ubuntu:
> ● vpp.service - vector packet processing engine
>    Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor
> preset: enab
>    Active: active (running) since Mon 2018-01-29 22:07:02 CST; 19h ago
>   Process: 2461 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited,
> status
>   Process: 2453 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/
>  Main PID: 2472 (vpp_main)
>    CGroup: /system.slice/vpp.service
>            └─2472 /usr/bin/vpp -c /etc/vpp/startup.conf
>
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create
> dpdk_mbuf_
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING:
> Failed
> Jan 29 22:07:05 vasily vnet[2472]: unix_physmem_region_iommu_register:
> ioctl (VF
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create
> dpdk_mbuf_
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING:
> Failed
> Jan 29 22:07:05 vasily vnet[2472]: unix_physmem_region_iommu_register:
> ioctl (VF
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create
> dpdk_mbuf_
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING:
> Failed
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_ipsec_process:1011: not enough
> DPDK cryp
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_lib_init:221: DPDK drivers found
> no port
>
>
>
> Looks like a hugepages issue. Can you show the full log? What you pasted
> above is truncated...
>
> _______________________________________________
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
>
>
