Hi John,

I checked Adarsh's setup with the api-segment section uncommented.

VPP is coming up now.

But the real issue, "VPP not able to get interface in VM", still occurs.

Below is the log:
csit@dut1:~$ sudo service vpp status
[sudo] password for csit:
● vpp.service - vector packet processing engine
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset:
enabled)
   Active: active (running) since Tue 2018-02-27 18:02:14 CST; 4h 24min ago
  Process: 23045 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
  Process: 23131 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited,
status=0/SUCCESS)
  Process: 23127 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm
/dev/shm/vpe-api (code=exited, status=0/SUCCESS)
 Main PID: 23136 (vpp_main)
    Tasks: 5
   Memory: 92.7M
      CPU: 56.166s
   CGroup: /system.slice/vpp.service
           └─23136 /usr/bin/vpp -c /etc/vpp/startup.conf

Feb 27 18:02:14 dut1 vpp[23136]: EAL:   Invalid NUMA socket, default to 0
Feb 27 18:02:14 dut1 vpp[23136]: EAL: Cannot open
/sys/bus/pci/devices/0000:02:01.0/resource0: No such file or directory
Feb 27 18:02:14 dut1 vnet[23136]: EAL:   Invalid NUMA socket, default to 0
Feb 27 18:02:14 dut1 vpp[23136]: EAL: Requested device 0000:02:01.0 cannot
be used
Feb 27 18:02:14 dut1 vnet[23136]: EAL: Cannot open
/sys/bus/pci/devices/0000:02:01.0/resource0: No such file or directory
Feb 27 18:02:14 dut1 vnet[23136]: EAL: Requested device 0000:02:01.0
cannot be used
Feb 27 18:02:14 dut1 vpp[23136]: unix_physmem_region_iommu_register: ioctl
(VFIO_IOMMU_MAP_DMA): Invalid argument
Feb 27 18:02:14 dut1 vpp[23136]: linux_epoll_file_update:119: epoll_ctl:
Operation not permitted (errno 1)
Feb 27 18:02:14 dut1 vpp[23136]: 0: dpdk_ipsec_process:1012: not enough
DPDK crypto resources, default to OpenSSL
Feb 27 18:02:14 dut1 vpp[23136]: 0: dpdk_lib_init:222: DPDK drivers found
no ports...
csit@dut1:~$

Any idea what could cause this problem and how to resolve it?
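The EAL errors above point at /sys/bus/pci/devices/0000:02:01.0/resource0 being absent. A quick check that may narrow this down (a sketch; check_pci is a hypothetical helper, and the PCI address is the one from the log):

```shell
# Report whether a PCI device exposes its BAR0 via sysfs.
# DPDK's EAL needs <device>/resource0 to exist and be mappable.
check_pci() {
  sysfs_root=$1
  addr=$2
  if [ -e "$sysfs_root/$addr/resource0" ]; then
    echo "resource0 present"
  else
    echo "resource0 missing"
  fi
}

# On the DUT this would be:
#   check_pci /sys/bus/pci/devices 0000:02:01.0
#   readlink /sys/bus/pci/devices/0000:02:01.0/driver   # which driver is bound
```

If resource0 is missing, the emulated NIC either is not exposed as a PCI device with a memory BAR inside the guest, or is still claimed by another driver, which would explain "Requested device 0000:02:01.0 cannot be used".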

We are using QEMU with virt-manager on an ARM server.
ubuntu@ubuntu:~$ virsh -c qemu:///system version --daemon
setlocale: No such file or directory
Compiled against library: libvirt 1.3.1
Using library: libvirt 1.3.1
Using API: QEMU 1.3.1
Running hypervisor: QEMU 2.5.0
Running against daemon: 1.3.1
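Given the guest environment above, one hedged thing to try is forcing the UIO driver in the dpdk section of startup.conf; the VFIO_IOMMU_MAP_DMA failure in the log suggests VFIO is not usable inside this guest. A minimal sketch, an assumption rather than a confirmed fix (the PCI address is the one from the log):

```
dpdk {
  ## Assumption: no working IOMMU inside the QEMU guest, so avoid vfio-pci.
  ## Options per the stock startup.conf: igb_uio, vfio-pci, uio_pci_generic, auto
  uio-driver uio_pci_generic
  dev 0000:02:01.0
}
```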



Thanks & Regards
khem

On Tue, Feb 27, 2018 at 7:31 PM, John DeNisco <jdeni...@cisco.com> wrote:

>
>
> Can you do the same, but as root?
>
>
>
> sudo /usr/bin/vpp -c /etc/vpp/startup.conf
>
>
>
> Also, gid vpp can be in the api-segment.
>
>
>
> I have attached my startup.conf for reference. You don’t need the dpdk
> section.
>
>
>
>
>
>
>
> From: adarsh m <addi.ada...@yahoo.in>
> Date: Tuesday, February 27, 2018 at 5:11 AM
> To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>, John DeNisco <jdeni...@cisco.com>
> Cc: Appana Prasad <pra...@huawei.com>, "lukai (D)" <luk...@huawei.com>, "Damjan Marion (damarion)" <damar...@cisco.com>
>
> Subject: Re: [vpp-dev] Error when trying to add interface to vpp on ARM server.
>
>
>
>
>
> Hi John,
>
>
>
> I have downgraded QEMU and libvirt to at least get the VM up.
>
>
>
> ubuntu@ubuntu:~$ virsh -c qemu:///system version --daemon
> setlocale: No such file or directory
> Compiled against library: libvirt 1.3.1
> Using library: libvirt 1.3.1
> Using API: QEMU 1.3.1
> Running hypervisor: QEMU 2.5.0
> Running against daemon: 1.3.1
>
> ubuntu@ubuntu:~$
>
> Commented "nodaemon" and added "interactive" in startup.conf, but "gid vpp"
> was in 2 places: unix and api-segment.
>
>
>
> When it is commented out only in unix, vpp starts but the interface doesn't
> come up.
>
>
>
> And when I comment it out in api-segment, or in both, vpp fails to
> start.
>
>
>
> I am attaching both logs below; please go through them and let us know about
> this issue.
>
>
>
> ------------------------------------------------------------
>
> Also, updating QEMU and libvirt to the latest versions fails to bring up the VM itself.
>
>
>
> I am using the latest QEMU 2.11, and since I am using virsh I updated libvirt
> to the latest 3.6.0 (Ubuntu 16.04).
>
> ubuntu@ubuntu:~$  virsh -c qemu:///system version --daemon
> setlocale: No such file or directory
> Compiled against library: libvirt 3.6.0
> Using library: libvirt 3.6.0
> Using API: QEMU 3.6.0
> Running hypervisor: QEMU 2.10.1
> Running against daemon: 3.6.0
>
> Neither version has an official release for Ubuntu 16.04.
>
>
>
>
>
> But when I try virt-install, it gets stuck after boot, maybe crashing the
> same as Gabriel reported.
>
> Logs:
>
> EFI stub: Booting Linux Kernel...
> EFI stub: Using DTB from configuration table
> EFI stub: Exiting boot services and installing virtual address map...
>
>
> virt-install command:
>
> sudo virt-install --name dut2 --ram 4096 --disk path=dut2.img,size=30
> --vcpus 2 --os-type linux --os-variant generic --cdrom
> './ubuntu-16.04.3-server-arm64.iso' --network default
>
>
>
>
>
> On Thursday 22 February 2018, 6:49:54 PM IST, John DeNisco <
> jdeni...@cisco.com> wrote:
>
>
>
>
>
> Hi Adarsh,
>
>
>
> Would you be able to try the following?
>
>
>
> In the startup config for now, could you add the following change in the unix
> section of the vpp startup?
>
>
>
>   # nodaemon
>
>   interactive
>
>
>
> Also, I see you have the line gid vpp. I am not sure what that does. Most
> of the configs I see do not have that. You could try removing that line.
>
>
>
> Then run VPP manually like this so we might get more error information.
>
>
>
> # cat /proc/meminfo – Be sure you still have some hugepages, sometimes if
> dpdk crashes it doesn’t return them.
>
> # service vpp stop
>
> # ps -eaf | grep vpp – Make sure there are no other VPP processes still
> running
>
> # /usr/bin/vpp -c /etc/vpp/startup.conf
>
>
>
> Then send us the results from the entire startup.
>
>
>
> Thanks,
>
>
>
> John
>
>
>
>
>
> From: <vpp-dev-boun...@lists.fd.io> on behalf of adarsh m via vpp-dev <vpp-dev@lists.fd.io>
> Reply-To: adarsh m <addi.ada...@yahoo.in>
> Date: Thursday, February 1, 2018 at 8:43 AM
> To: "Damjan Marion (damarion)" <damar...@cisco.com>, adarsh m via vpp-dev <vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] Error when trying to add interface to vpp on ARM server.
>
>
>
> Hi,
>
>
>
> After removing socket-mem, vpp is now stable, but when we try to connect
> through the CLI the connection is refused.
>
>
>
>
>
> ubuntu@vasily:~$ sudo service vpp status
> ● vpp.service - vector packet processing engine
>    Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor
> preset: enabled)
>    Active: active (running) since Thu 2018-02-01 21:42:23 CST; 1s ago
>   Process: 49787 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>   Process: 49793 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited,
> status=0/SUCCESS)
>   Process: 49790 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>  Main PID: 49796 (vpp)
>    CGroup: /system.slice/vpp.service
>            └─49796 /usr/bin/vpp -c /etc/vpp/startup.conf
>
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
> Feb 01 19:37:16 vasily vpp[43454]: /usr/bin/vpp[43454]: dpdk_config:1240:
> EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w
> 0002:f9:00.0 --master-lcore 0 --socket-mem 1024,
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: dpdk_config:1240: EAL init
> args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w
> 0002:f9:00.0 --master-lcore 0 --socket-mem 1024,0,0,0
> Feb 01 19:37:16 vasily vpp[43454]: EAL: VFIO support initialized
> Feb 01 19:37:16 vasily vnet[43454]: EAL: VFIO support initialized
>
>
>
> ubuntu@vasily:~$ sudo vppctl show int
> clib_socket_init: connect (fd 3, '/run/vpp/cli.sock'): Connection refused
> ubuntu@vasily:~$
>
>
>
> Startup.conf:
>
> unix {
>   nodaemon
>   log /tmp/vpp.log
>   full-coredump
>   cli-listen /run/vpp/cli.sock
>   gid vpp
> }
>
> api-trace {
> ## This stanza controls binary API tracing. Unless there is a very strong
> reason,
> ## please leave this feature enabled.
>   on
> ## Additional parameters:
> ##
> ## To set the number of binary API trace records in the circular buffer,
> configure nitems
> ##
> ## nitems <nnn>
> ##
> ## To save the api message table decode tables, configure a filename.
> Results in /tmp/<filename>
> ## Very handy for understanding api message changes between versions,
> identifying missing
> ## plugins, and so forth.
> ##
> ## save-api-table <filename>
> }
>
> api-segment {
>   gid vpp
> }
>
> cpu {
>     ## In the VPP there is one main thread and optionally the user can
> create worker(s)
>     ## The main thread and worker thread(s) can be pinned to CPU core(s)
> manually or automatically
>
>     ## Manual pinning of thread(s) to CPU core(s)
>
>     ## Set logical CPU core where main thread runs
>     # main-core 1
>
>     ## Set logical CPU core(s) where worker threads are running
>     # corelist-workers 2-3,18-19
>
>     ## Automatic pinning of thread(s) to CPU core(s)
>
>     ## Sets number of CPU core(s) to be skipped (1 ... N-1)
>     ## Skipped CPU core(s) are not used for pinning main thread and
> working thread(s).
>     ## The main thread is automatically pinned to the first available CPU
> core and worker(s)
>     ## are pinned to next free CPU core(s) after core assigned to main
> thread
>     # skip-cores 4
>
>     ## Specify a number of workers to be created
>     ## Workers are pinned to N consecutive CPU cores while skipping
> "skip-cores" CPU core(s)
>     ## and main thread's CPU core
>     # workers 2
>
>     ## Set scheduling policy and priority of main and worker threads
>
>     ## Scheduling policy options are: other (SCHED_OTHER), batch
> (SCHED_BATCH)
>     ## idle (SCHED_IDLE), fifo (SCHED_FIFO), rr (SCHED_RR)
>     # scheduler-policy fifo
>
>     ## Scheduling priority is used only for "real-time" policies (fifo and
> rr),
>     ## and has to be in the range of priorities supported for a particular
> policy
>     # scheduler-priority 50
> }
>
>
>
> dpdk {
>     dev 0002:f9:00.0
>     ## Change default settings for all interfaces
>     # dev default {
>         ## Number of receive queues, enables RSS
>         ## Default is 1
>         # num-rx-queues 3
>
>         ## Number of transmit queues, Default is equal
> to number of worker threads, or 1 if there are no worker threads
>         # num-tx-queues 3
>
>         ## Number of descriptors in transmit and receive rings
>         ## increasing or reducing number can impact performance
>         ## Default is 1024 for both rx and tx
>         # num-rx-desc 512
>         # num-tx-desc 512
>
>         ## VLAN strip offload mode for interface
>         ## Default is off
>         # vlan-strip-offload on
>     # }
>
>     ## Whitelist specific interface by specifying PCI address
>     # dev 0000:02:00.0
>
>     ## Whitelist specific interface by specifying PCI address and in
>     ## addition specify custom parameters for this interface
>     # dev 0000:02:00.1 {
>     #    num-rx-queues 2
>     # }
>
>     ## Specify bonded interface and its slaves via PCI addresses
>     ##
>         ## Bonded interface in XOR load balance mode (mode 2) with L3 and
> L4 headers
>     # vdev eth_bond0,mode=2,slave=0000:02:00.0,slave=0000:03:00.0,
> xmit_policy=l34
>     # vdev eth_bond1,mode=2,slave=0000:02:00.1,slave=0000:03:00.1,
> xmit_policy=l34
>     ##
>     ## Bonded interface in Active-Backup mode (mode 1)
>     # vdev eth_bond0,mode=1,slave=0000:02:00.0,slave=0000:03:00.0
>     # vdev eth_bond1,mode=1,slave=0000:02:00.1,slave=0000:03:00.1
>
>     ## Change UIO driver used by VPP, Options are: igb_uio, vfio-pci,
>     ## uio_pci_generic or auto (default)
>     # uio-driver vfio-pci
>
>     ## Disable multi-segment buffers, improves performance but
>     ## disables Jumbo MTU support
>     # no-multi-seg
>
>     ## Increase number of buffers allocated, needed only in scenarios with
>     ## large number of interfaces and worker threads. Value is per CPU
> socket.
>     ## Default is 16384
>     # num-mbufs 128000
>         num-mbufs 4095
>     ## Change hugepages allocation per-socket, needed only if there is
> need for
>     ## larger number of mbufs. Default is 256M on each detected CPU socket
>     # socket-mem 2048,2048
>
>     ## Disables UDP / TCP TX checksum offload. Typically needed to use
>     ## faster vector PMDs (together with no-multi-seg)
>     # no-tx-checksum-offload
>  }
>
> # Adjusting the plugin path depending on where the VPP plugins are:
> #plugins
> #{
> #    path /home/bms/vpp/build-root/install-vpp-native/vpp/lib64/
> vpp_plugins
> #}
>
> # Alternate syntax to choose plugin path
> #plugin_path /home/bms/vpp/build-root/install-vpp-native/vpp/lib64/
> vpp_plugins
>
> On Thursday 1 February 2018, 7:00:18 PM IST, adarsh m via vpp-dev <
> vpp-dev@lists.fd.io> wrote:
>
>
>
>
>
> It supports a 2-channel, 32-core processor (ARM64).
>
>
>
>
>
> On Thursday 1 February 2018, 6:47:31 PM IST, Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>
>
>
>
> You also added socket-mem, which is a pretty bad idea; try without it.
>
> If that doesn't help, then you will need to run VPP from the console and
> possibly use gdb to collect more details.
>
>
>
> Which ARM board is that?
>
>
>
>
>
> On 1 Feb 2018, at 14:15, adarsh m <addi.ada...@yahoo.in> wrote:
>
>
>
> Hi,
>
>
>
> This is on an ARM board, and yes, I have added the specific PCIe address
> in startup.conf
>
>
>
>  dpdk {
>        socket-mem 1024
>        dev 0002:f9:00.0
>
> }
>
>
>
>
>
> On Thursday 1 February 2018, 5:20:59 PM IST, Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>
>
>
>
> Please keep mailing list in CC.
>
>
>
> Those lines don't show that anything is wrong...
>
>
>
> Is this a 4-socket computer? Have you modified startup.conf?
>
>
>
>
>
> On 1 Feb 2018, at 12:40, adarsh m <addi.ada...@yahoo.in> wrote:
>
>
>
> Hi,
>
>
>
> Very sorry, please check the complete one.
>
>
>
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
> Feb 01 19:37:16 vasily vpp[43454]: /usr/bin/vpp[43454]: dpdk_config:1240:
> EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w
> 0002:f9:00.0 --master-lcore 0 --socket-mem 1024,
> Feb 01 19:37:16 vasily /usr/bin/vpp[43454]: dpdk_config:1240: EAL init
> args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w
> 0002:f9:00.0 --master-lcore 0 --socket-mem 1024,0,0,0
> Feb 01 19:37:16 vasily vpp[43454]: EAL: VFIO support initialized
> Feb 01 19:37:16 vasily vnet[43454]: EAL: VFIO support initialized
>
>
>
> On Thursday 1 February 2018, 4:48:35 PM IST, Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>
>
>
>
>
>
> Unfortunately the log you provided is incomplete and truncated, so I cannot
> help much....
>
>
>
> On 1 Feb 2018, at 11:59, adarsh m <addi.ada...@yahoo.in> wrote:
>
>
>
> Hi,
>
>
>
> I checked hugepages and the count was 0, so I freed them and increased the count to 5120
>
>
>
> ubuntu@vasily:~$ sudo -i
> root@vasily:~# echo 5120 > /proc/sys/vm/nr_hugepages
>
>
>
> now the previous error does not occur on start, but VPP is not
> stable; it goes dead a few seconds after starting.
>
>
>
> Logs :
>
> ubuntu@vasily:~$ grep HugePages_Free /proc/meminfo
> HugePages_Free:     5120
> ubuntu@vasily:~$
> ubuntu@vasily:~$
> ubuntu@vasily:~$
> ubuntu@vasily:~$ sudo service vpp status
> ● vpp.service - vector packet processing engine
>    Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor
> preset: enabled)
>    Active: inactive (dead) since Thu 2018-02-01 18:50:46 CST; 5min ago
>   Process: 42736 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>   Process: 42731 ExecStart=/usr/bin/vpp -c /etc/vpp/startup.conf
> (code=exited, status=0/SUCCESS)
>   Process: 42728 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited,
> status=0/SUCCESS)
>   Process: 42726 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>  Main PID: 42731 (code=exited, status=0/SUCCESS)
>
> Feb 01 18:50:46 vasily systemd[1]: vpp.service: Service hold-off time
> over, scheduling restart.
> Feb 01 18:50:46 vasily systemd[1]: Stopped vector packet processing engine.
> Feb 01 18:50:46 vasily systemd[1]: vpp.service: Start request repeated too
> quickly.
> Feb 01 18:50:46 vasily systemd[1]: Failed to start vector packet
> processing engine.
> Feb 01 18:56:12 vasily systemd[1]: Stopped vector packet processing engine.
> ubuntu@vasily:~$
> ubuntu@vasily:~$
> ubuntu@vasily:~$
> ubuntu@vasily:~$ sudo service vpp start
> ubuntu@vasily:~$
> ubuntu@vasily:~$
> ubuntu@vasily:~$ sudo service vpp status
> ● vpp.service - vector packet processing engine
>    Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor
> preset: enabled)
>    Active: active (running) since Thu 2018-02-01 18:56:49 CST; 298ms ago
>   Process: 42857 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>   Process: 42863 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited,
> status=0/SUCCESS)
>   Process: 42860 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>  Main PID: 42866 (vpp)
>    CGroup: /system.slice/vpp.service
>            └─42866 /usr/bin/vpp -c /etc/vpp/startup.conf
>
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.s
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plu
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: load_one_plugin:63: Loaded
> plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
> Feb 01 18:56:49 vasily vpp[42866]: /usr/bin/vpp[42866]: dpdk_config:1240:
> EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --f
> Feb 01 18:56:49 vasily /usr/bin/vpp[42866]: dpdk_config:1240: EAL init
> args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix v
> Feb 01 18:56:49 vasily vpp[42866]: EAL: VFIO support initialized
> Feb 01 18:56:49 vasily vnet[42866]: EAL: VFIO support initialized
>
> ubuntu@vasily:~$ grep HugePages_Free /proc/meminfo
> HugePages_Free:     2335
> ubuntu@vasily:~$ sudo service vpp status
> ● vpp.service - vector packet processing engine
>    Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor
> preset: enabled)
>    Active: inactive (dead) since Thu 2018-02-01 18:56:56 CST; 63ms ago
>   Process: 42917 ExecStopPost=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>   Process: 42914 ExecStart=/usr/bin/vpp -c /etc/vpp/startup.conf
> (code=exited, status=0/SUCCESS)
>   Process: 42911 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited,
> status=0/SUCCESS)
>   Process: 42908 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
>  Main PID: 42914 (code=exited, status=0/SUCCESS)
>
> Feb 01 18:56:56 vasily systemd[1]: vpp.service: Service hold-off time
> over, scheduling restart.
> Feb 01 18:56:56 vasily systemd[1]: Stopped vector packet processing engine.
> Feb 01 18:56:56 vasily systemd[1]: vpp.service: Start request repeated too
> quickly.
> Feb 01 18:56:56 vasily systemd[1]: Failed to start vector packet
> processing engine.
> ubuntu@vasily:~$ grep HugePages_Free /proc/meminfo
> HugePages_Free:     5120
> ubuntu@vasily:~$
>
>
>
> On Wednesday 31 January 2018, 9:30:19 PM IST, Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>
>
>
>
>
> On 31 Jan 2018, at 10:34, adarsh m via vpp-dev <vpp-dev@lists.fd.io>
> wrote:
>
>
>
> Hi,
>
>
>
> Please check: I am trying to bring up vpp with an interface on an ARM server
> but am facing an issue while doing so.
>
>
>
> Please let me know if there is any known issue or a method to correct this.
>
>
>
>
>
> ubuntu@vasily:~$ sudo service vpp status
> [sudo] password for ubuntu:
> ● vpp.service - vector packet processing engine
>    Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor
> preset: enab
>    Active: active (running) since Mon 2018-01-29 22:07:02 CST; 19h ago
>   Process: 2461 ExecStartPre=/sbin/modprobe uio_pci_generic (code=exited,
> status
>   Process: 2453 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm
> /dev/shm/
>  Main PID: 2472 (vpp_main)
>    CGroup: /system.slice/vpp.service
>            └─2472 /usr/bin/vpp -c /etc/vpp/startup.conf
>
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create
> dpdk_mbuf_
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING:
> Failed
> Jan 29 22:07:05 vasily vnet[2472]: unix_physmem_region_iommu_register:
> ioctl (VF
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create
> dpdk_mbuf_
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING:
> Failed
> Jan 29 22:07:05 vasily vnet[2472]: unix_physmem_region_iommu_register:
> ioctl (VF
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_pool_create: failed to create
> dpdk_mbuf_
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_buffer_pool_create:573: WARNING:
> Failed
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_ipsec_process:1011: not enough
> DPDK cryp
> Jan 29 22:07:05 vasily vnet[2472]: dpdk_lib_init:221: DPDK drivers found
> no port
>
>
>
>
>
> Looks like a hugepages issue. Can you show the full log? What you pasted above
> is truncated...
>
>
>
> _______________________________________________
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
>
>
>