Hi Team,

I am trying to run DPDK apps inside Docker containers. The containers have vhost-user interfaces connected to VPP.
The memory region count that VPP shows for the vhost-user interfaces is 0, which I believe can happen if the VHOST_USER_SET_MEM_TABLE message never reached VPP. Any ideas on how to debug this from the DPDK end? Is there a way to tell whether the virtio/vhost-user handshake was initiated at all? At the end of this mail I have also listed a few checks I was planning to try; please let me know if they make sense or if there is a better approach.

Regards,
Shiv

VPP status:
----------------
vpp# show vhost-user
..
Interface: VirtualEthernet0/0/0 (ifindex 2)
virtio_net_hdr_sz 12
 features mask (0xffffffff):
 features (0x10008000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_F_INDIRECT_DESC (28)
 protocol features (0x0)
 socket filename /tmp/sock2.sock type server errno "Success"

 rx placement:
 tx placement: spin-lock
   thread 0 on vring 0

 Memory regions (total 0)

Interface: VirtualEthernet0/0/1 (ifindex 3)
virtio_net_hdr_sz 12
 features mask (0xffffffff):
 features (0x10008000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_F_INDIRECT_DESC (28)
 protocol features (0x0)
 socket filename /tmp/sock1.sock type server errno "Success"

 rx placement:
 tx placement: spin-lock
   thread 0 on vring 0

 Memory regions (total 0)

DPDK container configuration
-----------------------------------------
I have compiled the DPDK app and I launch the container with the command below:

sudo docker run -it -v /tmp/sock1.sock:/var/run/usvhost1 -v /tmp/sock2.sock:/var/run/usvhost2 -v /dev/hugepages/:/dev/hugepages dpdk-app-l2fwd

Subsequently, I launch the DPDK app:

./bin/testpmd -l 16-17 -n 4 --log-level=8 --socket-mem=1024,1024 --no-pci --vdev=virtio_user0,path=/var/run/usvhost1,mac=00:00:00:01:01:01 --vdev=virtio_user1,path=/var/run/usvhost2,mac=00:00:00:01:01:02 -- -i --txqflags=0xf00 --disable-hw-vlan

EAL: Master lcore 16 is ready (tid=4a1488c0;cpuset=[16])
EAL: lcore 17 is ready (tid=48824700;cpuset=[17])
EAL: Search driver virtio_user0 to probe device virtio_user0
EAL: Search driver virtio_user1 to probe device virtio_user1
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=155456, size=2176, socket=1
Configuring Port 0 (socket 0)
Port 0: 00:00:00:01:01:01
Configuring Port 1 (socket 0)
Port 1: 00:00:00:01:01:02
Checking link statuses...
Done
testpmd>
testpmd> show port info all

********************* Infos for port 0 *********************
MAC address: 00:00:00:01:01:01
Driver name: net_virtio_user
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off
  filter off
  qinq(extend) off
No flow type is supported.
Max possible RX queues: 1
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1

********************* Infos for port 1 *********************
MAC address: 00:00:00:01:01:02
Driver name: net_virtio_user
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 64
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off
  filter off
  qinq(extend) off
No flow type is supported.
Max possible RX queues: 1
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Max possible TX queues: 1
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
testpmd>
----
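
Checks I was planning to try (please correct me if these are off-base):

1. strace testpmd inside the container to confirm that virtio_user actually connects to the sockets and sends the vhost-user messages. My understanding is that VHOST_USER_SET_MEM_TABLE should show up as a sendmsg() on the unix socket carrying the hugepage fds as SCM_RIGHTS ancillary data, so something like the following (assuming strace is available in the container image):

strace -f -e trace=socket,connect,sendmsg,recvmsg -o /tmp/testpmd.strace \
    ./bin/testpmd -l 16-17 -n 4 --log-level=8 --socket-mem=1024,1024 --no-pci \
    --vdev=virtio_user0,path=/var/run/usvhost1,mac=00:00:00:01:01:01 \
    --vdev=virtio_user1,path=/var/run/usvhost2,mac=00:00:00:01:01:02 \
    -- -i --txqflags=0xf00 --disable-hw-vlan

If there is no connect() to /var/run/usvhost1 and /var/run/usvhost2, or no sendmsg() traffic after the connect, I suppose that would point at the container setup rather than at VPP.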
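
2. Check from the host whether anything has connected to the sockets VPP created, and verify that the bind-mounted socket files in the container are still the same inode as the ones VPP is listening on. My understanding is that if VPP unlinks and recreates a socket after the container is started, the bind mount keeps pointing at the old, dead socket. Assuming ss and GNU stat are available:

ss -x -p | grep -E 'sock1.sock|sock2.sock'
stat -c '%i' /tmp/sock1.sock /tmp/sock2.sock            # on the host
stat -c '%i' /var/run/usvhost1 /var/run/usvhost2        # inside the container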
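
3. To isolate whether the problem is on the virtio_user side or on the VPP side, point the same container at a DPDK vhost PMD backend instead of VPP, e.g. by stopping VPP and running another testpmd on the host as the vhost-user server on the same socket paths. This is only a sketch; the vdev name is net_vhost0 or eth_vhost0 depending on the DPDK version:

./bin/testpmd -l 18-19 -n 4 --socket-mem=1024,1024 --no-pci \
    --vdev='net_vhost0,iface=/tmp/sock1.sock,queues=1' \
    --vdev='net_vhost1,iface=/tmp/sock2.sock,queues=1' \
    -- -i

If the memory regions show up on the vhost PMD side but not in VPP, I would start suspecting the feature/protocol negotiation between virtio_user and VPP rather than the container setup. Does that sound like a reasonable way to narrow it down?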
