Thanks, Mark! You are absolutely right – lstopo showed the Intel NIC on NUMA node 1.
I was able to start OVS-DPDK with the following option in /etc/default/openvswitch-switch:

DPDK_OPTS='--dpdk -c 0x1 -n 4 -m 0,2048 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664'
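
Side note for the archives: besides lstopo, the NUMA node of a NIC can also be read straight from sysfs. A minimal sanity check, using the PCI address of my X520 port 0:

cat /sys/bus/pci/devices/0000:8f:00.0/numa_node   # prints the node ID (1 in my case), or -1 if the platform does not report it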
Now I have a DIFFERENT problem – no connectivity from the VM to the outside world via the OVS-DPDK bridge.

DPDK NIC bindings:

root@caesar:/home/cisco# dpdk_nic_bind --status

Network devices using DPDK-compatible driver
============================================
0000:8f:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=vfio-pci unused=ixgbe   <-- Intel X520-DA2 10 GE Port 0
0000:8f:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=vfio-pci unused=ixgbe   <-- Intel X520-DA2 10 GE Port 1

Network devices using kernel driver
===================================
0000:0f:00.0 'I350 Gigabit Network Connection' if=enp15s0f0 drv=igb unused=vfio-pci *Active*
…
root@caesar:/home/cisco#

I have two OVS-DPDK bridges created as described in https://help.ubuntu.com/16.04/serverguide/DPDK.html

root@caesar:/home/cisco# ovs-vsctl show
cf57d236-c8ec-4099-a621-8fda17920828
    Bridge "ovsdpdkbr0"
        Port "dpdk0"
            Interface "dpdk0"            <-- this should be my Intel X520 10GE Port 0
                type: dpdk
        Port "ovsdpdkbr0"
            Interface "ovsdpdkbr0"
                type: internal
        Port "vhost-user-1"              <-- this should be my VM vNIC0
            Interface "vhost-user-1"
                type: dpdkvhostuser
    Bridge "ovsdpdkbr1"
        Port "ovsdpdkbr1"
            Interface "ovsdpdkbr1"
                type: internal
        Port "vhost-user-2"              <-- this should be my VM vNIC1
            Interface "vhost-user-2"
                type: dpdkvhostuser
        Port "dpdk1"                     <-- I hope this is my Intel X520 10GE Port 1
            Interface "dpdk1"
                type: dpdk
    ovs_version: "2.5.0"
root@caesar:/home/cisco#

In my VM I have the following XML config for the vNICs:

<interface type='vhostuser'>
  <mac address='52:54:00:2e:4e:e0'/>
  <source type='unix' path='/var/run/openvswitch/vhost-user-1' mode='client'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</interface>
<interface type='vhostuser'>
  <mac address='52:54:00:95:c5:4f'/>
  <source type='unix' path='/var/run/openvswitch/vhost-user-2' mode='client'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>

I verified that both sockets exist:

root@caesar:/home/cisco# ls -la /var/run/openvswitch/vhost-user-*
srw-rw-r-- 1 libvirt-qemu kvm 0 May 27 15:45 /var/run/openvswitch/vhost-user-1
srw-rw-r-- 1 libvirt-qemu kvm 0 May 27 15:49 /var/run/openvswitch/vhost-user-2
root@caesar:/home/cisco#

The VM starts successfully and I can bring both interfaces up, BUT I can NOT ping the outside world from my VM. The VM can ping fine using ANOTHER interface:

<interface type='network'>
  <mac address='52:54:00:34:86:9c'/>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

So simple ping works through that interface; the problem is specifically the connectivity via OVS-DPDK.

What is the best way to troubleshoot connectivity issues with OVS-DPDK? With native OVS I simply did "ifconfig vnet1" and immediately saw packets and drops. Now interfaces like dpdk0 and vhost-user-1 are NOT visible to ifconfig.

root@caesar:/home/cisco# ovs-dpctl show
system@ovs-system:
        lookups: hit:0 missed:0 lost:0
        flows: 0
        masks: hit:0 total:1 hit/pkt:0.00
        port 0: ovs-system (internal)
root@caesar:/home/cisco#
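
So far the only per-port counters I have found are at the OpenFlow level; this is just what I am trying, assuming these commands behave the same on the DPDK (netdev) datapath as on native OVS:

# per-port rx/tx/error/drop counters, the closest thing to "ifconfig vnet1"
ovs-ofctl dump-ports ovsdpdkbr0

# per-flow packet/byte counters – shows whether traffic reaches the bridge at all
ovs-ofctl dump-flows ovsdpdkbr0

# raw statistics column of the OVSDB Interface table for a single port
ovs-vsctl get Interface dpdk0 statistics

I suspect the ovs-dpctl output above only covers the kernel datapath (hence the empty counters); if I understand correctly, "ovs-appctl dpctl/show" should reach the userspace datapath instead, but I have not verified that on 2.5.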
Thanks,
Nikolai

On 27.05.16, 11:09, "Kavanagh, Mark B" <mark.b.kavan...@intel.com> wrote:

>>
>>Hi!
>>
>>I am trying to install and use OVS with DPDK on Ubuntu 16.04, following this guide:
>>https://help.ubuntu.com/16.04/serverguide/DPDK.html
>>
>>On a Cisco UCS C240 with two physical CPUs (18 cores each) I have two Intel X520-DA2
>>cards, which are recognized and shown properly:
>>root@caesar:/home/cisco# dpdk_nic_bind --status
>>Network devices using DPDK-compatible driver
>>============================================
>>0000:8f:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=vfio-pci unused=ixgbe  <- looks good, vfio-pci driver shown properly
>>0000:8f:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=vfio-pci unused=ixgbe  <- looks good, vfio-pci driver shown properly
>>Network devices using kernel driver
>>===================================
>>0000:07:00.0 'VIC Ethernet NIC' if=enp7s0 drv=enic unused=vfio-pci
>>0000:08:00.0 'VIC Ethernet NIC' if=enp8s0 drv=enic unused=vfio-pci
>>0000:0f:00.0 'I350 Gigabit Network Connection' if=enp15s0f0 drv=igb unused=vfio-pci *Active*
>>…
>>Other network devices
>>=====================
>><none>
>>root@caesar:/home/cisco#
>>
>>If I tweak the OVS config as described in the Ubuntu DPDK guide with the following line
>>  echo "DPDK_OPTS='--dpdk -c 0x1 -n 4 -m 2048 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664'" | sudo tee -a /etc/default/openvswitch-switch
>>I get the following error message:
>>root@caesar:/home/cisco# ovs-vsctl show
>>cf57d236-c8ec-4099-a621-8fda17920828
>>    Bridge "ovsdpdkbr0"
>>        Port "dpdk0"
>>            Interface "dpdk0"
>>                type: dpdk
>>                error: "could not open network device dpdk0 (Cannot allocate memory)"
>>        Port "ovsdpdkbr0"
>>            Interface "ovsdpdkbr0"
>>                type: internal
>>    ovs_version: "2.5.0"
>>root@caesar:/home/cisco#
>>
>>My UCS C240 server has two NUMA nodes with 18 cores each. In the following forum
>>http://comments.gmane.org/gmane.linux.network.openvswitch.general/6760
>>I saw a similar issue, and the solution was to configure memory like this:
>>---
>>Start vswitchd process with 8GB on each numa node (if reserve memory on just 1 numa node, creating dpdk port will fail: cannot allocate memory)
>>./vswitchd/ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 8192,8192 -- unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
>>---
>>
>>If I change /etc/default/openvswitch-switch to
>>  DPDK_OPTS='--dpdk -c 0x1 -n 4 --socket-mem 4096,4096 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664'
>>then I can enter OVS CLI commands, but I have to use "Ctrl+C" to get the prompt back after any OVS CLI command. It looks, however, like OVS accepts and executes the commands.
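>>
>>Side note: to rule out that one NUMA node simply has no hugepages left for --socket-mem, I assume the per-node counters in sysfs can be checked like this (2 MB pages shown; 1 GB pages live under hugepages-1048576kB):
>>
>>cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages
>>cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages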
>>I can create OVS-DPDK bridges, but OVS cannot create a vhost_user socket at
>>/var/run/openvswitch/vhost-user-1 – the following CLI does not complete (I have to interrupt it):
>>
>>cisco@caesar:~$ sudo ovs-vsctl add-port ovsdpdkbr1 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
>>^C2016-05-26T17:11:16Z|00002|fatal_signal|WARN|terminating with signal 2 (Interrupt)
>>
>>cisco@caesar:~$ sudo ovs-vsctl show
>>cf57d236-c8ec-4099-a621-8fda17920828
>>    Bridge "ovsdpdkbr2"
>>        Port "ovsdpdkbr2"
>>            Interface "ovsdpdkbr2"
>>                type: internal
>>        Port "dpdk1"
>>            Interface "dpdk1"
>>                type: dpdk
>>    Bridge "ovsdpdkbr1"
>>        Port "vhost-user-1"
>>            Interface "vhost-user-1"
>>                type: dpdkvhostuser
>>        Port "ovsdpdkbr1"
>>            Interface "ovsdpdkbr1"
>>                type: internal
>>        Port "dpdk0"
>>            Interface "dpdk0"
>>                type: dpdk
>>    ovs_version: "2.5.0"
>>cisco@caesar:~$
>>
>>There is NO vhost-user-1 in /var/run/openvswitch/:
>>cisco@caesar:~$ ls -la /var/run/openvswitch/
>>total 4
>>drwxr-xr-x  2 root root  100 May 26 11:51 .
>>drwxr-xr-x 27 root root 1040 May 26 12:06 ..
>>srwxr-x---  1 root root    0 May 26 11:49 db.sock
>>srwxr-x---  1 root root    0 May 26 11:49 ovsdb-server.5559.ctl
>>-rw-r--r--  1 root root    5 May 26 11:49 ovsdb-server.pid
>>cisco@caesar:~$
>>
>>So, my questions are:
>>1. What is the right config line for servers with two physical CPUs (in my case node0 and node1 with 18 CPUs each) for
>>echo "DPDK_OPTS='--dpdk -c 0x1 -n 4 -m 2048 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664'" | sudo tee -a /etc/default/openvswitch-switch
>
>Hi Nikolai,
>
>You mentioned that when you specify the -m argument as '2048', you cannot add dpdk0, but when you specify "-m 4096, 4096" (i.e. 4k for NUMA node 0, 4k for NUMA node 1) the dpdk phy ports are added successfully.
>This leads me to believe that your NICs are installed in the PCI slots for NUMA node 1 – this is easily confirmed by use of the 'lstopo' tool, part of the 'hwloc' package: https://www.open-mpi.org/projects/hwloc/.
>To correct this, either move your NICs to the PCI slots for NUMA node 0, or change your -m argument to "0, 2048".
>
>Hope this helps,
>Mark
>
>>
>>2. How can OVS create a vhost_user socket at /var/run/openvswitch/vhost-user-1 ?
>>
>>And yes, HugePage support is enabled:
>>root@caesar:/home/cisco# cat /proc/meminfo | grep Huge
>>AnonHugePages:     16384 kB
>>HugePages_Total:      64
>>HugePages_Free:        0
>>HugePages_Rsvd:        0
>>HugePages_Surp:        0
>>Hugepagesize:       2048 kB
>>root@caesar:/home/cisco#
>>
>>In /etc/default/grub I have:
>>GRUB_CMDLINE_LINUX_DEFAULT="iommu=pt intel_iommu=on hugepages=8192 hugepagesz=1G hugepages=8 isolcpus=4,5,6,7,8"
>>
>>Thanks,
>>Nikolai

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss