Tweaking DPDK_OPTIONS in /etc/sysconfig/openvswitch and a reinstall fixed the issue. ☺

Here is my documentation of the configuration steps that worked for me under RHEL 7.2 with OVS-DPDK 2.4.0.

Start with
https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/single/configure-dpdk-for-openstack-networking/
Make sure that you have reserved memory on the needed NUMA node(s):
nano /etc/sysconfig/openvswitch
#OPTIONS=""
DPDK_OPTIONS="-l 1,18 -n 1 --socket-mem 8192,8192"

In the config above, we reserved 8 GB on NUMA socket 0 and 8 GB on NUMA socket 1. That means you should see 16 GB in use by OVS-DPDK:
[root@caesar-rhel ~]# grep HugePages_ /proc/meminfo
HugePages_Total:     248
HugePages_Free:      232
HugePages_Rsvd:        0
HugePages_Surp:        0
[root@caesar-rhel ~]#
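If you want to confirm that the pages were really taken from both sockets, the per-NUMA counters in sysfs are more precise than /proc/meminfo (a quick check, assuming 1 GB hugepages as used here):
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages
cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages
With 8192 MB per socket, each node should show 8 fewer free pages once ovs-vswitchd is running.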

Check threads:
# Check current stats
   ovs-appctl dpif-netdev/pmd-stats-show
# Clear previous stats
   ovs-appctl dpif-netdev/pmd-stats-clear

[root@caesar-rhel ~]# ovs-appctl dpif-netdev/pmd-stats-show
main thread:
        emc hits:0
        megaflow hits:0
        miss:0
        lost:0
        polling cycles:25378696 (100.00%)
        processing cycles:0 (0.00%)
pmd thread numa_id 1 core_id 18:
        emc hits:73693036
        megaflow hits:61
        miss:44
        lost:0
        polling cycles:1205343674907 (88.04%)
        processing cycles:163781555220 (11.96%)
        avg cycles per packet: 18578.73 (1369125230127/73693141)
        avg processing cycles per packet: 2222.48 (163781555220/73693141)
pmd thread numa_id 0 core_id 0:
        emc hits:5751
        megaflow hits:4
        miss:12
        lost:0
        polling cycles:1472440406225 (100.00%)
        processing cycles:19629399 (0.00%)
        avg cycles per packet: 255325131.89 (1472460035624/5767)
        avg processing cycles per packet: 3403.75 (19629399/5767)
[root@caesar-rhel ~]#

If needed, you can change the directory used for vhost socket creation:
ovs-vsctl --no-wait set Open_vSwitch . 
other_config:vhost-sock-dir=/var/run/openvswitch/

systemctl restart openvswitch.service
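
To double-check what OVS actually picked up after the restart:
ovs-vsctl get Open_vSwitch . other_config:vhost-sock-dir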

Define your OVS bridges and ports, and create the vhost-user sockets, as described in
https://github.com/openvswitch/ovs/blob/master/INSTALL.DPDK.md#ovstc:
ovs-vsctl add-br bridge0 -- set bridge bridge0 datapath_type=netdev
ovs-vsctl add-port bridge0 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl add-port bridge0 dpdk1 -- set Interface dpdk1 type=dpdk
ovs-vsctl add-port bridge0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
ovs-vsctl add-port bridge0 vhost-user-2 -- set Interface vhost-user-2 type=dpdkvhostuser
[root@caesar-rhel ~]# ovs-vsctl show
afceb04c-555c-4878-b75c-f55881fbe5ee
    Bridge "bridge0"
        Port "vhost-user-2"
            Interface "vhost-user-2"
                type: dpdkvhostuser
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
        Port "vhost-user-1"
            Interface "vhost-user-1"
                type: dpdkvhostuser
        Port "bridge0"
            Interface "bridge0"
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
    ovs_version: "2.4.0"
[root@caesar-rhel ~]#
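
(Optional) If you want to hard-wire traffic between the dpdk and vhost-user ports instead of relying on NORMAL switching, you can add explicit flows as shown in INSTALL.DPDK.md; the port numbers below are just an example, check yours with "ovs-ofctl show bridge0":
ovs-ofctl del-flows bridge0
ovs-ofctl add-flow bridge0 in_port=1,action=output:3
ovs-ofctl add-flow bridge0 in_port=3,action=output:1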

Check Socket Location:
[root@caesar-rhel ~]# ls -la /var/run/openvswitch/
total 8
drwxr-xr-x.  2 root root  220 Oct 13 20:34 .
drwxr-xr-x. 30 root root 1060 Oct 13 20:17 ..
srwx------.  1 root root    0 Oct 13 20:20 bridge0.mgmt
srwx------.  1 root root    0 Oct 13 20:20 bridge0.snoop
srwx------.  1 root root    0 Oct 13  2016 db.sock
srwx------.  1 root root    0 Oct 13  2016 ovsdb-server.1503.ctl
-rw-r--r--.  1 root root    5 Oct 13  2016 ovsdb-server.pid
srwx------.  1 root root    0 Oct 13 20:17 ovs-vswitchd.4601.ctl
-rw-r--r--.  1 root root    5 Oct 13 20:17 ovs-vswitchd.pid
srwxr-xr-x.  1 root root    0 Oct 13 20:25 vhost-user-1
srwxr-xr-x.  1 root root    0 Oct 13 20:26 vhost-user-2
[root@caesar-rhel ~]#

Access rights: https://lists.01.org/pipermail/dpdk-ovs/2015-July/002235.html
When starting openvswitch you need to run it with the qemu group:
  sudo su -g "qemu" -c "umask 002; /path/to/ovs-vswitchd ...."
This will change the socket permissions to allow read/write access for the qemu group.
Alternatively, you can run qemu as root by setting the user and group to root in /etc/libvirt/qemu.conf.
Adjusting the group of OVS is the more secure solution.
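A third, quick-and-dirty option is to fix up the socket permissions after ovs-vswitchd has created them (this has to be repeated every time the sockets are recreated):
chgrp qemu /var/run/openvswitch/vhost-user-*
chmod g+rw /var/run/openvswitch/vhost-user-*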

I run qemu as root:
[root@augustus-rhel ~]# grep "root" /etc/libvirt/qemu.conf
user = "root"
group = "root"
# Minimum must be greater than 0, however when QEMU is not running as root,
[root@augustus-rhel ~]#


Prepare the XML file for the VM. With 2.4.0, OVS must act as the vhost-user server:
http://openvswitch.org/pipermail/dev/2016-July/074971.html
A new other_config DB option has been added called 'vhost_driver_mode'.
By default this is set to 'server' which is the mode of operation OVS
with DPDK has used up until this point - whereby OVS creates and manages vHost 
user sockets.
If set to 'client', OVS will act as the vHost client and connect to
sockets created and managed by QEMU which acts as the server. This mode
allows for reconnect capability, which allows vHost ports to resume
normal connectivity in event of switch reset.
QEMU v2.7.0+ is required when using OVS in client mode and QEMU in server mode.
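If your build already carries that patch, switching the mode would look like this (I stayed with the default server mode on 2.4.0, so this is untested on my side):
ovs-vsctl set Open_vSwitch . other_config:vhost_driver_mode=client
systemctl restart openvswitch.service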

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
  <seclabel type='none' model='none'/>
  <qemu:commandline>
    <qemu:arg value='-numa'/>
    <qemu:arg value='node,memdev=mem'/>
    <qemu:arg value='-mem-prealloc'/>
    <qemu:arg value='-object'/>
    <qemu:arg value='memory-backend-file,id=mem,size=4G,mem-path=/dev/hugepages,share=on'/>
    <qemu:arg value='-netdev'/>
    <qemu:arg value='vhost-user,id=hostnet1,chardev=vhost-user-1,vhostforce'/>
    <qemu:arg value='-device'/>
    <qemu:arg 
value='virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:00:01:01,mrg_rxbuf=on'/>
    <qemu:arg value='-chardev'/>
    <qemu:arg 
value='socket,id=vhost-user-1,path=/var/run/openvswitch/vhost-user-1'/>
    <qemu:arg value='-netdev'/>
    <qemu:arg value='vhost-user,id=hostnet2,chardev=vhost-user-2,vhostforce'/>
    <qemu:arg value='-device'/>
    <qemu:arg 
value='virtio-net-pci,netdev=hostnet2,id=net2,mac=52:54:00:00:01:02,mrg_rxbuf=on'/>
    <qemu:arg value='-chardev'/>
    <qemu:arg 
value='socket,id=vhost-user-2,path=/var/run/openvswitch/vhost-user-2'/>
  </qemu:commandline>
</domain>
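
For reference, outside of libvirt the same wiring can be passed directly to QEMU; a minimal sketch for the first port only, reusing the socket path and MAC from the XML above (the rest of the VM definition is omitted):
qemu-kvm ... \
  -object memory-backend-file,id=mem,size=4G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -chardev socket,id=vhost-user-1,path=/var/run/openvswitch/vhost-user-1 \
  -netdev vhost-user,id=hostnet1,chardev=vhost-user-1,vhostforce \
  -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:00:01:01,mrg_rxbuf=on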



To better scale workloads across cores, multiple PMD threads can be created and pinned to CPU cores by explicitly specifying pmd-cpu-mask.
E.g., to spawn 2 PMD threads and pin them to cores 1 and 2:
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=6
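The mask is just a bitmask of the core IDs, so for cores 1 and 2 it is (1<<1)|(1<<2) = 0x6. You can compute it in the shell:
printf '%x\n' $(( (1 << 1) | (1 << 2) ))     # prints 6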




https://bugzilla.redhat.com/show_bug.cgi?id=1359856
The summary says "-l 2,8" which means CPU#2 and CPU#8, so the mask is actually:
>>> print "%x" % ((1 << 2) | (1<< 8))
104

Therefore the correct command would be:
# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=104

Flavio explained the difference between the values and their usage; here are the conclusions:
The -l option is passed straight through from OVS to DPDK, so OVS itself doesn't know anything about --dpdk, -l or -c; what matters to OVS is other_config:pmd-cpu-mask.
So we need to specify either -l or -c for DPDK and pmd-cpu-mask for OVS.
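For my own core list (-l 1,18 from the DPDK_OPTIONS above), the matching pair would therefore be:
DPDK_OPTIONS="-l 1,18 -n 1 --socket-mem 8192,8192"
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=40002     # (1<<1)|(1<<18) = 0x40002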

In OVS 2.6 hopefully there will be only one parameter.




Best regards,
Nikolai

From: "Nikolai Pitaev (npitaev)" <npit...@cisco.com>
Date: Friday, 14 October 2016 at 02:00
To: "discuss@openvswitch.org" <discuss@openvswitch.org>
Subject: cannot create dpdkvhostuser on 2.4.0 under RHEL7.2 - CLI ovs-vsctl 
add-port hangs

Hi!

Originally I sent my question to dpdk-...@lists.01.org but got an autoreply and a suggestion to use this alias.
I was not able to find a solution to my problem in the mailing list archive.
My setup: RHEL 7.2 and OVS DPDK 2.4.0 – installed following the RHEL HOWTO:
https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/single/configure-dpdk-for-openstack-networking/

DPDK detects two Intel 10 Gig NICs, and the vfio-pci driver is loaded fine:
[root@caesar-rhel ~]# dpdk_nic_bind --status
Network devices using DPDK-compatible driver
============================================
0000:8f:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=vfio-pci 
unused=
0000:8f:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=vfio-pci 
unused=
…

OVS displays DPDK and DPDKvHostUser interfaces fine:
[root@caesar-rhel ~]# ovs-vsctl get Open_vSwitch . iface_types
[dpdk, dpdkr, dpdkvhostuser, geneve, gre, "gre64", internal, ipsec_gre, 
"ipsec_gre64", lisp, patch, stt, system, tap, vxlan]
[root@caesar-rhel ~]#

I can create a netdev bridge br0 and add the dpdk0 interface to it, so far life is good. ☺
[root@caesar-rhel ~]# ovs-vsctl show
afceb04c-555c-4878-b75c-f55881fbe5ee
    Bridge "br0"
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "br0"
            Interface "br0"
                type: internal
    ovs_version: "2.4.0"
[root@caesar-rhel ~]#

But I cannot create a dpdkvhostuser port using the ovs-vsctl CLI – if I execute the following, it hangs:
[root@caesar-rhel ~]# ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface 
dpdkvhostuser0 type=dpdkvhostuser

If I hit Ctrl+C on that frozen CLI, it gives the following:
[root@caesar-rhel ~]# ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface 
dpdkvhostuser0 type=dpdkvhostuser

^C2016-10-13T23:32:54Z|00002|fatal_signal|WARN|terminating with signal 2 
(Interrupt)

[root@caesar-rhel ~]#

Strangely, I can see dpdkvhostuser0 in the “ovs-vsctl show” output:
[root@caesar-rhel ~]# ovs-vsctl show
afceb04c-555c-4878-b75c-f55881fbe5ee
    Bridge "br0"
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "dpdkvhostuser0"
            Interface "dpdkvhostuser0"
                type: dpdkvhostuser
        Port "br0"
            Interface "br0"
                type: internal
    ovs_version: "2.4.0"
[root@caesar-rhel ~]#

But I do NOT see a dpdkvhostuser0 socket in the directory:
[root@caesar-rhel ~]# ls -la /var/run/openvswitch/
total 4
drwxrwxrwx.  2 root root  100 Oct 13  2016 .
drwxr-xr-x. 30 root root 1000 Oct 13 19:25 ..
srwx------.  1 root qemu    0 Oct 13  2016 db.sock
srwx------.  1 root qemu    0 Oct 13  2016 ovsdb-server.1492.ctl
-rw-r--r--.  1 root qemu    5 Oct 13  2016 ovsdb-server.pid
[root@caesar-rhel ~]#

I run qemu as NON-root:
[root@caesar-rhel ~]# grep "user =" /etc/libvirt/qemu.conf
#       user = "qemu"   # A user named "qemu"
#       user = "+0"     # Super user (uid=0)
#       user = "100"    # A user named "100" or a user with uid=100
#user = "root"
[root@caesar-rhel ~]#
[root@caesar-rhel ~]# grep "root" /etc/libvirt/qemu.conf
#user = "root"
#group = "root"
# Minimum must be greater than 0, however when QEMU is not running as root,
[root@caesar-rhel ~]#

Question: why can I not create dpdkvhostuser0, and why does the CLI command “ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser” hang?

One strange thing is hugepages – I defined 1 GB pages, but I see all of them as free:
[root@caesar-rhel ~]# grep HugePages_ /proc/meminfo
HugePages_Total:     248
HugePages_Free:      248
HugePages_Rsvd:        0
HugePages_Surp:        0
[root@caesar-rhel ~]#
I would expect that 2x8 GB would be used by OVS, so I should see 232 free pages, not 248.

Open vSwitch config:
[root@caesar-rhel ~]# more /etc/sysconfig/openvswitch
#
#OPTIONS=""
DPDK_OPTIONS="--dpdk -c 0x1 -n 4 --socket-mem 8192,8192"
[root@caesar-rhel ~]#

I am looking at the right folder – defined at the beginning with
ovs-vsctl --no-wait set Open_vSwitch . 
other_config:vhost-sock-dir=/var/run/openvswitch/

My bridge is netdev:
[root@caesar-rhel ~]# ovs-vsctl list bridge br0
_uuid               : 8c0f7a30-14b4-4a9f-8182-4e1d0c84ae6d
auto_attach         : []
controller          : []
datapath_id         : []
datapath_type       : netdev
datapath_version    : "<built-in>"
external_ids        : {}
fail_mode           : []
flood_vlans         : []
flow_tables         : {}
ipfix               : []
mcast_snooping_enable: false
mirrors             : []
name                : "br0"
netflow             : []
other_config        : {}
ports               : [466b9b80-8542-408d-8365-2da87186da0a, 
488c60eb-40eb-499a-a950-d72ef6309acc, 7f39579b-0847-4965-ac9e-2b545998a570]
protocols           : []
rstp_enable         : false
rstp_status         : {}
sflow               : []
status              : {}
stp_enable          : false
[root@caesar-rhel ~]#

OVS-DPDK rpm installed:
[root@caesar-rhel ~]# yum info openvswitch-dpdk
Loaded plugins: langpacks, product-id, search-disabled-repos, 
subscription-manager
Installed Packages
Name        : openvswitch-dpdk
Arch        : x86_64
Version     : 2.4.0
Release     : 0.10346.git97bab959.3.el7_2
Size        : 11 M
Repo        : installed
From repo   : rhel-7-server-openstack-8-rpms
Summary     : Open vSwitch
URL         : http://www.openvswitch.org/
License     : ASL 2.0 and LGPLv2+ and SISSL
Description : Open vSwitch provides standard network bridging functions and
            : support for the OpenFlow protocol for remote per-flow control of
            : traffic.

[root@caesar-rhel ~]#

I have two NUMA sockets:
[root@caesar-rhel ~]# lstopo-no-graphics
Machine (256GB)
  NUMANode L#0 (P#0 128GB)
    Socket L#0 + L3 L#0 (45MB)
      L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
      L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
      L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#2)
      L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#3)
      L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#4)
      L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#5)
      L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#6)
      L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#7)
      L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#8)
      L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#9)
      L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 
(P#10)
      L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 
(P#11)
      L2 L#12 (256KB) + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12 + PU L#12 
(P#12)
      L2 L#13 (256KB) + L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13 + PU L#13 
(P#13)
      L2 L#14 (256KB) + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14 + PU L#14 
(P#14)
      L2 L#15 (256KB) + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15 + PU L#15 
(P#15)
      L2 L#16 (256KB) + L1d L#16 (32KB) + L1i L#16 (32KB) + Core L#16 + PU L#16 
(P#16)
      L2 L#17 (256KB) + L1d L#17 (32KB) + L1i L#17 (32KB) + Core L#17 + PU L#17 
(P#17)
    HostBridge L#0
      PCIBridge
        PCIBridge
          PCIBridge
            PCIBridge
              PCIBridge
                PCI 1137:0043
                  Net L#0 "enp7s0"
              PCIBridge
                PCI 1137:0043
                  Net L#1 "enp8s0"
      PCIBridge
        PCI 1000:005d
          Block L#2 "sda"
      PCI 8086:8d62
      PCIBridge
        PCI 102b:0522
          GPU L#3 "card0"
          GPU L#4 "controlD64"
      PCIBridge
        PCI 8086:1521
          Net L#5 "enp15s0f0"
        PCI 8086:1521
          Net L#6 "enp15s0f1"
      PCI 8086:8d02
  NUMANode L#1 (P#1 128GB)
    Socket L#1 + L3 L#1 (45MB)
      L2 L#18 (256KB) + L1d L#18 (32KB) + L1i L#18 (32KB) + Core L#18 + PU L#18 
(P#18)
      L2 L#19 (256KB) + L1d L#19 (32KB) + L1i L#19 (32KB) + Core L#19 + PU L#19 
(P#19)
      L2 L#20 (256KB) + L1d L#20 (32KB) + L1i L#20 (32KB) + Core L#20 + PU L#20 
(P#20)
      L2 L#21 (256KB) + L1d L#21 (32KB) + L1i L#21 (32KB) + Core L#21 + PU L#21 
(P#21)
      L2 L#22 (256KB) + L1d L#22 (32KB) + L1i L#22 (32KB) + Core L#22 + PU L#22 
(P#22)
      L2 L#23 (256KB) + L1d L#23 (32KB) + L1i L#23 (32KB) + Core L#23 + PU L#23 
(P#23)
      L2 L#24 (256KB) + L1d L#24 (32KB) + L1i L#24 (32KB) + Core L#24 + PU L#24 
(P#24)
      L2 L#25 (256KB) + L1d L#25 (32KB) + L1i L#25 (32KB) + Core L#25 + PU L#25 
(P#25)
      L2 L#26 (256KB) + L1d L#26 (32KB) + L1i L#26 (32KB) + Core L#26 + PU L#26 
(P#26)
      L2 L#27 (256KB) + L1d L#27 (32KB) + L1i L#27 (32KB) + Core L#27 + PU L#27 
(P#27)
      L2 L#28 (256KB) + L1d L#28 (32KB) + L1i L#28 (32KB) + Core L#28 + PU L#28 
(P#28)
      L2 L#29 (256KB) + L1d L#29 (32KB) + L1i L#29 (32KB) + Core L#29 + PU L#29 
(P#29)
      L2 L#30 (256KB) + L1d L#30 (32KB) + L1i L#30 (32KB) + Core L#30 + PU L#30 
(P#30)
      L2 L#31 (256KB) + L1d L#31 (32KB) + L1i L#31 (32KB) + Core L#31 + PU L#31 
(P#31)
      L2 L#32 (256KB) + L1d L#32 (32KB) + L1i L#32 (32KB) + Core L#32 + PU L#32 
(P#32)
      L2 L#33 (256KB) + L1d L#33 (32KB) + L1i L#33 (32KB) + Core L#33 + PU L#33 
(P#33)
      L2 L#34 (256KB) + L1d L#34 (32KB) + L1i L#34 (32KB) + Core L#34 + PU L#34 
(P#34)
      L2 L#35 (256KB) + L1d L#35 (32KB) + L1i L#35 (32KB) + Core L#35 + PU L#35 
(P#35)
    HostBridge L#10
      PCIBridge
        PCI 8086:1521
          Net L#7 "enp132s0f0"
        PCI 8086:1521
          Net L#8 "enp132s0f1"
        PCI 8086:1521
          Net L#9 "enp132s0f2"
        PCI 8086:1521
          Net L#10 "enp132s0f3"
      PCIBridge
        PCIBridge
          PCIBridge
            PCIBridge
              PCIBridge
                PCI 1137:0043
                  Net L#11 "enp139s0"
              PCIBridge
                PCI 1137:0043
                  Net L#12 "enp140s0"
      PCIBridge
        PCI 8086:10fb
        PCI 8086:10fb
[root@caesar-rhel ~]#


Thanks,
Nikolai