Hi!

Originally I sent my question to dpdk-...@lists.01.org, but I got an autoreply suggesting that I use this alias instead.
I was not able to find a solution to my problem in the mailing list archive.
My setup: RHEL 7.2 and OVS DPDK 2.4.0, installed following the RHEL HOWTO:
https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/single/configure-dpdk-for-openstack-networking/

DPDK detects the two Intel 10 Gigabit NICs, and the vfio-pci driver is loaded fine:
[root@caesar-rhel ~]# dpdk_nic_bind --status
Network devices using DPDK-compatible driver
============================================
0000:8f:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=vfio-pci unused=
0000:8f:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=vfio-pci unused=
…
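
(For reference, this is roughly how the ports were bound; a sketch assuming the DPDK 2.x dpdk_nic_bind tool and the PCI addresses above:)

# load the vfio-pci module, then bind both 82599ES ports to it
modprobe vfio-pci
dpdk_nic_bind --bind=vfio-pci 0000:8f:00.0 0000:8f:00.1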

OVS lists the dpdk and dpdkvhostuser interface types fine:
[root@caesar-rhel ~]# ovs-vsctl get Open_vSwitch . iface_types
[dpdk, dpdkr, dpdkvhostuser, geneve, gre, "gre64", internal, ipsec_gre, "ipsec_gre64", lisp, patch, stt, system, tap, vxlan]
[root@caesar-rhel ~]#

I can create a netdev bridge br0 and add a dpdk0 interface to it; so far, life is good. ☺
[root@caesar-rhel ~]# ovs-vsctl show
afceb04c-555c-4878-b75c-f55881fbe5ee
    Bridge "br0"
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "br0"
            Interface "br0"
                type: internal
    ovs_version: "2.4.0"
[root@caesar-rhel ~]#
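
(For context, the bridge and the dpdk0 port were created roughly like this, per the HOWTO:)

# create a userspace (netdev) bridge and attach the first DPDK port
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk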

But I cannot create a dpdkvhostuser port with the ovs-vsctl CLI; the following command hangs:
[root@caesar-rhel ~]# ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser

If I press Ctrl+C on the hung command, it prints the following:
[root@caesar-rhel ~]# ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser

^C2016-10-13T23:32:54Z|00002|fatal_signal|WARN|terminating with signal 2 (Interrupt)

[root@caesar-rhel ~]#
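
(To narrow down the hang, I can check whether ovs-vswitchd itself is running and watch its log while re-running the command; a sketch assuming the default RHEL log location:)

# is the main daemon alive?
ps -C ovs-vswitchd -o pid,args
# follow the daemon log while the add-port command is retried
tail -f /var/log/openvswitch/ovs-vswitchd.log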

Strangely, I do see dpdkvhostuser0 in the “ovs-vsctl show” output:
[root@caesar-rhel ~]# ovs-vsctl show
afceb04c-555c-4878-b75c-f55881fbe5ee
    Bridge "br0"
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "dpdkvhostuser0"
            Interface "dpdkvhostuser0"
                type: dpdkvhostuser
        Port "br0"
            Interface "br0"
                type: internal
    ovs_version: "2.4.0"
[root@caesar-rhel ~]#

But I do NOT see a dpdkvhostuser0 socket in the directory:
[root@caesar-rhel ~]# ls -la /var/run/openvswitch/
total 4
drwxrwxrwx.  2 root root  100 Oct 13  2016 .
drwxr-xr-x. 30 root root 1000 Oct 13 19:25 ..
srwx------.  1 root qemu    0 Oct 13  2016 db.sock
srwx------.  1 root qemu    0 Oct 13  2016 ovsdb-server.1492.ctl
-rw-r--r--.  1 root qemu    5 Oct 13  2016 ovsdb-server.pid
[root@caesar-rhel ~]#
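
(Normally ovs-vswitchd's pid file and control socket would also live in this directory; a hedged way to confirm the daemon is reachable at all:)

# ask the running ovs-vswitchd for its version over its control socket
ovs-appctl version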

I run qemu as a NON-root user:
[root@caesar-rhel ~]# grep "user =" /etc/libvirt/qemu.conf
#       user = "qemu"   # A user named "qemu"
#       user = "+0"     # Super user (uid=0)
#       user = "100"    # A user named "100" or a user with uid=100
#user = "root"
[root@caesar-rhel ~]#
[root@caesar-rhel ~]# grep "root" /etc/libvirt/qemu.conf
#user = "root"
#group = "root"
# Minimum must be greater than 0, however when QEMU is not running as root,
[root@caesar-rhel ~]#
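
(Since qemu runs unprivileged, once the vhost-user socket does get created it will also need permissions qemu can use; a hedged sketch with a hypothetical socket path:)

# hypothetical: grant the qemu group access to the socket once it exists
chown root:qemu /var/run/openvswitch/dpdkvhostuser0
chmod g+rw /var/run/openvswitch/dpdkvhostuser0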

Question: why can I not create dpdkvhostuser0, and why does “ovs-vsctl add-port br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser” hang?

One strange thing is hugepages: I configured 1 GB hugepages, but I see all of them as free:
[root@caesar-rhel ~]# grep HugePages_ /proc/meminfo
HugePages_Total:     248
HugePages_Free:      248
HugePages_Rsvd:        0
HugePages_Surp:        0
[root@caesar-rhel ~]#
I would expect 2 x 8 GB to be used by OVS, so I should see 232 free pages, not 248!
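
(To rule out a per-node imbalance, the 1 GB page counts can also be checked per NUMA node in sysfs; a sketch, assuming 1 GB pages as above:)

# total and free 1 GB pages on each NUMA node
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/free_hugepages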

Open vSwitch config:
[root@caesar-rhel ~]# more /etc/sysconfig/openvswitch
#
#OPTIONS=""
DPDK_OPTIONS="--dpdk -c 0x1 -n 4 --socket-mem 8192,8192"
[root@caesar-rhel ~]#
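
(With 1 GB pages, --socket-mem 8192,8192 should pin 8 pages on each node as soon as the DPDK EAL initializes, which matches my 232-free expectation above; if none are pinned, my understanding is the EAL may never have initialized. A hedged way to verify that ovs-vswitchd was actually started with these DPDK_OPTIONS:)

# show the live command line of the daemon; the --dpdk arguments should appear
ps -C ovs-vswitchd -o args=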

I am looking at the right folder; it was defined at the beginning with:
ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-sock-dir=/var/run/openvswitch/
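
(To double-check that the option really made it into the database, a sketch:)

# dump other_config on the root record; vhost-sock-dir should be listed
ovs-vsctl get Open_vSwitch . other_config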

My bridge uses the netdev datapath:
[root@caesar-rhel ~]# ovs-vsctl list bridge br0
_uuid               : 8c0f7a30-14b4-4a9f-8182-4e1d0c84ae6d
auto_attach         : []
controller          : []
datapath_id         : []
datapath_type       : netdev
datapath_version    : "<built-in>"
external_ids        : {}
fail_mode           : []
flood_vlans         : []
flow_tables         : {}
ipfix               : []
mcast_snooping_enable: false
mirrors             : []
name                : "br0"
netflow             : []
other_config        : {}
ports               : [466b9b80-8542-408d-8365-2da87186da0a, 488c60eb-40eb-499a-a950-d72ef6309acc, 7f39579b-0847-4965-ac9e-2b545998a570]
protocols           : []
rstp_enable         : false
rstp_status         : {}
sflow               : []
status              : {}
stp_enable          : false
[root@caesar-rhel ~]#

Installed OVS-DPDK RPM:
[root@caesar-rhel ~]# yum info openvswitch-dpdk
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Installed Packages
Name        : openvswitch-dpdk
Arch        : x86_64
Version     : 2.4.0
Release     : 0.10346.git97bab959.3.el7_2
Size        : 11 M
Repo        : installed
From repo   : rhel-7-server-openstack-8-rpms
Summary     : Open vSwitch
URL         : http://www.openvswitch.org/
License     : ASL 2.0 and LGPLv2+ and SISSL
Description : Open vSwitch provides standard network bridging functions and
            : support for the OpenFlow protocol for remote per-flow control of
            : traffic.

[root@caesar-rhel ~]#

I have two NUMA sockets:
[root@caesar-rhel ~]# lstopo-no-graphics
Machine (256GB)
  NUMANode L#0 (P#0 128GB)
    Socket L#0 + L3 L#0 (45MB)
      L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
      L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
      L2 L#2 (256KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#2)
      L2 L#3 (256KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#3)
      L2 L#4 (256KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#4)
      L2 L#5 (256KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#5)
      L2 L#6 (256KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#6)
      L2 L#7 (256KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#7)
      L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#8)
      L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#9)
      L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 (P#10)
      L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 (P#11)
      L2 L#12 (256KB) + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12 + PU L#12 (P#12)
      L2 L#13 (256KB) + L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13 + PU L#13 (P#13)
      L2 L#14 (256KB) + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14 + PU L#14 (P#14)
      L2 L#15 (256KB) + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15 + PU L#15 (P#15)
      L2 L#16 (256KB) + L1d L#16 (32KB) + L1i L#16 (32KB) + Core L#16 + PU L#16 (P#16)
      L2 L#17 (256KB) + L1d L#17 (32KB) + L1i L#17 (32KB) + Core L#17 + PU L#17 (P#17)
    HostBridge L#0
      PCIBridge
        PCIBridge
          PCIBridge
            PCIBridge
              PCIBridge
                PCI 1137:0043
                  Net L#0 "enp7s0"
              PCIBridge
                PCI 1137:0043
                  Net L#1 "enp8s0"
      PCIBridge
        PCI 1000:005d
          Block L#2 "sda"
      PCI 8086:8d62
      PCIBridge
        PCI 102b:0522
          GPU L#3 "card0"
          GPU L#4 "controlD64"
      PCIBridge
        PCI 8086:1521
          Net L#5 "enp15s0f0"
        PCI 8086:1521
          Net L#6 "enp15s0f1"
      PCI 8086:8d02
  NUMANode L#1 (P#1 128GB)
    Socket L#1 + L3 L#1 (45MB)
      L2 L#18 (256KB) + L1d L#18 (32KB) + L1i L#18 (32KB) + Core L#18 + PU L#18 (P#18)
      L2 L#19 (256KB) + L1d L#19 (32KB) + L1i L#19 (32KB) + Core L#19 + PU L#19 (P#19)
      L2 L#20 (256KB) + L1d L#20 (32KB) + L1i L#20 (32KB) + Core L#20 + PU L#20 (P#20)
      L2 L#21 (256KB) + L1d L#21 (32KB) + L1i L#21 (32KB) + Core L#21 + PU L#21 (P#21)
      L2 L#22 (256KB) + L1d L#22 (32KB) + L1i L#22 (32KB) + Core L#22 + PU L#22 (P#22)
      L2 L#23 (256KB) + L1d L#23 (32KB) + L1i L#23 (32KB) + Core L#23 + PU L#23 (P#23)
      L2 L#24 (256KB) + L1d L#24 (32KB) + L1i L#24 (32KB) + Core L#24 + PU L#24 (P#24)
      L2 L#25 (256KB) + L1d L#25 (32KB) + L1i L#25 (32KB) + Core L#25 + PU L#25 (P#25)
      L2 L#26 (256KB) + L1d L#26 (32KB) + L1i L#26 (32KB) + Core L#26 + PU L#26 (P#26)
      L2 L#27 (256KB) + L1d L#27 (32KB) + L1i L#27 (32KB) + Core L#27 + PU L#27 (P#27)
      L2 L#28 (256KB) + L1d L#28 (32KB) + L1i L#28 (32KB) + Core L#28 + PU L#28 (P#28)
      L2 L#29 (256KB) + L1d L#29 (32KB) + L1i L#29 (32KB) + Core L#29 + PU L#29 (P#29)
      L2 L#30 (256KB) + L1d L#30 (32KB) + L1i L#30 (32KB) + Core L#30 + PU L#30 (P#30)
      L2 L#31 (256KB) + L1d L#31 (32KB) + L1i L#31 (32KB) + Core L#31 + PU L#31 (P#31)
      L2 L#32 (256KB) + L1d L#32 (32KB) + L1i L#32 (32KB) + Core L#32 + PU L#32 (P#32)
      L2 L#33 (256KB) + L1d L#33 (32KB) + L1i L#33 (32KB) + Core L#33 + PU L#33 (P#33)
      L2 L#34 (256KB) + L1d L#34 (32KB) + L1i L#34 (32KB) + Core L#34 + PU L#34 (P#34)
      L2 L#35 (256KB) + L1d L#35 (32KB) + L1i L#35 (32KB) + Core L#35 + PU L#35 (P#35)
    HostBridge L#10
      PCIBridge
        PCI 8086:1521
          Net L#7 "enp132s0f0"
        PCI 8086:1521
          Net L#8 "enp132s0f1"
        PCI 8086:1521
          Net L#9 "enp132s0f2"
        PCI 8086:1521
          Net L#10 "enp132s0f3"
      PCIBridge
        PCIBridge
          PCIBridge
            PCIBridge
              PCIBridge
                PCI 1137:0043
                  Net L#11 "enp139s0"
              PCIBridge
                PCI 1137:0043
                  Net L#12 "enp140s0"
      PCIBridge
        PCI 8086:10fb
        PCI 8086:10fb
[root@caesar-rhel ~]#
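
(Given the topology above, the two 8086:10fb devices, i.e. my 82599ES ports at 0000:8f:00.x, sit under NUMANode L#1. A sketch to confirm their node affinity directly:)

# NUMA node each DPDK port is attached to; 1 is expected from the topology
cat /sys/bus/pci/devices/0000:8f:00.0/numa_node
cat /sys/bus/pci/devices/0000:8f:00.1/numa_node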


Thanks,
Nikolai