Hello Billy,

Thanks for your suggestion; it solved my problem.

A further question: I did notice that this NIC is attached to NUMA node 1 (my 
second NUMA node).
Hence, I tried to set socket-mem with the following commands:
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="0,1024", or
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=1024, or
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
None of these commands solved the problem.
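One thing worth checking is whether the reserved hugepages actually ended up on 
each node. A diagnostic sketch, assuming 2 MB pages and the standard Linux sysfs 
layout:

```shell
# Show free 2 MB hugepages on every NUMA node
for n in /sys/devices/system/node/node*; do
  echo "$(basename "$n"): $(cat "$n"/hugepages/hugepages-2048kB/free_hugepages) free 2MB hugepages"
done
```

If node1 reports zero free pages, dpdk-socket-mem cannot allocate anything there 
no matter what value is passed.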

Could you please tell me what the difference is between your setting (512,512) 
and mine (1024,1024)?

Finally, thanks again for your help. Once things are stable, I'll adjust 
pmd-cpu-mask for performance.

Best Regards,

Tcnieh


-----Original Message-----
From: O Mahony, Billy [mailto:[email protected]] 
Sent: Tuesday, August 28, 2018 5:09 PM
To: [email protected]; [email protected]
Subject: RE: [ovs-discuss] Requested device cannot be used

Hi Tcnieh,

 

Looks like your NICs are on NUMA1 (the second NUMA node) – their PCI bus number 
is > 80.

 

But you have not told OvS to allocate hugepage memory on the second NUMA node – 
that is the 0 in "--socket-mem 1024,0".

 

So you need to change your line to something like:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="512,512"

 

to have Hugepages available on both nodes.

 

Also, you have allocated just a single core (core 0) for DPDK PMDs; it is also 
unusual to allocate core zero. That should work, but with reduced performance, 
as the PMD (on NUMA0) will have to access the packet data on NUMA1.

 

Have a look at your CPU topology and modify your core mask to also allocate a 
core from NUMA1.
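A sketch of building such a mask – the core numbers below are hypothetical; 
check lscpu or numactl -H to see which cores sit on which node on your box:

```shell
# Hypothetical: core 1 on NUMA0 plus core 17 on NUMA1 (verify with lscpu first)
mask=$(( (1 << 1) | (1 << 17) ))
printf '0x%x\n' "$mask"    # -> 0x20002
# then: ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=20002
```

Each set bit in the mask pins one PMD thread to that core number.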

 

The details are in the docs: Documentation/topics/dpdk/* and 
Documentation/howto/dpdk.rst.

 

Regards,

Billy

 

 

From: [email protected] 
[mailto:[email protected]] On Behalf Of ???
Sent: Tuesday, August 28, 2018 3:37 AM
To: [email protected]
Subject: [ovs-discuss] Requested device cannot be used

 

Hello all,
  I am trying to measure the performance of an Intel X520 10G NIC on Dell 
R630/R730 servers, but I keep getting an unexpected error; please see below.
 
I followed the instructions at https://goo.gl/T7iTuk to compile the DPDK and 
OVS code. I've successfully bound both of my X520 NIC ports to DPDK, using 
either igb_uio or vfio-pci:
 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Network devices using DPDK-compatible driver 
============================================
0000:82:00.0 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=vfio-pci
0000:82:00.1 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=vfio-pci
 
Network devices using kernel driver
===================================
0000:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3 
unused=igb_uio,vfio-pci
0000:01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno2 drv=tg3 
unused=igb_uio,vfio-pci
0000:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno3 drv=tg3 
unused=igb_uio,vfio-pci
0000:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno4 drv=tg3 
unused=igb_uio,vfio-pci *Active*
 
Other Network devices
=====================
<none>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
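For reference, the binding above is typically done with the dpdk-devbind.py 
script that ships with DPDK – a sketch; the script lives under usertools/ in 
recent releases (tools/ in older ones), and the PCI addresses are taken from 
the status output above:

```shell
# Bind both X520 ports to the igb_uio driver, then confirm
./usertools/dpdk-devbind.py --bind=igb_uio 0000:82:00.0 0000:82:00.1
./usertools/dpdk-devbind.py --status
```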
 
And hugepages were set to 2048 x 2 MB:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
HugePages_Total:    2048
HugePages_Free:     1024
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
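As an aside, hugepages can also be reserved per NUMA node explicitly through 
sysfs – a sketch assuming 2 MB pages, two nodes, and root privileges. Writing 
the global nr_hugepages instead leaves it to the kernel to decide how the pages 
are split across nodes:

```shell
# Reserve 1024 x 2MB hugepages on each NUMA node explicitly (run as root)
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
grep -i hugepages /proc/meminfo
```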
 
Here comes the problem: when I started ovsdb-server and ovs-vswitchd, I got the 
following error:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   2018-08-27T09:54:05.548Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA 
node 0
   2018-08-27T09:54:05.548Z|00003|ovs_numa|INFO|Discovered 16 CPU cores on NUMA 
node 1
   2018-08-27T09:54:05.548Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 
CPU cores
   2018-08-27T09:54:05.548Z|00005|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connecting...
   2018-08-27T09:54:05.549Z|00006|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connected
   2018-08-27T09:54:05.552Z|00007|dpdk|INFO|DPDK Enabled - initializing...
   2018-08-27T09:54:05.552Z|00008|dpdk|INFO|No vhost-sock-dir provided - 
defaulting to /usr/local/var/run/openvswitch
   2018-08-27T09:54:05.552Z|00009|dpdk|INFO|EAL ARGS: ovs-vswitchd --socket-mem 
1024,0 -c 0x00000001
   2018-08-27T09:54:05.553Z|00010|dpdk|INFO|EAL: Detected 32 lcore(s)
   2018-08-27T09:54:05.558Z|00011|dpdk|WARN|EAL: No free hugepages reported in 
hugepages-1048576kB
   2018-08-27T09:54:05.559Z|00012|dpdk|INFO|EAL: Probing VFIO support...
   2018-08-27T09:54:06.700Z|00013|dpdk|INFO|EAL: PCI device 0000:82:00.0 on 
NUMA socket 1
   2018-08-27T09:54:06.700Z|00014|dpdk|INFO|EAL:   probe driver: 8086:154d 
net_ixgbe
2018-08-27T09:54:06.700Z|00015|dpdk|ERR|EAL: Requested device 0000:82:00.0 
cannot be used
   2018-08-27T09:54:06.700Z|00016|dpdk|INFO|EAL: PCI device 0000:82:00.1 on 
NUMA socket 1
   2018-08-27T09:54:06.700Z|00017|dpdk|INFO|EAL:   probe driver: 8086:154d 
net_ixgbe
2018-08-27T09:54:06.700Z|00018|dpdk|ERR|EAL: Requested device 0000:82:00.1 
cannot be used
   2018-08-27T09:54:06.701Z|00019|dpdk|INFO|DPDK Enabled - initialized
   2018-08-27T09:54:06.705Z|00020|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports recirculation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
Consequently, I got the same error when I added a DPDK port:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2018-08-27T09:54:06.709Z|00036|dpdk|INFO|EAL: PCI device 0000:82:00.0 on NUMA 
socket 1
2018-08-27T09:54:06.709Z|00037|dpdk|INFO|EAL:   probe driver: 8086:154d 
net_ixgbe
2018-08-27T09:54:06.710Z|00038|dpdk|WARN|EAL: Requested device 0000:82:00.0 
cannot be used
2018-08-27T09:54:06.710Z|00039|dpdk|ERR|EAL: Driver cannot attach the device 
(0000:82:00.0)
2018-08-27T09:54:06.710Z|00040|netdev_dpdk|WARN|Error attaching device 
'0000:82:00.0' to DPDK
2018-08-27T09:54:06.710Z|00041|netdev|WARN|dpdk0: could not set configuration 
(Invalid argument)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 
I've tried a solution described in https://goo.gl/3opVRT, which uses 
"uio_pci_generic" and disables intel_iommu. It didn't work for me.
 
Here is detailed info about my test platform:
DPDK & OVS versions: DPDK 16.11 & OVS 2.7.0, DPDK 17.05.1 & OVS 2.8.0, DPDK 
17.11 & OVS 2.9.0, DPDK 17.11 & OVS 2.10.0
OS: Ubuntu 16.04
Hardware: Dell R730/R630 servers with an Intel X520 10G NIC, 128 GB memory, 32 
cores.
 
Can anybody help or give me a hint on how to debug this? I'm totally lost here.

 

--

Sincerely,

Tcnieh


_______________________________________________
discuss mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
