Hi Folks,
I am trying to run dpdk-testpmd on a Dell PowerEdge R730 server with
a Mellanox ConnectX-4 Lx NIC. I can bind the ports to the vfio-pci driver:
root@dut3:~# dpdk-devbind.py --status
Network devices using DPDK-compatible driver
============================================
0000:05:00.0 'MT27710 Family [ConnectX-4 Lx] 1015' drv=vfio-pci
unused=mlx5_core,uio_pci_generic
0000:05:00.1 'MT27710 Family [ConnectX-4 Lx] 1015' drv=vfio-pci
unused=mlx5_core,uio_pci_generic
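For reference, the binding was done roughly like this (quoting from memory,
so the exact invocation may have differed slightly):
# load the vfio-pci module and bind both ConnectX-4 Lx ports to it
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:05:00.0 0000:05:00.1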
But dpdk-testpmd prints the following messages:
root@dut3:~# dpdk-testpmd
EAL: Detected 16 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:05:00.0 (socket 0)
mlx5_pci: no Verbs device matches PCI device 0000:05:00.0, are kernel
drivers loaded?
common_mlx5: Failed to load driver = mlx5_pci.
EAL: Requested device 0000:05:00.0 cannot be used
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:05:00.1 (socket 0)
mlx5_pci: no Verbs device matches PCI device 0000:05:00.1, are kernel
drivers loaded?
common_mlx5: Failed to load driver = mlx5_pci.
EAL: Requested device 0000:05:00.1 cannot be used
EAL: Probe PCI driver: net_mlx4 (15b3:1007) device: 0000:82:00.0 (socket 1)
testpmd: create a new mbuf pool <mb_pool_0>: n=235456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=235456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
net_mlx4: 0x55e85e1b4b40: cannot attach flow rules (code 95, "Operation
not supported"), flow error type 2, cause 0x220040d200, message: flow
rule rejected by device
Fail to start port 0
Configuring Port 1 (socket 1)
net_mlx4: 0x55e85e1b8c00: cannot attach flow rules (code 95, "Operation
not supported"), flow error type 2, cause 0x220040cd80, message: flow
rule rejected by device
Fail to start port 1
Please stop the ports first
Done
No commandline core given, start packet forwarding
Not all ports were started
Press enter to exit
It seems that indeed there is no such driver as "mlx5_pci": the 'find
/ -name "*mlx5_pci*"' command gives no results.
Do I need to install something?
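Since the error message complains that no Verbs device matches the PCI
device, I suppose the right thing to check is whether the Verbs user-space
bits are installed and whether any Verbs device is visible at all. I assume
something like the following would show that (ibv_devices is in the
ibverbs-utils package, if I am not mistaken):
# check which rdma/verbs related packages are installed
dpkg -l | grep -E 'rdma-core|ibverbs'
# list the Verbs devices visible from user space
ibv_devices
Is this the right direction, or is something else missing?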
I tried searching Google for 'common_mlx5: Failed to load driver =
mlx5_pci.', but the hits did not help. As for the first hit (
https://inbox.dpdk.org/users/CAE4=ssdsn7_cfmos5zf-3feblhn2af8+twjco2t4xadnwtv...@mail.gmail.com/T/
), the result of the check recommended there is the following:
root@dut3:~# lsmod | grep mlx5_
mlx5_ib 385024 0
ib_uverbs 167936 3 mlx4_ib,rdma_ucm,mlx5_ib
ib_core 413696 11
rdma_cm,ib_ipoib,rpcrdma,mlx4_ib,iw_cm,ib_iser,ib_umad,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
mlx5_core 1183744 1 mlx5_ib
mlxfw 32768 1 mlx5_core
ptp 32768 4 igb,mlx4_en,mlx5_core,ixgbe
pci_hyperv_intf 16384 1 mlx5_core
The software versions are as follows:
root@dut3:~# cat /etc/debian_version
11.9
root@dut3:~# uname -r
5.10.0-27-amd64
root@dut3:~# apt list dpdk
Listing... Done
dpdk/oldstable,now 20.11.10-1~deb11u1 amd64 [installed]
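If it helps, I can also check whether the mlx5 PMD shared objects are
actually shipped with the Debian dpdk packages; I assume something like
this would find them (the exact library path may differ):
# look for the mlx5 PMD libraries (librte_net_mlx5 / librte_common_mlx5) of DPDK 20.11
find /usr/lib -name 'librte_*mlx5*'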
I also tried using uio_pci_generic instead of vfio-pci, and the result
was the same.
However, everything works fine with an X540-AT2 NIC.
Please advise me on how to resolve this issue!
As you may have noticed, there is another Mellanox NIC in the server,
but with that one the situation is even worse: the two ports share the
same PCI address, and thus I cannot bind a DPDK-compatible driver to its
second port. Here is the dpdk-devbind output for that card:
0000:82:00.0 'MT27520 Family [ConnectX-3 Pro] 1007'
if=enp130s0d1,enp130s0 drv=mlx4_core unused=vfio-pci,uio_pci_generic
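In case it is relevant: as the output above shows, both kernel interfaces
belong to that single PCI function. I suppose this can also be seen with
something like:
# both netdevs (enp130s0 and enp130s0d1) should show up under the one PCI function
ls /sys/bus/pci/devices/0000:82:00.0/net/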
Your advice for resolving that issue would also be helpful to me. :-)
Thank you very much in advance!
Best regards,
Gábor Lencse