Re: ConnectX5 Setup with DPDK
21/02/2022 21:10, Aaron Lee:
> Hi Thomas,
>
> Actually I remembered in my previous setup I had run dpdk-devbind.py to
> bind the mlx5 NIC to igb_uio. I read somewhere that you don't need to do
> this and just wanted to confirm that this is correct.

Indeed, the mlx5 PMD runs on top of the mlx5 kernel driver. We don't need UIO or VFIO drivers. The kernel modules must remain loaded and can be used at the same time. When DPDK is running, the traffic goes to the userspace PMD by default, but it is possible to configure some flows to go directly to the kernel driver. This behaviour is called the "bifurcated model".

> On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee wrote:
> > Hi Thomas,
> >
> > I tried installing things from scratch two days ago and have gotten
> > things working! I think part of the problem was figuring out the correct
> > hugepage allocation for my system. If I recall correctly, I tried setting
> > up my system with default page size 1G but perhaps didn't have enough pages
> > allocated at the time. Currently I have the following, which gives me the
> > output you've shown previously.
> >
> > root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
> > Node Pages Size Total
> > 0    16    1Gb  16Gb
> > 1    16    1Gb  16Gb
> >
> > root@yeti-04:~/dpdk-21.11# echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i
> > EAL: Detected CPU lcores: 80
> > EAL: Detected NUMA nodes: 2
> > EAL: Detected static linkage of DPDK
> > EAL: Selected IOVA mode 'PA'
> > EAL: No free 2048 kB hugepages reported on node 0
> > EAL: No free 2048 kB hugepages reported on node 1
> > EAL: No available 2048 kB hugepages reported
> > EAL: VFIO support initialized
> > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> > TELEMETRY: No legacy callbacks, legacy socket not created
> > Interactive-mode selected
> > testpmd: create a new mbuf pool : n=779456, size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool : n=779456, size=2176, socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> >
> > Warning! port-topology=paired and odd forward ports number, the last port
> > will pair with itself.
> >
> > Configuring Port 0 (socket 1)
> > Port 0: EC:0D:9A:68:21:A8
> > Checking link statuses...
> > Done
> > testpmd> show port summary all
> > Number of available ports: 1
> > Port MAC Address       Name         Driver   Status Link
> > 0    EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci up     100 Gbps
> >
> > Best,
> > Aaron
> >
> > On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon wrote:
> >> 21/02/2022 19:52, Thomas Monjalon:
> >> > 18/02/2022 22:12, Aaron Lee:
> >> > > Hello,
> >> > >
> >> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> >> > > wondering if the card I have simply isn't compatible. I first noticed that
> >> > > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error
> >> > > logs when running dpdk-pdump.
> >> >
> >> > When testing a NIC, it is more convenient to use dpdk-testpmd.
> >> > > >> > > EAL: Detected CPU lcores: 80 > >> > > EAL: Detected NUMA nodes: 2 > >> > > EAL: Detected static linkage of DPDK > >> > > EAL: Multi-process socket > >> /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92 > >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such > >> file or > >> > > directory > >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp > >> > > vdev_scan(): Failed to request vdev from primary > >> > > EAL: Selected IOVA mode 'PA' > >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such > >> file or > >> > > directory > >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync > >> > > EAL: Cannot request default VFIO container fd > >> > > EAL: VFIO support could not be initialized > >> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0 > >> (socket 1) > >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such > >> file or > >> > > directory > >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp > >> > > mlx5_common: port 0 request to primary process failed > >> > > mlx5_net: probe of PCI device :af:00.0 aborted after encountering > >> an > >> > > error: No such file or directory > >> > > mlx5_common: Failed to load driver mlx5_eth > >> > > EAL: Requested device :af:00.0 cannot be used > >> > > EAL: Error - exiting with code: 1 > >> > > Cause: No Ethernet ports - bye > >> > > >> > From this log, we miss the previous steps before running the > >> application. > >> > > >> > Please check these simple steps: > >> > - install rdma-core > >> > - build dpdk (meson build && ninja -C build) > >> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G) > >> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd > >> --in-memory -- -i) > >> > > >> > EAL: Detected CPU lcores: 10 > >> > EAL:
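To illustrate the bifurcated model described above: the port keeps its kernel netdev while testpmd runs, and rte_flow rules decide which packets stay in DPDK. A rough sketch only; the netdev name enp175s0f0 and the UDP port are assumptions for this example:

  # Kernel side stays loaded and usable while DPDK owns the port:
  lsmod | grep -E 'mlx5_core|mlx5_ib|ib_uverbs'
  ip link show enp175s0f0
  # Inside testpmd, isolated mode keeps unmatched traffic on the kernel driver
  # (it must be set before the port is started), and explicit flow rules steer
  # selected traffic to DPDK queues:
  testpmd> flow isolate 0 true
  testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 4789 / end actions queue index 0 / end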
Re: Are Intel CPUs better than AMD CPUs for DPDK applications?
On Mon, 21 Feb 2022 21:28:08 +0100 Staffan Wiklund wrote:
> Stephen, thanks for your answer.
> I realize the statement is very vague.
> I was thinking of whether there is something common in the design of Intel and
> AMD CPUs respectively that has an impact on their use by DPDK applications.
> Do you know if there is such a common design difference between Intel and
> AMD CPUs or is it just a matter of using an Intel or AMD CPU with the
> requested performance?
>
> Regards
> Staffan

I am not a CPU expert. But compare memory bandwidth, clock rate, PCI Express version and AVX support (needed for some features). There are a few places in DPDK that can use AVX512, but it is limited:
https://doc.dpdk.org/guides/howto/avx512.html
Also there is a tradeoff with more cores, NUMA, etc., as well as cost. Don't believe simple tribal knowledge; you need to look under the covers.
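As a concrete starting point for such a comparison, the features mentioned above can be checked directly on the target machine; a minimal sketch using plain Linux tools (replace af:00.0 with your own NIC address):

  # AVX-512 variants advertised by the CPU:
  grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u
  # Socket/core counts and NUMA layout, which drive queue and core placement:
  lscpu | grep -E 'Socket|Core|NUMA'
  # PCIe link speed and width negotiated by the NIC slot:
  sudo lspci -vv -s af:00.0 | grep LnkSta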
Re: Are Intel CPUs better than AMD CPUs for DPDK applications?
Stephen, thanks for your answer. I realize the statement is very vague. I was thinking of whether there is something common in the design of Intel and AMD CPUs respectively that has an impact on their use by DPDK applications. Do you know if there is such a common design difference between Intel and AMD CPUs, or is it just a matter of using an Intel or AMD CPU with the requested performance?

Regards
Staffan

On Mon, 21 Feb 2022 at 18:08, Stephen Hemminger <step...@networkplumber.org> wrote:
> On Mon, 21 Feb 2022 16:14:26 +0100
> Staffan Wiklund wrote:
> > Hello
> >
> > Do you know if there is a difference in support for DPDK that is provided by
> > Intel and AMD CPUs respectively?
> >
> > I talked to a person today claiming he has learned that Intel CPUs have
> > been preferred over AMD CPUs for some DPDK applications.
> > This person had no document showing reasons for this.
> >
> > Do you have any information on this?
> > If there is a difference, do you think it will be negligible from a
> > technical perspective?
> >
> > Regards
> > Staffan
>
> There is no one Intel CPU and one AMD CPU. There are lots of types so the
> statement is very vague and misleading.
Re: ConnectX5 Setup with DPDK
Hi Thomas, Actually I remembered in my previous setup I had run dpdk-devbind.py to bind the mlx5 NIC to igb_uio. I read somewhere that you don't need to do this and just wanted to confirm that this is correct. Best, Aaron On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee wrote: > Hi Thomas, > > I tried installing things from scratch two days ago and have gotten > things working! I think part of the problem was figuring out the correct > hugepage allocation for my system. If I recall correctly, I tried setting > up my system with default page size 1G but perhaps didn't have enough pages > allocated at the time. Currently have the following which gives me the > output you've shown previously. > > root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s > Node Pages Size Total > 0161Gb16Gb > 1161Gb16Gb > > root@yeti-04:~/dpdk-21.11# echo show port summary all | > build/app/dpdk-testpmd --in-memory -- -i > EAL: Detected CPU lcores: 80 > EAL: Detected NUMA nodes: 2 > EAL: Detected static linkage of DPDK > EAL: Selected IOVA mode 'PA' > EAL: No free 2048 kB hugepages reported on node 0 > EAL: No free 2048 kB hugepages reported on node 1 > EAL: No available 2048 kB hugepages reported > EAL: VFIO support initialized > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0 (socket 1) > TELEMETRY: No legacy callbacks, legacy socket not created > Interactive-mode selected > testpmd: create a new mbuf pool : n=779456, size=2176, socket=0 > testpmd: preferred mempool ops selected: ring_mp_mc > testpmd: create a new mbuf pool : n=779456, size=2176, socket=1 > testpmd: preferred mempool ops selected: ring_mp_mc > > Warning! port-topology=paired and odd forward ports number, the last port > will pair with itself. > > Configuring Port 0 (socket 1) > Port 0: EC:0D:9A:68:21:A8 > Checking link statuses... > Done > testpmd> show port summary all > Number of available ports: 1 > Port MAC Address Name Driver Status Link > 0EC:0D:9A:68:21:A8 :af:00.0 mlx5_pci up 100 Gbps > > Best, > Aaron > > On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon > wrote: > >> 21/02/2022 19:52, Thomas Monjalon: >> > 18/02/2022 22:12, Aaron Lee: >> > > Hello, >> > > >> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm >> > > wondering if the card I have simply isn't compatible. I first noticed >> that >> > > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the >> error >> > > logs when running dpdk-pdump. >> > >> > When testing a NIC, it is more convenient to use dpdk-testpmd. 
>> > >> > > EAL: Detected CPU lcores: 80 >> > > EAL: Detected NUMA nodes: 2 >> > > EAL: Detected static linkage of DPDK >> > > EAL: Multi-process socket >> /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92 >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such >> file or >> > > directory >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp >> > > vdev_scan(): Failed to request vdev from primary >> > > EAL: Selected IOVA mode 'PA' >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such >> file or >> > > directory >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync >> > > EAL: Cannot request default VFIO container fd >> > > EAL: VFIO support could not be initialized >> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0 >> (socket 1) >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such >> file or >> > > directory >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp >> > > mlx5_common: port 0 request to primary process failed >> > > mlx5_net: probe of PCI device :af:00.0 aborted after encountering >> an >> > > error: No such file or directory >> > > mlx5_common: Failed to load driver mlx5_eth >> > > EAL: Requested device :af:00.0 cannot be used >> > > EAL: Error - exiting with code: 1 >> > > Cause: No Ethernet ports - bye >> > >> > From this log, we miss the previous steps before running the >> application. >> > >> > Please check these simple steps: >> > - install rdma-core >> > - build dpdk (meson build && ninja -C build) >> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G) >> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd >> --in-memory -- -i) >> > >> > EAL: Detected CPU lcores: 10 >> > EAL: Detected NUMA nodes: 1 >> > EAL: Detected static linkage of DPDK >> > EAL: Selected IOVA mode 'PA' >> > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: :08:00.0 >> (socket 0) >> > Interactive-mode selected >> > testpmd: create a new mbuf pool : n=219456, size=2176, >> socket=0 >> > testpmd: preferred mempool ops selected: ring_mp_mc >> > Configuring Port 0 (socket 0) >> > Port 0: 0C:42:A1:D6:E0:00 >> > Checking link statuses... >> > Done >> > testpmd> show port summary all >> > Number of available ports: 1 >> > Port MAC Address Name Driver Status Link >> > 0
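As a quick check of the point above (no igb_uio/vfio binding is needed for mlx5), one can confirm that the port stays under its kernel driver while DPDK uses it; a minimal sketch, assuming the PCI address 0000:af:00.0 from this thread:

  # The mlx5 kernel modules (plus ib_uverbs from rdma-core) must stay loaded:
  lsmod | grep -E 'mlx5_core|mlx5_ib|ib_uverbs'
  # The device should still be listed under the kernel driver, not igb_uio/vfio-pci:
  usertools/dpdk-devbind.py --status | grep af:00.0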
Re: ConnectX5 Setup with DPDK
Hi Thomas,

I tried installing things from scratch two days ago and have gotten things working! I think part of the problem was figuring out the correct hugepage allocation for my system. If I recall correctly, I tried setting up my system with default page size 1G but perhaps didn't have enough pages allocated at the time. Currently I have the following, which gives me the output you've shown previously.

root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
Node Pages Size Total
0    16    1Gb  16Gb
1    16    1Gb  16Gb

root@yeti-04:~/dpdk-21.11# echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i
EAL: Detected CPU lcores: 80
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Selected IOVA mode 'PA'
EAL: No free 2048 kB hugepages reported on node 0
EAL: No free 2048 kB hugepages reported on node 1
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool : n=779456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool : n=779456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 1)
Port 0: EC:0D:9A:68:21:A8
Checking link statuses...
Done
testpmd> show port summary all
Number of available ports: 1
Port MAC Address       Name         Driver   Status Link
0    EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci up     100 Gbps

Best,
Aaron

On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon wrote:
> 21/02/2022 19:52, Thomas Monjalon:
> > 18/02/2022 22:12, Aaron Lee:
> > > Hello,
> > >
> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> > > wondering if the card I have simply isn't compatible. I first noticed that
> > > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error
> > > logs when running dpdk-pdump.
> >
> > When testing a NIC, it is more convenient to use dpdk-testpmd.
> >
> > > EAL: Detected CPU lcores: 80
> > > EAL: Detected NUMA nodes: 2
> > > EAL: Detected static linkage of DPDK
> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> > > vdev_scan(): Failed to request vdev from primary
> > > EAL: Selected IOVA mode 'PA'
> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> > > EAL: Cannot request default VFIO container fd
> > > EAL: VFIO support could not be initialized
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> > > mlx5_common: port 0 request to primary process failed
> > > mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering an error: No such file or directory
> > > mlx5_common: Failed to load driver mlx5_eth
> > > EAL: Requested device 0000:af:00.0 cannot be used
> > > EAL: Error - exiting with code: 1
> > > Cause: No Ethernet ports - bye
> >
> > From this log, we miss the previous steps before running the application.
> > > > Please check these simple steps: > > - install rdma-core > > - build dpdk (meson build && ninja -C build) > > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G) > > - run testpmd (echo show port summary all | build/app/dpdk-testpmd > --in-memory -- -i) > > > > EAL: Detected CPU lcores: 10 > > EAL: Detected NUMA nodes: 1 > > EAL: Detected static linkage of DPDK > > EAL: Selected IOVA mode 'PA' > > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: :08:00.0 (socket > 0) > > Interactive-mode selected > > testpmd: create a new mbuf pool : n=219456, size=2176, > socket=0 > > testpmd: preferred mempool ops selected: ring_mp_mc > > Configuring Port 0 (socket 0) > > Port 0: 0C:42:A1:D6:E0:00 > > Checking link statuses... > > Done > > testpmd> show port summary all > > Number of available ports: 1 > > Port MAC Address Name Driver Status Link > > 00C:42:A1:D6:E0:00 08:00.0 mlx5_pci up 25 Gbps > > > > > I noticed that the pci id of the card I was given is 15b3:1017 as > below. > > > This sort of indicates to me that the PMD driver isn't supported on > this > > > card. > > > > This card is well supported and even officially tested with DPDK 21.11, > > as you can see in the release notes: > > >
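For reference, a sketch of one way to reach the hugepage layout shown above (16 x 1 GB pages on each NUMA node) using the helper script shipped with DPDK; flag spellings may differ slightly between releases, so check --help first:

  usertools/dpdk-hugepages.py -p 1G -r 16G -n 0
  usertools/dpdk-hugepages.py -p 1G -r 16G -n 1
  usertools/dpdk-hugepages.py -s   # should report 16 x 1Gb pages per node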
Re: ConnectX5 Setup with DPDK
21/02/2022 19:52, Thomas Monjalon: > 18/02/2022 22:12, Aaron Lee: > > Hello, > > > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm > > wondering if the card I have simply isn't compatible. I first noticed that > > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error > > logs when running dpdk-pdump. > > When testing a NIC, it is more convenient to use dpdk-testpmd. > > > EAL: Detected CPU lcores: 80 > > EAL: Detected NUMA nodes: 2 > > EAL: Detected static linkage of DPDK > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92 > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or > > directory > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp > > vdev_scan(): Failed to request vdev from primary > > EAL: Selected IOVA mode 'PA' > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or > > directory > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync > > EAL: Cannot request default VFIO container fd > > EAL: VFIO support could not be initialized > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0 (socket 1) > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or > > directory > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp > > mlx5_common: port 0 request to primary process failed > > mlx5_net: probe of PCI device :af:00.0 aborted after encountering an > > error: No such file or directory > > mlx5_common: Failed to load driver mlx5_eth > > EAL: Requested device :af:00.0 cannot be used > > EAL: Error - exiting with code: 1 > > Cause: No Ethernet ports - bye > > From this log, we miss the previous steps before running the application. > > Please check these simple steps: > - install rdma-core > - build dpdk (meson build && ninja -C build) > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G) > - run testpmd (echo show port summary all | build/app/dpdk-testpmd > --in-memory -- -i) > > EAL: Detected CPU lcores: 10 > EAL: Detected NUMA nodes: 1 > EAL: Detected static linkage of DPDK > EAL: Selected IOVA mode 'PA' > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: :08:00.0 (socket 0) > Interactive-mode selected > testpmd: create a new mbuf pool : n=219456, size=2176, socket=0 > testpmd: preferred mempool ops selected: ring_mp_mc > Configuring Port 0 (socket 0) > Port 0: 0C:42:A1:D6:E0:00 > Checking link statuses... > Done > testpmd> show port summary all > Number of available ports: 1 > Port MAC Address Name Driver Status Link > 00C:42:A1:D6:E0:00 08:00.0 mlx5_pci up 25 Gbps > > > I noticed that the pci id of the card I was given is 15b3:1017 as below. > > This sort of indicates to me that the PMD driver isn't supported on this > > card. > > This card is well supported and even officially tested with DPDK 21.11, > as you can see in the release notes: > https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms > > > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family > > [ConnectX-5] [15b3:1017] > > > > I'd appreciate it if someone has gotten this card to work with DPDK to > > point me in the right direction or if my suspicions were correct that this > > card doesn't work with the PMD. 
If you want to check which hardware is supported by a PMD, you can use this command:

usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
PMD NAME: mlx5_eth
PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
PMD HW SUPPORT:
 Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All Subdevices)
 Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual Function] (1014) (All Subdevices)
 Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015) (All Subdevices)
 Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual Function] (1016) (All Subdevices)
 Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All Subdevices)
 Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual Function] (1018) (All Subdevices)
 Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019) (All Subdevices)
 Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual Function] (101a) (All Subdevices)
 Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5 network controller (a2d2) (All Subdevices)
 Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family VF (a2d3) (All Subdevices)
 Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b) (All Subdevices)
 Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual Function] (101c) (All Subdevices)
 Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d) (All Subdevices)
 Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual Function (101e) (All Subdevices)
 Mellanox Technologies (15b3) : MT42822
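To cross-check a particular card against this list, read its PCI vendor:device id with lspci and match it with the ids printed by dpdk-pmdinfo.py; for example, 15b3:1017 maps to the "MT27800 Family [ConnectX-5] (1017)" entry above:

  lspci -nn | grep -i mellanox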
Re: ConnectX5 Setup with DPDK
18/02/2022 22:12, Aaron Lee:
> Hello,
>
> I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> wondering if the card I have simply isn't compatible. I first noticed that
> the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error
> logs when running dpdk-pdump.

When testing a NIC, it is more convenient to use dpdk-testpmd.

> EAL: Detected CPU lcores: 80
> EAL: Detected NUMA nodes: 2
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> vdev_scan(): Failed to request vdev from primary
> EAL: Selected IOVA mode 'PA'
> EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> EAL: Cannot request default VFIO container fd
> EAL: VFIO support could not be initialized
> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> mlx5_common: port 0 request to primary process failed
> mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering an error: No such file or directory
> mlx5_common: Failed to load driver mlx5_eth
> EAL: Requested device 0000:af:00.0 cannot be used
> EAL: Error - exiting with code: 1
> Cause: No Ethernet ports - bye

From this log, we miss the previous steps before running the application.

Please check these simple steps:
- install rdma-core
- build dpdk (meson build && ninja -C build)
- reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
- run testpmd (echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i)

EAL: Detected CPU lcores: 10
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Selected IOVA mode 'PA'
EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0 (socket 0)
Interactive-mode selected
testpmd: create a new mbuf pool : n=219456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 0C:42:A1:D6:E0:00
Checking link statuses...
Done
testpmd> show port summary all
Number of available ports: 1
Port MAC Address       Name         Driver   Status Link
0    0C:42:A1:D6:E0:00 0000:08:00.0 mlx5_pci up     25 Gbps

> I noticed that the pci id of the card I was given is 15b3:1017 as below.
> This sort of indicates to me that the PMD driver isn't supported on this
> card.

This card is well supported and even officially tested with DPDK 21.11, as you can see in the release notes:
https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms

> af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family
> [ConnectX-5] [15b3:1017]
>
> I'd appreciate it if someone has gotten this card to work with DPDK to
> point me in the right direction or if my suspicions were correct that this
> card doesn't work with the PMD.

Please tell me what drove you into the wrong direction, because I really would like to improve the documentation & tools.
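A sketch of those four steps as shell commands, assuming a Debian/Ubuntu-style system for the rdma-core package name (adjust for your distribution):

  sudo apt-get install -y rdma-core        # mlx5 userspace bits (libibverbs)
  meson build && ninja -C build            # build DPDK from the source tree
  usertools/dpdk-hugepages.py -r 1G        # reserve hugepages
  echo "show port summary all" | build/app/dpdk-testpmd --in-memory -- -i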
ConnectX5 Setup with DPDK
Hello,

I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm wondering if the card I have simply isn't compatible. I first noticed that the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error logs when running dpdk-pdump.

EAL: Detected CPU lcores: 80
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
vdev_scan(): Failed to request vdev from primary
EAL: Selected IOVA mode 'PA'
EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
EAL: Cannot request default VFIO container fd
EAL: VFIO support could not be initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
mlx5_common: port 0 request to primary process failed
mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering an error: No such file or directory
mlx5_common: Failed to load driver mlx5_eth
EAL: Requested device 0000:af:00.0 cannot be used
EAL: Error - exiting with code: 1
  Cause: No Ethernet ports - bye

I noticed that the pci id of the card I was given is 15b3:1017 as below. This sort of indicates to me that the PMD driver isn't supported on this card.

af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017]

I'd appreciate it if someone has gotten this card to work with DPDK to point me in the right direction, or if my suspicions were correct that this card doesn't work with the PMD.

Best,
Aaron
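For context on the log above: dpdk-pdump is a secondary process, and the repeated "mp_socket ... No such file or directory" messages mean no primary DPDK process was running, which is why the mlx5 probe aborts. A minimal sketch of the intended usage (core mask and capture path are only examples):

  # Terminal 1: start a primary process that owns the port, e.g. testpmd:
  build/app/dpdk-testpmd -l 0-3 -n 4 -- -i
  # Terminal 2: attach dpdk-pdump as a secondary process and capture Rx packets:
  build/app/dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/rx.pcap'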
Re: Mellanox performance degradation with more than 12 lcores
Thanks for the clarification! I was able to get 148 Mpps with 12 lcores after some BIOS tuning. Looks like, due to these HW limitations, I have to use a ring buffer as you suggested to support more than 32 lcores!

On Fri, 18 Feb 2022 at 16:40, Dmitry Kozlyuk wrote:
> Hi,
>
> > With more than 12 lcores overall receive performance reduces.
> > With 16-32 lcores I get 100-110 Mpps,
>
> It is more about the number of queues than the number of cores:
> 12 queues are the threshold when Multi-Packet Receive Queue (MPRQ)
> is automatically enabled in mlx5 PMD.
> Try increasing --rxd and check out the mprq_en device argument.
> Please see the mlx5 PMD user guide for details about MPRQ.
> You should be able to get the full 148 Mpps with your HW.
>
> > and I get a significant performance fall with 33 lcores - 84 Mpps.
> > With 63 cores I get even 35 Mpps overall receive performance.
> >
> > Are there any limitations on the total number of receive queues (total
> > lcores) that can handle a single port on a given NIC?
>
> This is a hardware limitation.
> The limit on the number of queues you can create is very high (16M),
> but performance can perfectly scale only up to 32 queues
> at high packet rates (as opposed to bit rates).
> Using more queues can even degrade it, just as you observe.
> One way to overcome this (not specific to mlx5)
> is to use a ring buffer for incoming packets,
> from which any number of processing cores can take packets.
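A sketch of the tuning suggested in the quoted reply, passing the mprq_en device argument to the mlx5 PMD and enlarging the Rx rings; the queue/core counts and ring size are only example values:

  numactl -N 1 -m 1 /opt/dpdk-21.11/build/app/dpdk-testpmd -l 64-127 -n 4 \
    -a 0000:c1:00.0,mprq_en=1 -- --stats-period 1 --nb-cores=32 --rxq=32 --txq=32 --rxd=4096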
Re: Mellanox performance degradation with more than 12 lcores
I get 125 Mpps from a single port using 12 lcores:

numactl -N 1 -m 1 /opt/dpdk-21.11/build/app/dpdk-testpmd -l 64-127 -n 4 -a 0000:c1:00.0 -- --stats-period 1 --nb-cores=12 --rxq=12 --txq=12 --rxd=512

With 63 cores I get 35 Mpps:

numactl -N 1 -m 1 /opt/dpdk-21.11/build/app/dpdk-testpmd -l 64-127 -n 4 -a 0000:c1:00.0 -- --stats-period 1 --nb-cores=63 --rxq=63 --txq=63 --rxd=512

I'm using this guide as a reference:
https://fast.dpdk.org/doc/perf/DPDK_20_11_Mellanox_NIC_performance_report.pdf
This reference suggests examples of how to get the best performance, but all of them use at most 12 lcores. 125 Mpps with 12 lcores is nearly the maximum I can get from a single 100 GbE port (148 Mpps theoretical maximum for 64-byte packets). I just want to understand why I get good performance with 12 lcores and bad performance with 63 cores.

On Fri, 18 Feb 2022 at 16:30, Asaf Penso wrote:
> Hello Dmitry,
>
> Could you please paste the testpmd commands per each experiment?
>
> Also, have you looked into the dpdk.org performance report to see how to tune
> for best results?
>
> Regards,
> Asaf Penso
>
> From: Дмитрий Степанов
> Sent: Friday, February 18, 2022 9:32:59 AM
> To: users@dpdk.org
> Subject: Mellanox performance degradation with more than 12 lcores
>
> Hi folks!
>
> I'm using a Mellanox ConnectX-6 Dx EN adapter card (100GbE; Dual-port QSFP56;
> PCIe 4.0/3.0 x16) with DPDK 21.11 on a server with an AMD EPYC 7702 64-Core
> Processor (NUMA system with 2 sockets). Hyperthreading is turned off.
> I'm testing the maximum receive throughput I can get from a single port using
> the testpmd utility (shipped with DPDK). My generator produces random UDP
> packets with zero payload length.
>
> I get the maximum performance using 8-12 lcores (overall 120-125 Mpps on the
> receive path of a single port):
>
> numactl -N 1 -m 1 /opt/dpdk-21.11/build/app/dpdk-testpmd -l 64-127 -n 4 -a 0000:c1:00.0 -- --stats-period 1 --nb-cores=12 --rxq=12 --txq=12 --rxd=512
>
> With more than 12 lcores overall receive performance reduces. With 16-32
> lcores I get 100-110 Mpps, and I get a significant performance fall with 33
> lcores - 84 Mpps. With 63 cores I get even 35 Mpps overall receive
> performance.
>
> Are there any limitations on the total number of receive queues (total
> lcores) that can handle a single port on a given NIC?
>
> Thanks,
> Dmitriy Stepanov
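For reference on the 148 Mpps figure: a 64-byte frame occupies 64 + 20 bytes on the wire (preamble, start-of-frame delimiter and inter-frame gap), so the theoretical packet rate at 100 Gbit/s is 100e9 / (84 * 8) ≈ 148.8 Mpps. A quick way to reproduce the number:

  echo $((100000000000 / (84 * 8)))   # 148809523 packets per second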
Re: Are Intel CPUs better than AMD CPUs for DPDK applications?
On Mon, 21 Feb 2022 16:14:26 +0100 Staffan Wiklund wrote:
> Hello
>
> Do you know if there is a difference in support for DPDK that is provided by
> Intel and AMD CPUs respectively?
>
> I talked to a person today claiming he has learned that Intel CPUs have
> been preferred over AMD CPUs for some DPDK applications.
> This person had no document showing reasons for this.
>
> Do you have any information on this?
> If there is a difference, do you think it will be negligible from a
> technical perspective?
>
> Regards
> Staffan

There is no one Intel CPU and one AMD CPU. There are lots of types, so the statement is very vague and misleading.
Are Intel CPUs better than AMD CPUs for DPDK applications?
Hello

Do you know if there is a difference in support for DPDK that is provided by Intel and AMD CPUs respectively?

I talked to a person today claiming he has learned that Intel CPUs have been preferred over AMD CPUs for some DPDK applications. This person had no document showing reasons for this.

Do you have any information on this? If there is a difference, do you think it will be negligible from a technical perspective?

Regards
Staffan