Re: ConnectX5 Setup with DPDK

2022-02-25 Thread Thomas Monjalon
25/02/2022 19:29, Aaron Lee:
> Hi Thomas,
> 
> I was doing some more testing and wanted to increase the RX queues for
> the CX5, but I'm wondering how to do that. I see in the usage example in
> the docs that I can pass --rxq=2 --txq=2 to set the queues to 2 each,
> but I don't see that reflected in my output when I run the command.
> Below is the output from running the command in
> https://doc.dpdk.org/guides/nics/mlx5.html#usage-example. Does this mean
> that the MCX515A-CCAT I have can't support more than 1 queue, or am I
> supposed to configure another setting?

I see nothing about the number of queues in your output.
You should try the command "show config rxtx".
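
For example, the queue counts can be given on the testpmd command line
and then checked or changed interactively (a rough sketch reusing the
build path from your output; the values are only placeholders):

build/app/dpdk-testpmd --in-memory -- -i --rxq=2 --txq=2
testpmd> show config rxtx
testpmd> port stop all
testpmd> port config all rxq 4
testpmd> port config all txq 4
testpmd> port start all

"show config rxtx" prints the number of RX/TX queues per port, and
"port config all rxq/txq" changes it at runtime while the ports are
stopped.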


> EAL: Detected 80 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0 (socket 1)
> mlx5_pci: Size 0x is not power of 2, will be aligned to 0x1.
> EAL: No legacy callbacks, legacy socket not created
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=203456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool : n=203456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> 
> Warning! port-topology=paired and odd forward ports number, the last port
> will pair with itself.
> 
> Configuring Port 0 (socket 1)
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> Port 0: EC:0D:9A:68:21:A8
> Checking link statuses...
> Done
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> 
> Best,
> Aaron







Re: ConnectX5 Setup with DPDK

2022-02-25 Thread Aaron Lee
Hi Thomas,

I was doing some more testing and wanted to increase the RX queues for
the CX5, but I'm wondering how to do that. I see in the usage example in
the docs that I can pass --rxq=2 --txq=2 to set the queues to 2 each,
but I don't see that reflected in my output when I run the command.
Below is the output from running the command in
https://doc.dpdk.org/guides/nics/mlx5.html#usage-example. Does this mean
that the MCX515A-CCAT I have can't support more than 1 queue, or am I
supposed to configure another setting?

EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0 (socket 1)
mlx5_pci: Size 0x is not power of 2, will be aligned to 0x1.
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool : n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool : n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port
will pair with itself.

Configuring Port 0 (socket 1)
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
Port 0: EC:0D:9A:68:21:A8
Checking link statuses...
Done
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).

Best,
Aaron

On Mon, Feb 21, 2022 at 11:10 PM Thomas Monjalon 
wrote:

> 21/02/2022 21:10, Aaron Lee:
> > Hi Thomas,
> >
> > Actually I remembered in my previous setup I had run dpdk-devbind.py to
> > bind the mlx5 NIC to igb_uio. I read somewhere that you don't need to do
> > this and just wanted to confirm that this is correct.
>
> Indeed, mlx5 PMD runs on top of mlx5 kernel driver.
> We don't need UIO or VFIO drivers.
> The kernel modules must remain loaded and can be used at the same time.
> When DPDK is working, the traffic goes to the userspace PMD by default,
> but it is possible to configure some flows to go directly to the kernel
> driver.
> This behaviour is called "bifurcated model".
>
>
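A minimal sketch of that bifurcated behaviour using testpmd's
flow-isolation commands (the port number, UDP port and queue index below
are placeholders): once a port is in isolated mode, only traffic matching
DPDK flow rules reaches the PMD queues, and everything else stays with
the mlx5 kernel netdev.

testpmd> flow isolate 0 1
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 4789 / end actions queue index 0 / end

Here "flow isolate 0 1" enables isolated mode on port 0 (typically before
the port is started), and the flow rule steers UDP packets with
destination port 4789 to queue 0.
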
> > On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee  wrote:
> >
> > > Hi Thomas,
> > >
> > > I tried installing things from scratch two days ago and have gotten
> > > things working! I think part of the problem was figuring out the correct
> > > hugepage allocation for my system. If I recall correctly, I tried setting
> > > up my system with default page size 1G but perhaps didn't have enough
> > > pages allocated at the time. Currently I have the following, which gives
> > > me the output you've shown previously.
> > >
> > > root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
> > > Node Pages Size Total
> > > 0    16    1Gb  16Gb
> > > 1    16    1Gb  16Gb
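
A per-node 1G reservation like the one above can be made with the same
script, e.g. (the sizes mirror the table above and may need adjusting
for another system):

usertools/dpdk-hugepages.py -p 1G --node 0 -r 16G
usertools/dpdk-hugepages.py -p 1G --node 1 -r 16G
usertools/dpdk-hugepages.py -s

"-r" reserves the given amount of the selected page size on one NUMA
node; "-s" prints the resulting table.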
> > >
> > > root@yeti-04:~/dpdk-21.11# echo show port summary all |
> > > build/app/dpdk-testpmd --in-memory -- -i
> > > EAL: Detected CPU lcores: 80
> > > EAL: Detected NUMA nodes: 2
> > > EAL: Detected static linkage of DPDK
> > > EAL: Selected IOVA mode 'PA'
> > > EAL: No free 2048 kB hugepages reported on node 0
> > > EAL: No free 2048 kB hugepages reported on node 1
> > > EAL: No available 2048 kB hugepages reported
> > > EAL: VFIO support initialized
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :af:00.0 (socket 1)
> > > TELEMETRY: No legacy callbacks, legacy socket not created
> > > Interactive-mode selected
> > > testpmd: create a new mbuf pool : n=779456, size=2176, socket=0
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > testpmd: create a new mbuf pool : n=779456, size=2176, socket=1
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > >
> > > Warning! port-topology=paired and odd forward ports number, the last
> > > port will pair with itself.
> > >
> > > Configuring Port 0 (socket 1)
> > > Port 0: EC:0D:9A:68:21:A8
> > > Checking link statuses...
> > > Done
> > > testpmd> show port summary all
> > > Number of available ports: 1
> > > Port MAC Address        Name       Driver    Status   Link
> > > 0    EC:0D:9A:68:21:A8  :af:00.0   mlx5_pci  up       100 Gbps
> > >
> > > Best,
> > > Aaron
> > >
> > > On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon 
> > > wrote:
> > >
> > >> 21/02/2022 19:52, Thomas Monjalon:
> > >> > 18/02/2022 22:12, Aaron Lee:
> > >> > > Hello,
> > >> > >
> > >> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> > >> > > wondering if the card I have simply isn't compatible. I first noticed
> > >> > > that the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the
> > >> > > error
> > >> > > 

Feature request: MLX5 DPDK flow item type RAW support

2022-02-25 Thread Vladimir Yesin
The current DPDK 21.11 flow API does not support RTE_FLOW_ITEM_TYPE_RAW
for MLX5.

I need RTE_FLOW_ITEM_TYPE_RAW support in the DPDK flow API so that some
ingress packets, matched by content, can be enqueued to the GPU (with
GPUDirect RDMA support) and the rest to the CPU, via distinct HW queues
(RTE_FLOW_ACTION_TYPE_QUEUE).

For now, RTE_FLOW_ITEM_TYPE_UDP and RTE_FLOW_ITEM_TYPE_IPV4 filtering on
addresses and ports, with queue steering, is supported.
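
For context, that currently working case can be written as a testpmd
flow rule along these lines (the address, port and queue index are
placeholders):

testpmd> flow create 0 ingress pattern eth / ipv4 dst is 192.0.2.1 / udp dst is 4791 / end actions queue index 1 / end

i.e. IPv4/UDP matching on addresses and ports combined with
RTE_FLOW_ACTION_TYPE_QUEUE, whereas matching arbitrary payload bytes
would need RTE_FLOW_ITEM_TYPE_RAW, which mlx5 does not accept today.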

Are there any plans to support RTE_FLOW_ITEM_TYPE_RAW for MLX5?