25/02/2022 19:29, Aaron Lee:
> Hi Thomas,
>
> I was doing some more testing and wanted to increase the RX queues for the
> CX5 but was wondering how I could do that. I see in the usage example in
> the docs, I could pass in --rxq=2 --txq=2 to set the queues to 2 each but I
> don't see that in my output when I run the command. Below is the output
> from running the command in
> https://doc.dpdk.org/guides/nics/mlx5.html#usage-example. Does this mean
> that the MCX515A-CCAT I have can't support more than 1 queue or am I
> supposed to configure another setting?
I see nothing about the number of queues in your output.
You should try the command "show config rxtx".

> EAL: Detected 80 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
> EAL: No legacy callbacks, legacy socket not created
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=203456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mb_pool_1>: n=203456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port
> will pair with itself.
>
> Configuring Port 0 (socket 1)
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> Port 0: EC:0D:9A:68:21:A8
> Checking link statuses...
> Done
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
>
> Best,
> Aaron
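For reference, a sketch of both ways to get 2 queues per direction in testpmd, using the PCI address from the output above (core list, channel count, and the binary path are placeholders for this setup; commands are from the testpmd user guide):

```shell
# Start testpmd with 2 RX and 2 TX queues per port, in interactive mode
dpdk-testpmd -l 8-9 -n 4 -a 0000:af:00.0 -- -i --rxq=2 --txq=2

# At the testpmd prompt, verify the per-port queue configuration:
testpmd> show config rxtx

# Alternatively, change the queue counts at runtime; ports must be
# stopped before reconfiguring and restarted afterwards:
testpmd> port stop all
testpmd> port config all rxq 2
testpmd> port config all txq 2
testpmd> port start all
```

`show config rxtx` prints the number of RX/TX queues and descriptors per port, which is the quickest way to confirm whether the `--rxq=2 --txq=2` flags actually took effect.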