-----Original Message-----
From: fengchengwen <[email protected]> 
Sent: 13 March 2026 05:49
To: Talluri, ChaitanyababuX <[email protected]>; [email protected]; 
Richardson, Bruce <[email protected]>; [email protected]; 
Singh, Aman Deep <[email protected]>
Cc: Wani, Shaiq <[email protected]>; [email protected]
Subject: Re: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard

On 3/12/2026 6:36 PM, Talluri Chaitanyababu wrote:
> Update forwarding TC mask based on configured traffic classes to 
> properly handle both 4 TC and 8 TC modes. The bitmask calculation (1u 
> << nb_tcs) - 1 correctly creates masks for all available traffic 
> classes (0xF for 4 TCs, 0xFF for 8 TCs).
> 
> When the mask is not updated after a TC configuration change, it stays 
> at the default 0xFF, which causes dcb_fwd_tc_update_dcb_info() to skip 
> the compress logic entirely (early return when mask == 
> DEFAULT_DCB_FWD_TC_MASK).
> This can lead to inconsistent queue allocations.

Sorry, I cannot understand the problem. Could you please provide the steps
to reproduce the issue and describe the observed behaviour?

Please find the reproduction steps and problem description below.

1. Bind two ports to vfio-pci:
./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0 0000:af:00.1
2. Start testpmd and reconfigure DCB PFC:
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a 0000:af:00.0 -a 
0000:af:00.1 --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 --nb-cores=16 
--total-num-mbufs=600000

testpmd> port stop all
testpmd> port config 0 dcb vt off 8 pfc on
testpmd> port config 1 dcb vt off 8 pfc on
testpmd> port start all
testpmd> port stop all
testpmd> port config 0 dcb vt off 4 pfc on

Test Log: 
root@srv13:~/test-1/dpdk# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 
-n 4 -a 0000:31:00.0 -a 0000:4b:00.0 --file-prefix=testpmd1 -- -i --rxq=256 
--txq=256 --nb-cores=16 --total-num-mbufs=600000
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/testpmd1/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default 
Package (single VLAN mode)
ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default 
Package (single VLAN mode)
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=600000, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
Port 0: B4:96:91:9F:5E:B0
Configuring Port 1 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
Port 1: 68:05:CA:A3:13:4C
Checking link statuses...
Done
testpmd> port stop all
Stopping ports...

Port 0: link state change event
Checking link statuses...

Port 1: link state change event
Done
testpmd> port config 0 dcb vt off 8 pfc on
In DCB mode, all forwarding ports must be configured in this mode.
testpmd> port config 1 dcb vt off 8 pfc on
testpmd> port start all
Configuring Port 0 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).

Port 0: link state change event
Port 0: B4:96:91:9F:5E:B0
Configuring Port 1 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
Port 1: 68:05:CA:A3:13:4C
Checking link statuses...
Done
testpmd> port stop all
Stopping ports...

Port 0: link state change event
Checking link statuses...

Port 1: link state change event
Done
testpmd> port config 0 dcb vt off 4 pfc on
Floating point exception

Expected behaviour:

After reconfiguring PFC from 8 to 4 TCs, the forwarding TC mask should reflect
the configured number of TCs (mask = 0xF).
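As a minimal sketch of the mask calculation described in the patch (the helper name here is hypothetical, not the actual testpmd symbol):

```c
#include <stdint.h>

/* Hypothetical helper mirroring the computation (1u << nb_tcs) - 1:
 * one bit per configured traffic class, so 4 TCs -> 0xF, 8 TCs -> 0xFF. */
static inline uint8_t dcb_fwd_tc_mask_for(uint8_t nb_tcs)
{
	return (uint8_t)((1u << nb_tcs) - 1);
}
```

With this, reconfiguring from 8 TCs to 4 TCs would move the mask from 0xFF to 0xF instead of leaving it at the 0xFF default.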
> 
> Additionally, the existing VMDQ pool guard in dcb_fwd_config_setup() 
> only checks RX queue counts, missing the case where the TX port has 
> zero queues for a given pool/TC combination. When nb_tx_queue is 0, 
> the expression "j % nb_tx_queue" triggers a SIGFPE (integer division by zero).
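For reference, the failing pattern reduces to the sketch below (function and parameter names are illustrative, not the actual testpmd code): guarding both queue counts before the modulo avoids evaluating `j % nb_tx_queue` with a zero divisor.

```c
#include <stdint.h>

/* Illustrative guard, not the actual dcb_fwd_config_setup() code:
 * skip a pool/TC combination when either side has zero queues, so the
 * modulo by nb_tx_queue is never reached with a zero divisor (SIGFPE). */
static int pick_txq(uint32_t j, uint32_t nb_rx_queue, uint32_t nb_tx_queue)
{
	if (nb_rx_queue == 0 || nb_tx_queue == 0)
		return -1; /* nothing to forward for this pool/TC */
	return (int)(j % nb_tx_queue);
}
```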

The dcb_fwd_check_cores_per_tc() function already checks this case, so please provide the reproduction steps.

> 
> Fix this by:
> 1. Updating dcb_fwd_tc_mask after port DCB reconfiguration using the
>    user requested num_tcs value, so fwd_config_setup() sees the correct
>    mask.
> 2. Extending the existing pool guard to also check TX queue counts.
> 3. Adding a defensive break after the division by dcb_fwd_tc_cores to
>    catch integer truncation to zero.
> 
> Fixes: 0ecbf93f5001 ("app/testpmd: add command to disable DCB")
> Cc: [email protected]
> 
> Signed-off-by: Talluri Chaitanyababu 
> <[email protected]>
> Signed-off-by: Shaiq Wani <[email protected]>
> ---
