Hi,

The first version of the official patch:
https://patches.dpdk.org/project/dpdk/patch/20250423122807.121990-1-viachesl...@nvidia.com/

With best regards,
Slava

> -----Original Message-----
> From: Slava Ovsiienko <viachesl...@nvidia.com>
> Sent: Monday, March 31, 2025 4:54 PM
> To: Lukáš Šišmiš <sis...@cesnet.cz>; users@dpdk.org
> Subject: RE: Determining vendor and model from the port ID
> 
> Hi, Lukas
> 
> Thank you for the confirmation.
> 
> > Do you think this can be changed officially in future DPDK releases?
> Yes, I am now considering an official patch with this mitigation.
> In addition, I am considering converting the errors into warnings like
> "The requested queue capacity is not guaranteed".
> 
> With best regards,
> Slava
> 
> > -----Original Message-----
> > From: Lukáš Šišmiš <sis...@cesnet.cz>
> > Sent: Friday, March 28, 2025 2:52 PM
> > To: Slava Ovsiienko <viachesl...@nvidia.com>; users@dpdk.org
> > Subject: Re: Determining vendor and model from the port ID
> >
> > Hi Slava,
> >
> > thanks for your reply (and the detailed explanation!). The patch works and I can
> > see 32k-long RX/TX queues configured on the ConnectX-4 card (MCX416A-CCAT);
> > I tried it on DPDK v24.11.1.
> > Do you think this can be changed officially in future DPDK releases?
> >
> > Thank you.
> >
> > Best,
> >
> > Lukas
> >
> > On 3/25/25 22:57, Slava Ovsiienko wrote:
> > > Hi, Lukas
> > >
> > > Some older NICs (depending on the HW generation and the FW configuration) may
> > > require the packet data to be inlined (encapsulated) into the WQE (the hardware
> > > descriptor of the Tx queue) so that some steering engine features work correctly.
> > > The minimal number of bytes to be inlined is obtained by DPDK from the FW (it
> > > reports the needed L2/L3/L4/tunnel headers and the mlx5 PMD calculates the size
> > > in bytes); the minimum, if any inlining is required at all, is 18B.
> > >
> > > Then, the Tx queue has an absolute limit on the number of descriptors; for
> > > CX4/5/6/7 it is 32K (a typical value, also queried from the FW: dev_cap.max_qp_wr).
> > > The application also asks for a given Tx queue capacity, specifying a number of
> > > abstract descriptors, to make sure the Tx queue can store that number of packets.
> > > For the maximal allowed queue size the txq_calc_inline_max() routine returns 12B
> > > (MLX5_DSEG_MIN_INLINE_SIZE). This is the maximal number of inline bytes that keeps
> > > the WQE size small enough to guarantee the requested number of WQEs fits within a
> > > queue of limited size.
> > >
> > > So, it seems the ConnectX-4 on your side requires 18B of inline data, but for the
> > > maximal queue size the WQE size is 64B and we can inline only 12B. The good news
> > > is that the txq_calc_inline_max() estimation is too conservative, and in reality
> > > we can inline 18B. I think we can replace MLX5_DSEG_MIN_INLINE_SIZE with
> > > MLX5_ESEG_MIN_INLINE_SIZE in txq_calc_inline_max() and try. Could you try it,
> > > please?
> > >
> > > diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> > > index 3e93517323..1913094e5c 100644
> > > --- a/drivers/net/mlx5/mlx5_txq.c
> > > +++ b/drivers/net/mlx5/mlx5_txq.c
> > > @@ -739,7 +739,7 @@ txq_calc_inline_max(struct mlx5_txq_ctrl *txq_ctrl)
> > >                     MLX5_WQE_ESEG_SIZE -
> > >                     MLX5_WSEG_SIZE -
> > >                     MLX5_WSEG_SIZE +
> > > -                  MLX5_DSEG_MIN_INLINE_SIZE;
> > > +                  MLX5_ESEG_MIN_INLINE_SIZE;
> > >          return wqe_size;
> > >   }
> > >
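> > > By the way, a rough way to sanity-check the rebuilt PMD (illustrative only, the
> > > check_max_txq() name is mine and error handling is omitted) is to request the
> > > full ring size the PMD reports:
> > >
> > > #include <rte_ethdev.h>
> > > #include <rte_lcore.h>
> > >
> > > /* Rough check: ask for the maximal Tx ring the PMD reports (32768 on CX4/5/6/7). */
> > > static int
> > > check_max_txq(uint16_t port_id)
> > > {
> > >     struct rte_eth_dev_info dev_info;
> > >
> > >     if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
> > >         return -1;
> > >     return rte_eth_tx_queue_setup(port_id, 0, dev_info.tx_desc_lim.nb_max,
> > >                                   rte_socket_id(), NULL);
> > > }
> > >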
> > > With best regards,
> > > Slava
> > >
> > >
> > >
> > >> -----Original Message-----
> > >> From: Lukáš Šišmiš <sis...@cesnet.cz>
> > >> Sent: Tuesday, March 25, 2025 3:33 PM
> > >> To: users@dpdk.org; Dariusz Sosnowski <dsosnow...@nvidia.com>; Slava
> > >> Ovsiienko <viachesl...@nvidia.com>
> > >> Subject: Determining vendor and model from the port ID
> > >>
> > >> Hello all,
> > >>
> > >> I am trying to determine the vendor and model of the port (given its port ID)
> > >> that I am interacting with, but all references lead me to an obsolete API.
> > >>
> > >> The goal is to execute specific code only when I am dealing with
> > >> Mellanox ConnectX-4-family cards. Longer explanation below.
> > >>
> > >> I would like to access "struct rte_pci_id", but it always seems to be hidden
> > >> at the driver level.
> > >>
> > >> Is there any way to approach this?
> > >>
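> > >> For context, the kind of check I have in mind looks roughly like the sketch
> > >> below. It is only a sketch: port_is_connectx4() is a hypothetical helper of
> > >> mine, I am not sure rte_dev_bus_info() is the intended public way to get at
> > >> the PCI IDs (or whether its string format is stable across releases), and
> > >> 0x1013/0x1015 are the ConnectX-4/ConnectX-4 Lx PCI device IDs.
> > >>
> > >> #include <string.h>
> > >> #include <rte_dev.h>
> > >> #include <rte_ethdev.h>
> > >>
> > >> /* Hypothetical helper: returns 1 if port_id looks like a ConnectX-4/4 Lx.
> > >>  * Assumes a DPDK release where rte_dev_bus_info() is available; on older
> > >>  * releases the PCI id sits behind rte_bus_pci.h, now a driver-only header. */
> > >> static int
> > >> port_is_connectx4(uint16_t port_id)
> > >> {
> > >>     struct rte_eth_dev_info dev_info;
> > >>     const char *bus_info;
> > >>
> > >>     if (rte_eth_dev_info_get(port_id, &dev_info) != 0 || dev_info.device == NULL)
> > >>         return 0;
> > >>     /* Narrow it down to the mlx5 PMD first. */
> > >>     if (dev_info.driver_name == NULL ||
> > >>         strstr(dev_info.driver_name, "mlx5") == NULL)
> > >>         return 0;
> > >>     /* The PCI bus reports the ids as text, e.g. "vendor_id=15b3, device_id=1013"
> > >>      * (exact format not guaranteed - please correct me if this is the wrong API). */
> > >>     bus_info = rte_dev_bus_info(dev_info.device);
> > >>     if (bus_info == NULL)
> > >>         return 0;
> > >>     /* 0x1013 = ConnectX-4, 0x1015 = ConnectX-4 Lx */
> > >>     return strstr(bus_info, "device_id=1013") != NULL ||
> > >>            strstr(bus_info, "device_id=1015") != NULL;
> > >> }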
> > >>
> > >> Longer explanation of the problem:
> > >>
> > >> In https://github.com/OISF/suricata/pull/12654 I am using dev_info to get the
> > >> maximum number of TX descriptors allowed for the port, as advertised by the PMD.
> > >> But when I set that number of TX descriptors, the driver complains: "minimal data
> > >> inline requirements (18) are not satisfied (12) on port 0, try the smaller Tx
> > >> queue size (32768)". However, this problem occurs only on the ConnectX-4 family
> > >> and not on CX5/6/7 (that's why I cannot limit this to just the mlx5 PMD).
> > >>
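> > >> In the meantime, the best application-side workaround I can think of is to retry
> > >> the queue setup with a smaller ring until the PMD accepts it, roughly as sketched
> > >> below. This is only a sketch: setup_tx_queue_with_fallback() is a hypothetical
> > >> helper of mine, and I am not sure whether the mlx5 check already fails in
> > >> rte_eth_tx_queue_setup() or only later at rte_eth_dev_start(), so the loop might
> > >> need to cover the start as well.
> > >>
> > >> #include <errno.h>
> > >> #include <rte_ethdev.h>
> > >>
> > >> /* Hypothetical fallback: halve the Tx ring size until the PMD accepts it. */
> > >> static int
> > >> setup_tx_queue_with_fallback(uint16_t port_id, uint16_t queue_id,
> > >>                              uint16_t nb_desc, unsigned int socket_id)
> > >> {
> > >>     int ret = -EINVAL;
> > >>
> > >>     for (; nb_desc >= 1024; nb_desc /= 2) {
> > >>         /* NULL txconf means the PMD's default Tx configuration. */
> > >>         ret = rte_eth_tx_queue_setup(port_id, queue_id, nb_desc,
> > >>                                      socket_id, NULL);
> > >>         if (ret == 0)
> > >>             return 0;
> > >>     }
> > >>     return ret;
> > >> }
> > >>
> > >> (rte_eth_dev_adjust_nb_rx_tx_desc() does not help here, as far as I can tell,
> > >> because it only clamps to tx_desc_lim and the PMD still advertises 32768.)
> > >>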
> > >> Alternatively, can this be fixed/addressed directly in the MLX5 PMD? The MLX5
> > >> PMD would need to advertise 16384 TX descriptors as the maximum only for the
> > >> ConnectX-4 family.
> > >> (Putting Dariusz and Viacheslav in the loop, please reassign if needed)
> > >>
> > >> Thank you.
> > >>
> > >> Best,
> > >>
> > >> Lukas
