On 2/19/24 13:37, Jiri Pirko wrote:
> Mon, Feb 19, 2024 at 11:05:57AM CET, [email protected] wrote:
>> From: Lukasz Czapnik <[email protected]>
>>
>> It was observed that Tx performance was inconsistent across all queues
>> and/or VSIs, and that it was directly connected to the existing 9-layer
>> topology of the Tx scheduler.
>>
>> Introduce a new private devlink param - tx_scheduling_layers. This
>> parameter gives the user the flexibility to choose the 5-layer transmit
>> scheduler topology, which helps to smooth out the transmit performance.
>>
>> Allowed parameter values are 5 and 9.
>>
>> Example usage:
>>
>> Show:
>> devlink dev param show pci/0000:4b:00.0 name tx_scheduling_layers
>> pci/0000:4b:00.0:
>>   name tx_scheduling_layers type driver-specific
>>     values:
>>       cmode permanent value 9
>>
>> Set:
>> devlink dev param set pci/0000:4b:00.0 name tx_scheduling_layers \
>>   value 5 cmode permanent
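
(Editor's note, for readers unfamiliar with the API: a "permanent" cmode
means the value is stored in the device's non-volatile memory and takes
effect only after a reboot. Below is a minimal sketch of how such a
driver-specific permanent param is typically wired up through the devlink
params API; the example_* names and the placeholder get/set bodies are
illustrative assumptions, not the actual ice implementation.)

    #include <net/devlink.h>

    enum {
            /* driver-specific param IDs must start above the generic ones */
            EXAMPLE_PARAM_ID_TX_SCHED_LAYERS = DEVLINK_PARAM_GENERIC_ID_MAX + 1,
    };

    static int example_tx_sched_layers_get(struct devlink *devlink, u32 id,
                                           struct devlink_param_gset_ctx *ctx)
    {
            /* placeholder: a real driver reads the provisioned value from NVM/FW */
            ctx->val.vu8 = 9;
            return 0;
    }

    static int example_tx_sched_layers_set(struct devlink *devlink, u32 id,
                                           struct devlink_param_gset_ctx *ctx)
    {
            /* placeholder: a real driver persists ctx->val.vu8 to NVM here */
            return 0;
    }

    static int example_tx_sched_layers_validate(struct devlink *devlink, u32 id,
                                                union devlink_param_value val,
                                                struct netlink_ext_ack *extack)
    {
            /* only the 5- and 9-layer topologies are allowed */
            if (val.vu8 != 5 && val.vu8 != 9) {
                    NL_SET_ERR_MSG_MOD(extack, "Only 5- or 9-layer topology is supported");
                    return -EINVAL;
            }
            return 0;
    }

    static const struct devlink_param example_devlink_params[] = {
            DEVLINK_PARAM_DRIVER(EXAMPLE_PARAM_ID_TX_SCHED_LAYERS,
                                 "tx_scheduling_layers",
                                 DEVLINK_PARAM_TYPE_U8,
                                 BIT(DEVLINK_PARAM_CMODE_PERMANENT),
                                 example_tx_sched_layers_get,
                                 example_tx_sched_layers_set,
                                 example_tx_sched_layers_validate),
    };

    /* registered from the driver's devlink init path, e.g.:
     *      devl_params_register(devlink, example_devlink_params,
     *                           ARRAY_SIZE(example_devlink_params));
     */
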
> This is kind of a proprietary param, similar to a number of those that
> were shot down for mlx5 in the past. Jakub?

Not sure if this is the same kind of param, but it is for sure a
proprietary one.
I'm not that familiar with the history around mlx5, but this case is
somewhat different, at least to me: we have a performance fix for the tree
inside the FW/HW, while you (IIRC) were about to introduce some nice and
general abstraction layer, which could have been used by other HW vendors
too, but instead it ended up mlx-only.
> Also, given this is apparently nvconfig configuration, it would probably
> be more suitable to use some provisioning tool.
TBH, we will want to add some other NVM-related params, but that does not
justify yet another tool to configure the PF. (And then there would be a
big debate about whether FW update should be moved there too, for
consistency.)
> This is related to the mlx5 misc driver.
> Until we figure out the plan, this has my nack:
>
> NAcked-by: Jiri Pirko <[email protected]>
IMO this is an easy case, but I would like to hear from the netdev
maintainers.