On Thu, Oct 16, 2025 at 09:44:43AM +0200, Paul Menzel wrote:
> Dear Michal,
> 
> 
> Thank you for the patch. I’d mention the 64 in the summary:
> 
Sure, I will add it.

> ice: lower default irq/queue counts to 64 on > 64 core systems
> 
> 
> On 16.10.25 at 08:22, Michal Swiatkowski wrote:
> > On some high-core systems loading ice driver with default values can
> > lead to queue/irq exhaustion. It will result in no additional resources
> > for SR-IOV.
> > 
> > In most cases there is no performance reason for more than 64 queues.
> > Limit the default value to 64. Still, using ethtool the number of
> > queues can be changed up to num_online_cpus().
> > 
> > This change affects only the default queue amount on systems with more
> > than 64 cores.
> 
> Please document a specific system and steps to reproduce the issue.
> 
> Please also document how to override the value.

Ok, will add both. (An example ethtool command for overriding the
default is at the end of this mail.)

> > Reviewed-by: Jacob Keller <[email protected]>
> > Signed-off-by: Michal Swiatkowski <[email protected]>
> > ---
> >  drivers/net/ethernet/intel/ice/ice.h     | 20 ++++++++++++++++++++
> >  drivers/net/ethernet/intel/ice/ice_irq.c |  6 ++++--
> >  drivers/net/ethernet/intel/ice/ice_lib.c |  8 ++++----
> >  3 files changed, 28 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
> > index 3d4d8b88631b..354ec2950ff3 100644
> > --- a/drivers/net/ethernet/intel/ice/ice.h
> > +++ b/drivers/net/ethernet/intel/ice/ice.h
> > @@ -1133,4 +1133,24 @@ static inline struct ice_hw *ice_get_primary_hw(struct ice_pf *pf)
> >  	else
> >  		return &pf->adapter->ctrl_pf->hw;
> >  }
> > +
> > +/**
> > + * ice_capped_num_cpus - normalize the number of CPUs to a reasonable limit
> > + *
> > + * This function returns the number of online CPUs, but caps it at suitable
> > + * default to prevent excessive resource allocation on systems with very high
> > + * CPU counts.
> > + *
> > + * Note: suitable default is currently at 64, which is reflected in default_cpus
> > + * constant. In most cases there is no much benefit for more than 64 and it is a
> 
> no*t* much
> 
Will fix.

> > + * power of 2 number.
> > + *
> > + * Return: number of online CPUs, capped at suitable default.
> > + */
> > +static inline u16 ice_capped_num_cpus(void)
> 
> Why not return `unsigned int` or `size_t`?
> 
Only because u16 is used for the queue counts, but I can go with
unsigned int; it makes more sense, as num_online_cpus() returns
unsigned int.
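
For v2 I am thinking about something along these lines (untested
sketch; the same 64 cap, only the return type changed to unsigned int):

static inline unsigned int ice_capped_num_cpus(void)
{
	/* In most cases there is not much benefit from more than 64
	 * queues, and 64 is a power of 2; cap the default here.
	 */
	return min(num_online_cpus(), 64U);
}
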
> > +{
> > +	const int default_cpus = 64;
> > +
> > +	return min(num_online_cpus(), default_cpus);
> > +}
> >  #endif /* _ICE_H_ */
> > diff --git a/drivers/net/ethernet/intel/ice/ice_irq.c b/drivers/net/ethernet/intel/ice/ice_irq.c
> > index 30801fd375f0..df4d847ca858 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_irq.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_irq.c
> > @@ -106,9 +106,11 @@ static struct ice_irq_entry *ice_get_irq_res(struct ice_pf *pf,
> >  #define ICE_RDMA_AEQ_MSIX 1
> >  static int ice_get_default_msix_amount(struct ice_pf *pf)
> >  {
> > -	return ICE_MIN_LAN_OICR_MSIX + num_online_cpus() +
> > +	u16 cpus = ice_capped_num_cpus();
> > +
> > +	return ICE_MIN_LAN_OICR_MSIX + cpus +
> >  	       (test_bit(ICE_FLAG_FD_ENA, pf->flags) ? ICE_FDIR_MSIX : 0) +
> > -	       (ice_is_rdma_ena(pf) ? num_online_cpus() + ICE_RDMA_AEQ_MSIX : 0);
> > +	       (ice_is_rdma_ena(pf) ? cpus + ICE_RDMA_AEQ_MSIX : 0);
> >  }
> >  
> >  /**
> > diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
> > index bac481e8140d..3c5f8a4b6c6d 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_lib.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_lib.c
> > @@ -159,12 +159,12 @@ static void ice_vsi_set_num_desc(struct ice_vsi *vsi)
> >  static u16 ice_get_rxq_count(struct ice_pf *pf)
> >  {
> > -	return min(ice_get_avail_rxq_count(pf), num_online_cpus());
> > +	return min(ice_get_avail_rxq_count(pf), ice_capped_num_cpus());
> >  }
> >  
> >  static u16 ice_get_txq_count(struct ice_pf *pf)
> >  {
> > -	return min(ice_get_avail_txq_count(pf), num_online_cpus());
> > +	return min(ice_get_avail_txq_count(pf), ice_capped_num_cpus());
> >  }
> >  
> >  /**
> > @@ -907,13 +907,13 @@ static void ice_vsi_set_rss_params(struct ice_vsi *vsi)
> >  		if (vsi->type == ICE_VSI_CHNL)
> >  			vsi->rss_size = min_t(u16, vsi->num_rxq, max_rss_size);
> >  		else
> > -			vsi->rss_size = min_t(u16, num_online_cpus(),
> > +			vsi->rss_size = min_t(u16, ice_capped_num_cpus(),
> >  					      max_rss_size);
> >  		vsi->rss_lut_type = ICE_LUT_PF;
> >  		break;
> >  	case ICE_VSI_SF:
> >  		vsi->rss_table_size = ICE_LUT_VSI_SIZE;
> > -		vsi->rss_size = min_t(u16, num_online_cpus(), max_rss_size);
> > +		vsi->rss_size = min_t(u16, ice_capped_num_cpus(), max_rss_size);
> >  		vsi->rss_lut_type = ICE_LUT_VSI;
> >  		break;
> >  	case ICE_VSI_VF:
> 
> With the changes addressed, feel free to add:
> 
> Reviewed-by: Paul Menzel <[email protected]>
> 
Thanks

> 
> Kind regards,
> 
> Paul
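
To put a rough number on the saving for the commit message (counting
only the CPU-dependent part of ice_get_default_msix_amount() above and
ignoring the fixed OICR/FDIR/AEQ vectors): with RDMA enabled the
default needs about 2 * cpus MSI-X vectors, so on a 256-CPU system the
cap lowers that part from 2 * 256 = 512 to 2 * 64 = 128, leaving the
rest for SR-IOV.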
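
And the override example I will document (the interface name here is
only a placeholder):

  $ ethtool -l eth0               # show current and maximum channel counts
  # ethtool -L eth0 combined 128  # raise the queue count above the default

As the commit message says, the maximum stays at num_online_cpus(), so
the cap only changes the default, not the upper limit.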
