On 1/9/26 12:28 PM, Pavel Begunkov wrote:
> From: Jakub Kicinski <[email protected]>
>
> The driver tries to provision more agg buffers than header buffers
> since multiple agg segments can reuse the same header. The calculation
> / heuristic tries to provide enough pages for 65k of data for each header
> (or 4 frags per header if the result is too big). This calculation is
> currently global to the adapter. If we increase the buffer sizes 8x
> we don't want 8x the amount of memory sitting on the rings.
> Luckily we don't have to fill the rings completely, adjust
> the fill level dynamically in case particular queue has buffers
> larger than the global size.
>
> Signed-off-by: Jakub Kicinski <[email protected]>
> [pavel: rebase on top of agg_size_fac, assert agg_size_fac]
> Signed-off-by: Pavel Begunkov <[email protected]>
> ---
>  drivers/net/ethernet/broadcom/bnxt/bnxt.c | 28 +++++++++++++++++++----
>  1 file changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> index 8f42885a7c86..137e348d2b9c 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> @@ -3816,16 +3816,34 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
>  	}
>  }
>
> +static int bnxt_rx_agg_ring_fill_level(struct bnxt *bp,
> +				       struct bnxt_rx_ring_info *rxr)
> +{
> +	/* User may have chosen larger than default rx_page_size,
> +	 * we keep the ring sizes uniform and also want uniform amount
> +	 * of bytes consumed per ring, so cap how much of the rings we fill.
> +	 */
> +	int fill_level = bp->rx_agg_ring_size;
> +
> +	if (rxr->rx_page_size > BNXT_RX_PAGE_SIZE)
> +		fill_level /= rxr->rx_page_size / BNXT_RX_PAGE_SIZE;
According to the check in bnxt_alloc_rx_page_pool(), it's theoretically
possible for `rxr->rx_page_size / BNXT_RX_PAGE_SIZE` to be zero. If so,
the division above would crash.

Side note: this looks like something AI review could/should have caught.
The fact that it didn't makes me think I'm missing something...

/P
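If the zero divisor is indeed reachable, one defensive option is to
compute the factor once and only divide when it is at least 2, which
removes the divide-by-zero path while keeping the behaviour of the
quoted hunk for all other values. A minimal sketch, reusing the names
from the hunk above (untested, just to illustrate the shape of the fix):

	static int bnxt_rx_agg_ring_fill_level(struct bnxt *bp,
					       struct bnxt_rx_ring_info *rxr)
	{
		int fill_level = bp->rx_agg_ring_size;
		u32 factor = rxr->rx_page_size / BNXT_RX_PAGE_SIZE;

		/* factor == 0 would crash the division below; a factor of
		 * 0 or 1 leaves the default fill level unchanged, matching
		 * the original rx_page_size == BNXT_RX_PAGE_SIZE behaviour.
		 */
		if (factor > 1)
			fill_level /= factor;

		return fill_level;
	}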
