On Tue, Mar 03, 2026 at 12:56:35PM +0100, Paolo Abeni wrote:
> On 2/27/26 11:15 AM, Dipayaan Roy wrote:
> > diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > index 91c418097284..a53a8921050b 100644
> > --- a/drivers/net/ethernet/microsoft/mana/mana_en.c
> > +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
> > @@ -748,6 +748,26 @@ static void *mana_get_rxbuf_pre(struct mana_rxq *rxq, dma_addr_t *da)
> >     return va;
> >  }
> >  
> > +static inline bool
> > +mana_use_single_rxbuf_per_page(struct mana_port_context *apc, u32 mtu)
> > +{
> 
> I almost forgot: please avoid the 'inline' keyword in .c files. This
> function is used only once, so it should be inlined by the compiler
> anyway.
>
Ack, will remove it in v2.
> > +   struct gdma_context *gc = apc->ac->gdma_dev->gdma_context;
> > +
> > +   /* On some systems with 4K PAGE_SIZE, page_pool RX fragments can
> > +    * trigger a throughput regression. Hence, force one RX buffer per
> > +    * page to avoid the fragment allocation/refcounting overhead in
> > +    * the RX refill path on those processors only.
> > +    */
> > +   if (gc->force_full_page_rx_buffer)
> > +           return true;
> 
> Side note: since you could keep the above flag up2date according to the
> current mtu and xdp configuration and just test it in the data path.
> 
If that is not an issue, I would like to keep it this way for better
readability.
> /P
> 

Regards
