> From: Bruce Richardson [mailto:[email protected]]
> Sent: Wednesday, 14 January 2026 18.01
>
> On Mon, Dec 15, 2025 at 12:06:38PM +0100, Morten Brørup wrote:
> > Executive Summary:
> >
> > My analysis shows that the mbuf library is not a barrier for
> > fast-freeing segmented packet mbufs, and thus fast-free of jumbo
> > frames is possible.
> >
> >
> > Detailed Analysis:
> >
> > The purpose of the mbuf fast-free Tx optimization is to reduce
> > rte_pktmbuf_free_seg() to something much simpler in the ethdev
> > drivers, by eliminating the code path related to indirect mbufs.
> > Optimally, we want to simplify the ethdev driver's function that
> > frees the transmitted mbufs, so it can free them directly to their
> > mempool without accessing the mbufs themselves.
> >
> > If the driver cannot access the mbuf itself, it cannot determine
> > which mempool it belongs to.
> > We don't want the driver to access every mbuf being freed; but if
> > all mbufs of a Tx queue belong to the same mempool, the driver can
> > determine which mempool by looking into just one of the mbufs.
> >
> <snip>
> >
> > If I'm not mistaken, the mbuf library is not a barrier for
> > fast-freeing segmented packet mbufs, and thus fast-free of jumbo
> > frames is possible.
> >
> > We need a driver developer to confirm that my suggested approach -
> > resetting the mbuf fields, incl. 'm->nb_segs' and 'm->next', when
> > preparing the Tx descriptor - is viable.
> >
>
> Just to make sure I understand things correctly here, the suggestion
> to prototype is:
>
> - When FAST_FREE flag is set:
>   - reset the m->nb_segs and m->next fields (if necessary) when
>     accessing the mbuf to write the descriptor
>   - skip calling pre-free seg on cleanup and instead
>   - just free all buffers directly to the mempool
>
> Is that correct?
Yes. If this can be done with multi-segment packets, we should be able to eliminate the single-segment requirement for FAST_FREE. (Unless something in the code that writes the descriptor requires single-segment mbufs in order to be super performant, as I suspected vectorization might.)
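
To make the prototype idea concrete, here is a rough driver-side sketch of what I have in mind. Everything named my_* is hypothetical and stands in for whatever the real PMD uses; only the mbuf fields and the rte_mempool_put_bulk() / rte_pktmbuf_free_seg() calls are actual DPDK API. It illustrates the idea, not any particular driver:

#include <stdbool.h>
#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

struct my_txq {
	struct rte_mbuf **sw_ring;  /* one entry per Tx descriptor */
	uint16_t nb_desc;
	bool fast_free;             /* RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE enabled */
};

/* At descriptor-write time: walk the segment chain, write one descriptor
 * per segment, and (when fast-free is enabled) reset the chaining fields,
 * so the mbufs are already "clean" when they come back from the NIC. */
static inline void
my_txq_enqueue_pkt(struct my_txq *txq, struct rte_mbuf *pkt, uint16_t slot)
{
	struct rte_mbuf *seg = pkt;

	while (seg != NULL) {
		struct rte_mbuf *next = seg->next;

		/* ... write the hardware descriptor from seg here ... */
		txq->sw_ring[slot] = seg;
		slot = (slot + 1) % txq->nb_desc;

		if (txq->fast_free) {
			seg->nb_segs = 1;   /* field reset moved here ... */
			seg->next = NULL;   /* ... from the free path */
		}
		seg = next;
	}
}

/* At cleanup time: with fast-free, every entry in the sw_ring is a direct,
 * refcnt==1 mbuf from one mempool, so the whole batch can be returned in
 * bulk without touching each mbuf individually. */
static void
my_txq_free_bufs(struct my_txq *txq, uint16_t first, uint16_t n)
{
	uint16_t i;

	if (txq->fast_free) {
		/* Pool read from one mbuf; all mbufs of the queue share it. */
		struct rte_mempool *mp = txq->sw_ring[first]->pool;
		void *bulk[n];

		for (i = 0; i < n; i++)
			bulk[i] = txq->sw_ring[(first + i) % txq->nb_desc];
		rte_mempool_put_bulk(mp, bulk, n);
		return;
	}

	/* Slow path: per-segment free, handling indirect mbufs and refcounts. */
	for (i = 0; i < n; i++)
		rte_pktmbuf_free_seg(txq->sw_ring[(first + i) % txq->nb_desc]);
}

The point of the sketch is that the fast-free cleanup path stays exactly as it is today for single-segment packets; only the field reset moves into the descriptor-write loop, which already touches every segment anyway.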

