On Mon, Feb 25, 2019 at 08:44:35AM +0100, Claudio Jeker wrote:
> On Mon, Feb 25, 2019 at 10:49:16AM +1000, David Gwynne wrote:
> > the mcl2k2 pool, aka the intel mbuf cluster pool, gets set up to allocate
> > at least 2048 + 2 bytes, which the pool rounds up to a 64 byte boundary,
> > ie, 2112 bytes. this diff makes ix move the reception of packets to the
> > end of the 2112 byte allocation so there's space left at the front of
> > the mbuf.
> > 
> > this in turn makes it more likely that an m_prepend at another point in
> > the system will work without allocating an extra mbuf. eg, if you're
> > bridging or routing between vlans and vlans on svlans somewhere else,
> > this diff makes that path a bit faster.
> > 
> > thoughts? ok?
> 
> I think using m_align() here may be beneficial, since it does exactly
> that. Apart from that I have to agree: shifting the packet back makes a
> lot of sense.

Like this?

Index: if_ix.c
===================================================================
RCS file: /cvs/src/sys/dev/pci/if_ix.c,v
retrieving revision 1.153
diff -u -p -r1.153 if_ix.c
--- if_ix.c     21 Feb 2019 03:16:47 -0000      1.153
+++ if_ix.c     25 Feb 2019 10:06:59 -0000
@@ -2389,8 +2395,8 @@ ixgbe_get_buf(struct rx_ring *rxr, int i
        if (!mp)
                return (ENOBUFS);
 
+       m_align(mp, sc->rx_mbuf_sz);
        mp->m_len = mp->m_pkthdr.len = sc->rx_mbuf_sz;
-       m_adj(mp, ETHER_ALIGN);
 
        error = bus_dmamap_load_mbuf(rxr->rxdma.dma_tag, rxbuf->map,
            mp, BUS_DMA_NOWAIT);
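
To make the arithmetic concrete, here's a rough userland sketch of the
placement m_align should end up doing for this rx path. The 2112 byte
cluster size and the 2048 + 2 byte receive length are taken from the
description above; the rounding of the data pointer down to a long
boundary is an assumption about how m_align behaves (its man page says
the object is placed at the end of the storage, longword aligned). The
numbers are illustrative, not pulled from the driver.

/*
 * Illustrative sketch only: shows how much headroom is left at the
 * front of the cluster when the received packet is pushed to the end.
 */
#include <stdio.h>

#define CLUSTER_SIZE	2112		/* assumed mcl2k2 pool item size */
#define RX_LEN		(2048 + 2)	/* assumed rx_mbuf_sz */

int
main(void)
{
	/* place RX_LEN bytes at the end of the cluster, long aligned */
	size_t off = (CLUSTER_SIZE - RX_LEN) & ~(sizeof(long) - 1);

	printf("data offset / front headroom: %zu bytes\n", off);
	printf("tail slack: %zu bytes\n", CLUSTER_SIZE - off - RX_LEN);

	return (0);
}

With a 64-bit long that works out to 56 bytes of headroom, which is
plenty for the vlan/svlan headers mentioned above, so an m_prepend can
stay in the same mbuf instead of allocating a new one.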
