Re: always align data in m_pullup on all archs

2022-02-04 Thread David Gwynne
On Fri, Feb 04, 2022 at 11:39:56AM +0100, Alexander Bluhm wrote:
> On Fri, Feb 04, 2022 at 07:32:52PM +1000, David Gwynne wrote:
> > as discussed in "m_pullup alignment crash armv7 sparc64", at worst it
> > doesn't hurt to have m_pullup maintain the data alignment of payloads,
> > and at best it will encourage aligned loads even if the arch allows
> > unaligned accesses. aligned loads are faster than unaligned.
> >
> > ok?
> 
> adj is unsigned int, so assigning mtod(m0, unsigned long) to it looks
> strange.  Of course the higher bits are cut off anyway, but an
> explicit & is clearer than an assignment with different types.
> 
> Please use
> 
>   adj = mtod(m, unsigned long) & (sizeof(long) - 1);
>   adj = mtod(m0, unsigned long) & (sizeof(long) - 1);
> 
> otherwise OK bluhm@

it's been pointed out to me that ALIGNBYTES is actually very
conservative about alignment rather than optimistic. ie, it's at least
sizeof(long) - 1, but can be bigger. i think i've been confusing
it with ALIGNED_POINTER.

> 
> > Index: uipc_mbuf.c
> > ===================================================================
> > RCS file: /cvs/src/sys/kern/uipc_mbuf.c,v
> > retrieving revision 1.280
> > diff -u -p -r1.280 uipc_mbuf.c
> > --- uipc_mbuf.c 18 Jan 2022 12:38:21 -  1.280
> > +++ uipc_mbuf.c 4 Feb 2022 09:30:02 -
> > @@ -945,9 +945,11 @@ m_pullup(struct mbuf *m0, int len)
> > goto freem0;
> > }
> >
> > -   adj = mtod(m, unsigned long) & ALIGNBYTES;
> > +   adj = mtod(m, unsigned long);
> > } else
> > -   adj = mtod(m0, unsigned long) & ALIGNBYTES;
> > +   adj = mtod(m0, unsigned long);
> > +
> > +   adj &= sizeof(long) - 1;
> >
> > tail = head + M_SIZE(m0);
> > head += adj;



Re: always align data in m_pullup on all archs

2022-02-04 Thread Alexander Bluhm
On Fri, Feb 04, 2022 at 07:32:52PM +1000, David Gwynne wrote:
> as discussed in "m_pullup alignment crash armv7 sparc64", at worst it
> doesn't hurt to have m_pullup maintain the data alignment of payloads,
> and at best it will encourage aligned loads even if the arch allows
> unaligned accesses. aligned loads are faster than unaligned.
> 
> ok?

adj is unsigned int, so assigning mtod(m0, unsigned long) to it looks
strange.  Of course the higher bits are cut off anyway, but an
explicit & is clearer than an assignment with different types.

Please use

adj = mtod(m, unsigned long) & (sizeof(long) - 1);
adj = mtod(m0, unsigned long) & (sizeof(long) - 1);

otherwise OK bluhm@

> Index: uipc_mbuf.c
> ===================================================================
> RCS file: /cvs/src/sys/kern/uipc_mbuf.c,v
> retrieving revision 1.280
> diff -u -p -r1.280 uipc_mbuf.c
> --- uipc_mbuf.c   18 Jan 2022 12:38:21 -  1.280
> +++ uipc_mbuf.c   4 Feb 2022 09:30:02 -
> @@ -945,9 +945,11 @@ m_pullup(struct mbuf *m0, int len)
>   goto freem0;
>   }
>  
> - adj = mtod(m, unsigned long) & ALIGNBYTES;
> + adj = mtod(m, unsigned long);
>   } else
> - adj = mtod(m0, unsigned long) & ALIGNBYTES;
> + adj = mtod(m0, unsigned long);
> +
> + adj &= sizeof(long) - 1;
>  
>   tail = head + M_SIZE(m0);
>   head += adj;