On Wed, Apr 29, 2015 at 03:38:22PM -0600, David Ahern wrote:

> >And dealing with the fairly few resulting changes..
> 
> Confused. That does not deal with the alignment problem. Internal to
> cm_mask_copy unsigned longs are used (8-bytes), so why change the
> signature to u32?

You'd change the loop stride to be u32 as well.

This whole thing is just an attempted optimization, but doing the copy
and mask 8 bytes at a time on unaligned data is not very efficient,
even on x86.

So either drop the optimization and use u8 as the stride.
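Roughly like this (just a sketch, not the actual code in cm.c;
COMPARE_SIZE stands in for whatever the real compare-buffer size is,
and u8/u32 are the usual linux/types.h types):

	/* Unoptimized variant: byte-at-a-time copy+mask, no alignment
	 * requirement on dst/src/mask. */
	static void cm_mask_copy_u8(u8 *dst, const u8 *src, const u8 *mask)
	{
		int i;

		for (i = 0; i < COMPARE_SIZE; i++)
			dst[i] = src[i] & mask[i];
	}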

Or keep the optimization and guarantee alignment; the best we can do
there is u32.
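i.e. something along these lines (again only a sketch with made-up
names; COMPARE_SIZE is the buffer size in bytes, which would have to be
a multiple of 4, and the callers would have to guarantee 4-byte
alignment of all three buffers):

	/* Optimized variant: 4-byte stride. Only correct if dst/src/mask
	 * are 4-byte aligned and COMPARE_SIZE is a multiple of 4. */
	static void cm_mask_copy_u32(u32 *dst, const u32 *src, const u32 *mask)
	{
		int i;

		for (i = 0; i < COMPARE_SIZE / sizeof(u32); i++)
			dst[i] = src[i] & mask[i];
	}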

Since this is an optimization, get_unaligned should be avoided;
looping over u8 would be faster.

Jason