On 23/11/14 17:55, Simon McVittie wrote:
> Unfortunately, on my x86-64 laptop, my patched liblzo2 with
> -DLZO_CFG_NO_UNALIGNED on all architectures seems to be half as fast as
> the unpatched one
[...]
> I'm trying out a slightly different approach: keeping the unaligned
> accesses via casts like *(uint16_t *) on architectures where lzodefs.h
> specifically allows them, but disabling the casts via
> struct { char[n] } conditional on alignof(that struct) == 1, which seem
> to be the problematic ones.
That fixed the performance regression on amd64 while still working
correctly on armv5tel, so I've uploaded it as a DELAYED/7 NMU. See
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=757037 for the nmudiff.

If anyone has better ideas, I'm happy to cancel the delayed upload and
let someone take over fixing the bug.

    S