On 2023/08/16 23:55, Christian Weisgerber wrote:
> Stuart Henderson:
> 
> > +-#  if defined(__GNUC__) && !(defined(__ARM_ARCH) && __ARM_ARCH < 7 && defined(__ARM_FEATURE_UNALIGNED))
> > ++#  if defined(__GNUC__) && !(defined(__ARM_ARCH) && __ARM_ARCH < 7 && defined(__ARM_FEATURE_UNALIGNED)) && !defined(__sparc64__)
> 
> This should include <endian.h> and check __STRICT_ALIGNMENT rather than
> hardcoding architectures.

...which would look something like the diff below (keeping the added
#include inside the existing #if so it only gets pulled in where it's needed).

No idea if this is actually needed on other strict-alignment archs
though; it does sound like upstream weren't expecting this to be broken.
From
https://fastcompression.blogspot.com/2015/08/accessing-unaligned-memory.html
it seems that setting XXH_FORCE_MEMORY_ACCESS to 1 is mostly intended as
a speed optimisation for armv7, with the ifdef there to stop it making
things worse on armv6. But that post is from 2015 and armv7 compilers may
well be better by now, so perhaps we're better off just dropping the
#define XXH_FORCE_MEMORY_ACCESS 1 entirely.
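
For context, here's a rough sketch (mine, paraphrasing rather than quoting
xxhash.h; the function names are illustrative) of the two read strategies
that setting toggles between:

#include <stdint.h>
#include <string.h>

/* Method 0 (the default): go through memcpy. Always safe, and on
 * lax-alignment targets compilers fold it into a single load anyway. */
static uint32_t read32_method0(const void *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
}

/* Method 1 (XXH_FORCE_MEMORY_ACCESS == 1): read through a __packed__
 * union so GCC knows the pointer may be unaligned. Meant to be faster
 * on armv7, but it's this variant that ends up broken on sparc64. */
typedef union { uint32_t u32; } __attribute__((packed)) unalign32;

static uint32_t read32_method1(const void *p)
{
    return ((const unalign32 *)p)->u32;
}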

I don't have any affected hw to test with.
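
If someone does have such hardware, a quick standalone test along these
lines (untested by me, purely illustrative) should show whether a plain
unaligned load traps there:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    static unsigned char buf[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

    /* buf + 1 is misaligned for uint32_t; the cast/deref is undefined
     * behaviour in ISO C, and on strict-alignment archs like sparc64
     * the plain load typically dies with SIGBUS, which is the failure
     * mode being avoided here. */
    const uint32_t *p = (const uint32_t *)(buf + 1);
    printf("%u\n", *p);
    return 0;
}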

Index: Makefile
===================================================================
RCS file: /cvs/ports/sysutils/xxhash/Makefile,v
retrieving revision 1.13
diff -u -p -r1.13 Makefile
--- Makefile    23 Jul 2023 04:29:44 -0000      1.13
+++ Makefile    16 Aug 2023 23:56:04 -0000
@@ -3,6 +3,7 @@ COMMENT =       extremely fast non-cryptograph
 GH_ACCOUNT =   Cyan4973
 GH_PROJECT =   xxHash
 GH_TAGNAME =   v0.8.2
+REVISION =     0
 PKGNAME =      ${DISTNAME:L}
 
 SHARED_LIBS =  xxhash 0.3      # 0.8.1
Index: patches/patch-xxhash_h
===================================================================
RCS file: patches/patch-xxhash_h
diff -N patches/patch-xxhash_h
--- /dev/null   1 Jan 1970 00:00:00 -0000
+++ patches/patch-xxhash_h      16 Aug 2023 23:56:04 -0000
@@ -0,0 +1,17 @@
+Index: xxhash.h
+--- xxhash.h.orig
++++ xxhash.h
+@@ -1983,8 +1983,11 @@ XXH3_128bits_reset_withSecretandSeed(XXH_NOESCAPE XXH3
+    /* prefer __packed__ structures (method 1) for GCC
+    * < ARMv7 with unaligned access (e.g. Raspbian armhf) still uses byte shifting, so we use memcpy
+    * which for some reason does unaligned loads. */
+-#  if defined(__GNUC__) && !(defined(__ARM_ARCH) && __ARM_ARCH < 7 && defined(__ARM_FEATURE_UNALIGNED))
+-#    define XXH_FORCE_MEMORY_ACCESS 1
++#  if defined(__GNUC__)
++#    include <endian.h>   /* __STRICT_ALIGNMENT */
++#    if !defined(__STRICT_ALIGNMENT)
++#      define XXH_FORCE_MEMORY_ACCESS 1
++#    endif
+ #  endif
+ #endif
+ 
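
To see which way the new condition goes on a given arch without building
the whole port, something like this (again just a sketch, mirroring the
patched preprocessor logic) can be compiled and run:

#include <endian.h>   /* provides __STRICT_ALIGNMENT on OpenBSD */
#include <stdio.h>

int main(void)
{
#if defined(__GNUC__) && !defined(__STRICT_ALIGNMENT)
    puts("XXH_FORCE_MEMORY_ACCESS 1 (packed-struct access) would be used");
#else
    puts("default memcpy access would be used");
#endif
    return 0;
}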
