Ævar Arnfjörð Bjarmason <[email protected]> writes:
> Because the current code is, abbreviated:
>
> #if (defined(_BYTE_ORDER) || defined(__BYTE_ORDER) ||
>      defined(__BYTE_ORDER__))
> #if /* byte order is bendian */
> #define SHA1DC_BIGENDIAN
> #endif
> #else
> #if /* some processor tests */ || defined(__sparc)
> #define SHA1DC_BIGENDIAN
> #endif
> #endif
>
> And since Solaris defines _BYTE_ORDER we never get to checking __sparc,
> and in fact the "/* byte order is bendian */" test errors out.
>
> This is fixed by my patch, where we first check gcc settings, then
> glibc, then processors, and finally _BYTE_ORDER (but as discussed that
> last bit could be adjusted to sun && _BYTE_ORDER, if we can find out
> what "sun" is).
Well, if Solaris defines _BYTE_ORDER, doesn't that mean it also
defines the two constants _BIG_ENDIAN and _LITTLE_ENDIAN to compare
it with? If that is the case, I suspect that making the "compare
__BYTE_ORDER with __BIG_ENDIAN" test GCC-only is going in the wrong
direction, as Solaris wants to take the same approach as GCC and
merely spells the symbols slightly differently.
I wonder if an approach like the following might be cleaner to
extend as we find other oddball platforms.
#undef __SHA1DC_BYTE_ORDER
#if defined(_BYTE_ORDER)
#define __SHA1DC_BYTE_ORDER _BYTE_ORDER
#elif defined(__BYTE_ORDER)
#define __SHA1DC_BYTE_ORDER __BYTE_ORDER
#elif defined(__BYTE_ORDER__)
#define __SHA1DC_BYTE_ORDER __BYTE_ORDER__
#endif
#ifdef __SHA1DC_BYTE_ORDER
#undef __SHA1DC_BIG_ENDIAN
/* do the same for variations of BIG_ENDIAN constant */
#if defined(_BIG_ENDIAN)
...
#endif
#if __SHA1DC_BYTE_ORDER == __SHA1DC_BIG_ENDIAN
#define SHA1DC_BIGENDIAN
#endif
#else
/*
 * as the platform does not use the "compare BYTE_ORDER with
 * BIG_ENDIAN macro" strategy, defined-ness of BIG_ENDIAN
 * may be usable as a sign that it is a big-endian box.
 */
#endif
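
To flesh that out a bit, the elided part in the middle might be
filled in along these lines, assuming the platforms we care about
do give these BIG_ENDIAN spellings comparable values (which is
exactly the question raised above):

#if defined(_BIG_ENDIAN)
#define __SHA1DC_BIG_ENDIAN _BIG_ENDIAN
#elif defined(__BIG_ENDIAN)
#define __SHA1DC_BIG_ENDIAN __BIG_ENDIAN
#elif defined(__BIG_ENDIAN__)
#define __SHA1DC_BIG_ENDIAN __BIG_ENDIAN__
#endif

The cascade deliberately mirrors the BYTE_ORDER one above, so the
matching spelling from the same header family gets paired up, and
the final comparison may want a defined(__SHA1DC_BIG_ENDIAN) guard
in case a platform defines a BYTE_ORDER macro but none of these
BIG_ENDIAN spellings.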