Good find. I filed HADOOP-11505 to fix the incorrect usage of unoptimized code on x86 and the incorrect bswap on alternative architectures.
Let's address the fmemcmp stuff in a separate jira.

best,
Colin

On Thu, Jan 22, 2015 at 11:34 AM, Edward Nevill <edward.nev...@linaro.org> wrote:
> On 21 January 2015 at 11:42, Edward Nevill <edward.nev...@linaro.org> wrote:
>
>> Hi,
>>
>> Hadoop currently does not build on ARM AARCH64. I have raised a JIRA issue
>> with a patch.
>>
>> https://issues.apache.org/jira/browse/HADOOP-11484
>>
>> I have submitted the patch and it builds OK and passes all the core tests.
>>
>
> Hi Colin,
>
> Thanks for pushing this patch. Steve Loughran raised the issue in the card
> that although this patch fixes the ARM issue, it does nothing for other
> archs.
>
> I would be happy to prepare a patch which makes it downgrade to C code on
> other CPU families, if this would be useful.
>
> The general format would be
>
> #ifdef __aarch64__
> __asm__("ARM asm")
> #elif defined(??X86??)
> __asm__("x86 asm")
> #else
> C implementation
> #endif
>
> My question is what to put for the defined(??X86??).
>
> According to the following page
>
> http://nadeausoftware.com/articles/2012/02/c_c_tip_how_detect_processor_type_using_compiler_predefined_macros
>
> the only way to fully detect all x86 variants is to write
>
> #if defined(__x86_64__) || defined(_M_X64) || defined(__i386) || defined(_M_IX86)
>
> which will detect all variants of 32-bit and 64-bit x86 across gcc and Windows.
>
> Interestingly, the bswap64 inline function in primitives.h has the following
>
> #ifdef __X64
> __asm__("rev ....");
> #else
> C implementation
> #endif
>
> However, if I compile Hadoop on my 64-bit Red Hat Enterprise Linux system, it
> actually compiles the C implementation (I have verified this by putting a
> #error at the start of the C implementation). This is because the correct
> macro to detect 64-bit x86 on gcc is __x86_64__. I had also thought that the
> macro for Windows was _M_X64, not __X64, but maybe __X64 works just as well
> on Windows? Perhaps someone with access to a Windows development platform
> could do some tests and tell us which macros actually work.
>
> Another question is whether we actually care about 32-bit platforms, or
> whether they can all just downgrade to C code. Does anyone actually build
> Hadoop on a 32-bit platform?
>
> Another thing to be aware of is that there are endian dependencies in
> primitives.h. For example, in fmemcmp(), just a bit further down, is the line
>
> return (int64_t)bswap(*(uint32_t*)src) -
>     (int64_t)bswap(*(uint32_t*)dest);
>
> This is little-endian dependent, so it will work on the likes of x86 and ARM
> but will fail on SPARC. Note, I haven't trawled looking for endian
> dependencies, but this was one I just spotted while looking at the aarch64
> non-compilation issue.
>
> All the best,
> Ed.
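
For anyone following along, here is a rough sketch of the dispatch Ed describes, assuming gcc-style inline asm and the x86 macros from the article above. The function name and the exact asm are illustrative only, not the code that is actually in primitives.h:

#include <stdint.h>

/* Illustrative sketch only -- not the actual primitives.h code. */
static inline uint64_t bswap64_sketch(uint64_t v) {
#if defined(__aarch64__)
  /* AArch64: byte-reverse a 64-bit register */
  __asm__("rev %0, %1" : "=r"(v) : "r"(v));
  return v;
#elif defined(__x86_64__)
  /* 64-bit x86 with gcc/clang: bswap the register in place */
  __asm__("bswapq %0" : "+r"(v));
  return v;
#else
  /* Portable C fallback for everything else (32-bit x86, SPARC, MSVC, ...);
   * MSVC could instead use _byteswap_uint64 from <stdlib.h>. */
  return ((v & 0x00000000000000ffULL) << 56) |
         ((v & 0x000000000000ff00ULL) << 40) |
         ((v & 0x0000000000ff0000ULL) << 24) |
         ((v & 0x00000000ff000000ULL) << 8)  |
         ((v & 0x000000ff00000000ULL) >> 8)  |
         ((v & 0x0000ff0000000000ULL) >> 24) |
         ((v & 0x00ff000000000000ULL) >> 40) |
         ((v & 0xff00000000000000ULL) >> 56);
#endif
}

On gcc and clang, __builtin_bswap64() would also cover all of these cases and usually compiles down to the same single instruction, which may be simpler than maintaining per-arch asm.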