Hey Trevor,

Thanks for all the info. I took a quick look at HADOOP-7276 and
HDFS-1920; I haven't had a chance to do a full review yet, but they
don't look like they'll be a burden, and if they get Hadoop running on
ARM, that's great!

Thanks,
Eli

On Fri, May 20, 2011 at 4:27 PM, Trevor Robinson <tre...@scurrilous.com> wrote:
> Hi Eli,
>
> On Thu, May 19, 2011 at 1:39 PM, Eli Collins <e...@cloudera.com> wrote:
>> Thanks for contributing. Supporting ARM on Hadoop will require a
>> number of different changes, right? E.g., given that Hadoop currently
>> depends on some Sun-specific classes and requires a Sun-compatible
>> JVM, you'll have to work around this dependency somehow; there isn't a
>> Sun JVM for ARM, right?
>
> Actually, there is a Sun JVM for ARM, and it works quite well:
>
> http://www.oracle.com/technetwork/java/embedded/downloads/index.html
>
> Currently, it's just a JRE, so you have to use another JDK for javac,
> etc., but I'm optimistic that we'll see a Sun Java SE JDK for ARM
> servers one of these days, given all the ARM server activity from
> Calxeda [http://www.theregister.co.uk/2011/03/14/calxeda_arm_server/],
> Marvell, and NVIDIA
> [http://www.channelregister.co.uk/2011/01/05/nvidia_arm_pc_server_chip/].
>
> With the patches I submitted, Hadoop builds completely and nearly all
> of the Common and HDFS unit tests pass with OpenJDK on ARM. (Some of
> the Map/Reduce unit tests crash due to a bug in the OpenJDK build I'm
> using.) I need to re-run the unit tests with the Sun
> JRE and see if they pass; other tests/benchmarks have run much faster
> and more reliably with the Sun JRE, so I anticipate better results.
> I've run tests like TestDFSIO with the Sun JRE and have had no
> problems.
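>
> (In case it's useful for anyone reproducing this, a typical TestDFSIO
> invocation looks something like the following; the exact test jar
> name depends on your build:
>
>   bin/hadoop jar build/hadoop-*-test.jar TestDFSIO -write \
>       -nrFiles 10 -fileSize 100
>   bin/hadoop jar build/hadoop-*-test.jar TestDFSIO -read \
>       -nrFiles 10 -fileSize 100
>
> where the -read pass reads back the files created by the -write
> pass.)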
>
>> If there's a handful of additional changes then let's make an umbrella
>> jira for Hadoop ARM support and make the issues you've already filed
>> sub-tasks. You can ping me off-line on how to do that if you want.
>> Supporting non-x86 processors and non-gcc compilers is an additional
>> maintenance burden on the project, so it would be helpful to have an
>> end-game figured out so these patches don't bitrot in the meantime.
>
> I really don't anticipate any additional changes at this point. No
> Java or C++ code changes have been necessary; it's simply a matter of
> removing -m32 from CFLAGS/LDFLAGS and adding ARM to the list of
> processors in apsupport.m4 (which contains lots of other unsupported
> processors anyway). And just to be clear, pretty much everyone uses
> gcc for
> compilation on ARM, so supporting another compiler is unnecessary for
> this.
>
> I certainly don't want to increase the maintenance burden at this point,
> especially given that data center-grade ARM servers are still in the
> prototype stage. OTOH, these changes seem pretty trivial to me, and
> allow other developers (particularly those evaluating ARM and those
> involved in the Ubuntu ARM Server 11.10 release this fall:
> https://blueprints.launchpad.net/ubuntu/+spec/server-o-arm-server) to
> get Hadoop up and running without having to patch the build.
>
> I'll follow up offline though, so I can better understand any concerns
> you may still have.
>
> Thanks,
> Trevor
>
>> On Tue, May 10, 2011 at 5:13 PM, Trevor Robinson <tre...@scurrilous.com> wrote:
>>> Is the native build failing on ARM (where gcc doesn't support -m32) a
>>> known issue, and is there a workaround or fix pending?
>>>
>>> $ ant -Dcompile.native=true
>>> ...
>>>      [exec] make  all-am
>>>      [exec] make[1]: Entering directory
>>> `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
>>>      [exec] /bin/bash ./libtool  --tag=CC   --mode=compile gcc
>>> -DHAVE_CONFIG_H -I. -I/home/trobinson/dev/hadoop-common/src/native
>>> -I/usr/lib/jvm/java-6-openjdk/include
>>> -I/usr/lib/jvm/java-6-openjdk/include/linux
>>> -I/home/trobinson/dev/hadoop-common/src/native/src
>>> -Isrc/org/apache/hadoop/io/compress/zlib
>>> -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
>>> -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
>>> .deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f
>>> 'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo
>>> '/home/trobinson/dev/hadoop-common/src/native/'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
>>>      [exec] libtool: compile:  gcc -DHAVE_CONFIG_H -I.
>>> -I/home/trobinson/dev/hadoop-common/src/native
>>> -I/usr/lib/jvm/java-6-openjdk/include
>>> -I/usr/lib/jvm/java-6-openjdk/include/linux
>>> -I/home/trobinson/dev/hadoop-common/src/native/src
>>> -Isrc/org/apache/hadoop/io/compress/zlib
>>> -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
>>> -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
>>> .deps/ZlibCompressor.Tpo -c
>>> /home/trobinson/dev/hadoop-common/src/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
>>>  -fPIC -DPIC -o .libs/ZlibCompressor.o
>>>      [exec] make[1]: Leaving directory
>>> `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
>>>      [exec] cc1: error: unrecognized command line option "-m32"
>>>      [exec] make[1]: *** [ZlibCompressor.lo] Error 1
>>>      [exec] make: *** [all] Error 2
>>>
>>> The closest issue I can find is
>>> https://issues.apache.org/jira/browse/HADOOP-6258 (Native compilation
>>> assumes gcc), along with other issues regarding where and how to
>>> specify -m32/64. However, there doesn't seem to be a specific issue
>>> covering build failure on systems using gcc where the gcc target does
>>> not support -m32/64 (such as ARM).
>>>
>>> I've attached a patch that disables specifying -m$(JVM_DATA_MODEL)
>>> when $host_cpu starts with "arm". (For instance, host_cpu = armv7l for
>>> my system.) To any maintainers on this list, please let me know if
>>> you'd like me to open a new issue and/or attach this patch to an
>>> issue.
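>>>
>>> (To sketch the idea, not the exact patch: the variable names below
>>> follow apsupport.m4, but treat this as an approximation.
>>>
>>>   case "$host_cpu" in
>>>     arm*)
>>>       # ARM gcc rejects -m32/-m64, so skip the data model flag
>>>       ;;
>>>     *)
>>>       CFLAGS="$CFLAGS -m${JVM_DATA_MODEL}"
>>>       LDFLAGS="$LDFLAGS -m${JVM_DATA_MODEL}"
>>>       ;;
>>>   esac
>>>
>>> The attached patch is the authoritative version.)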
>>>
>>> Thanks,
>>> Trevor
>>>
>>
>
