[ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271904#comment-13271904 ]

Hadoop QA commented on HADOOP-8368:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12526229/HADOOP-8368.001.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    -1 javac.  The applied patch generated 1937 javac compiler warnings (more than the trunk's current 1934 warnings).

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed unit tests in hadoop-common-project/hadoop-common.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/972//testReport/
Javac warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/972//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/972//console

This message is automatically generated.
                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-8368.001.patch
>
>
> It would be good to use cmake rather than autotools to build the native 
> (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on 
> different operating systems.  It would be extremely difficult, and perhaps 
> impossible, to use autotools under Windows.  Even if it were possible, it 
> might require horrible workarounds like installing Cygwin.  Even on Linux 
> distributions like Ubuntu 12.04 there are major build issues, because /bin/sh 
> is the Dash shell there, whereas it is the Bash shell on most other 
> distributions.  It is currently impossible to build the native code under 
> Ubuntu 12.04 because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use 
> shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: 
> cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
> "path" via package "Autom4te..." are common error messages.  In order to even 
> start debugging automake problems you need to learn shell, m4, sed, and the a 
> bunch of other things.  With CMake, all you have to learn is the syntax of 
> CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required 
> libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For 
> example, the version installed under openSUSE defaults to putting libraries 
> in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
> to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
> build is currently broken when using openSUSE.)  This is another source of 
> build failures and complexity.  If things go wrong, you will often get an 
> error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify, via cmake_minimum_required, the minimum version 
> of CMake that a particular CMakeLists.txt will accept (see the sketches after 
> this quoted description).  In addition, CMake maintains strict backwards 
> compatibility between different versions.  This prevents build bugs due to 
> version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
> build time.
> For all these reasons, I think we should switch to CMake for compiling native 
> (C/C++) code in Hadoop.
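
On point 2 above: a minimal CMakeLists.txt for a small native library might look
roughly like the sketch below.  This is an illustrative example only, not taken
from the attached HADOOP-8368.001.patch; the project, target, and source names
are made up, and the zlib check merely stands in for "making sure that required
libraries are installed."

    # Illustrative CMakeLists.txt (hypothetical names, not from the patch).
    cmake_minimum_required(VERSION 2.6)
    project(example-native C)

    # Fail configuration early if a required library is missing; this is the
    # CMake counterpart of an autoconf AC_CHECK_LIB test.
    find_library(ZLIB_LIBRARY NAMES z)
    if(NOT ZLIB_LIBRARY)
        message(FATAL_ERROR "zlib is required but was not found")
    endif()

    include_directories(${CMAKE_CURRENT_SOURCE_DIR}/src)

    # Build a shared library from a couple of C source files and link it
    # against the library found above.
    add_library(example SHARED src/example.c src/example_util.c)
    target_link_libraries(example ${ZLIB_LIBRARY})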

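On point 3 above: the version pinning mentioned there is a single line at the
top of CMakeLists.txt.  Again a hedged sketch; the version number is chosen only
as an example.

    # Refuse to configure with a CMake older than the stated version, failing
    # immediately with a clear message instead of producing confusing
    # version-skew errors later in the build.
    cmake_minimum_required(VERSION 2.6 FATAL_ERROR)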