[jira] [Commented] (HADOOP-8466) hadoop-client POM incorrectly excludes avro

2012-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287890#comment-13287890
 ] 

Hudson commented on HADOOP-8466:


Integrated in Hadoop-Hdfs-trunk #1064 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1064/])
HADOOP-8466. hadoop-client POM incorrectly excludes avro. (bmahe via tucu) 
(Revision 1345358)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345358
Files : 
* /hadoop/common/trunk/hadoop-client/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 hadoop-client POM incorrectly excludes avro
 ---

 Key: HADOOP-8466
 URL: https://issues.apache.org/jira/browse/HADOOP-8466
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
Reporter: Bruno Mahé
Assignee: Bruno Mahé
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8466.patch


 Avro is used during serializer initialization, so it must be present in the 
 hadoop-client dependency set.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8460) Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR

2012-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287893#comment-13287893
 ] 

Hudson commented on HADOOP-8460:


Integrated in Hadoop-Hdfs-trunk #1064 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1064/])
HADOOP-8460. Document proper setting of HADOOP_PID_DIR and 
HADOOP_SECURE_DN_PID_DIR (bobby) (Revision 1345304)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345304
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ClusterSetup.apt.vm


 Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR
 --

 Key: HADOOP-8460
 URL: https://issues.apache.org/jira/browse/HADOOP-8460
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 1.2.0, 2.0.1-alpha, 3.0.0

 Attachments: HADOOP-8460-branch-1.txt, HADOOP-8460-branch-1.txt, 
 HADOOP-8460-trunk.txt, HADOOP-8460-trunk.txt


 We should document that, in a properly set up cluster, HADOOP_PID_DIR and 
 HADOOP_SECURE_DN_PID_DIR should not point to /tmp, but to a directory that 
 normal users do not have access to.
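The recommendation above can be expressed as a short hadoop-env.sh fragment. This is an illustrative sketch only; the /var/run/hadoop path is an assumption for the example, not a value mandated by the issue:

```shell
# Hypothetical hadoop-env.sh fragment: point the pid directories at a
# root-owned location rather than the world-writable /tmp default, so
# normal users cannot remove or pre-create daemon pid files.
export HADOOP_PID_DIR=/var/run/hadoop
export HADOOP_SECURE_DN_PID_DIR=/var/run/hadoop
```

The directory itself would be created root-owned ahead of time, e.g. with something like `install -d -o root -m 0755 /var/run/hadoop` (again, paths and mode are illustrative).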





[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287889#comment-13287889
 ] 

Hudson commented on HADOOP-8368:


Integrated in Hadoop-Hdfs-trunk #1064 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1064/])
HADOOP-8368. Use CMake rather than autotools to build native code (abe 
via tucu) (Revision 1345421)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345421
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/Makefile.in
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure.ac
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h.in
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* 

[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287943#comment-13287943
 ] 

Hudson commented on HADOOP-8368:


Integrated in Hadoop-Mapreduce-trunk #1098 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1098/])
HADOOP-8368. Use CMake rather than autotools to build native code (abe 
via tucu) (Revision 1345421)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345421
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/.autom4te.cfg
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/Makefile.am
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/acinclude.m4
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/configure.ac
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/lib/Makefile.am
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Compressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/lz4/Lz4Decompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyCompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/snappy/org_apache_hadoop_io_compress_snappy.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/Makefile.am
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/ZlibDecompressor.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/compress/zlib/org_apache_hadoop_io_compress_zlib.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org_apache_hadoop.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/Makefile.in
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/configure.ac
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.c
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/system/c++/runAs/runAs.h.in
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/Makefile.am
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/acinclude.m4
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/configure.ac
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/CMakeLists.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/Makefile.am
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/fuse-dfs/src/fuse_dfs.h
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/Makefile.am
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/configure.ac
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apfunctions.m4
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apjava.m4
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/m4/apsupport.m4
* /hadoop/common/trunk/hadoop-hdfs-project/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* 

[jira] [Commented] (HADOOP-8460) Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR

2012-06-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287947#comment-13287947
 ] 

Hudson commented on HADOOP-8460:


Integrated in Hadoop-Mapreduce-trunk #1098 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1098/])
HADOOP-8460. Document proper setting of HADOOP_PID_DIR and 
HADOOP_SECURE_DN_PID_DIR (bobby) (Revision 1345304)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345304
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ClusterSetup.apt.vm


 Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR
 --

 Key: HADOOP-8460
 URL: https://issues.apache.org/jira/browse/HADOOP-8460
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 1.2.0, 2.0.1-alpha, 3.0.0

 Attachments: HADOOP-8460-branch-1.txt, HADOOP-8460-branch-1.txt, 
 HADOOP-8460-trunk.txt, HADOOP-8460-trunk.txt


 We should document that, in a properly set up cluster, HADOOP_PID_DIR and 
 HADOOP_SECURE_DN_PID_DIR should not point to /tmp, but to a directory that 
 normal users do not have access to.





[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-06-02 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8368:
-

Attachment: HADOOP-8368-b2.001.trimmed.patch
HADOOP-8368-b2.001.rm.patch
HADOOP-8368-b2.001.patch

uploading branch-2 version (the merge conflicts seem to have been trivial)

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, 
 HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368.001.patch, 
 HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, 
 HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, 
 HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
 HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, 
 HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, 
 HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify, via cmake_minimum_required, the minimum CMake 
 version that a particular CMakeLists.txt will accept.  In addition, CMake 
 maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.
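The points above can be illustrated with a minimal CMakeLists.txt. This is a hedged sketch, not Hadoop's actual build file; the project name, zlib check, and source file are assumptions for the example:

```cmake
# Hypothetical minimal CMakeLists.txt illustrating the rationale above.
# Pin a minimum CMake version so version skew fails fast (point 3).
cmake_minimum_required(VERSION 2.8)
project(native-sketch C)

# A library check replaces autoconf's shell/m4 macros (points 1 and 2).
find_package(ZLIB REQUIRED)

# Build a shared library from one illustrative source file.
add_library(nativesketch SHARED sketch.c)
target_link_libraries(nativesketch ${ZLIB_LIBRARIES})
```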





[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-06-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13288033#comment-13288033
 ] 

Hadoop QA commented on HADOOP-8368:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12530662/HADOOP-8368-b2.001.trimmed.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1072//console

This message is automatically generated.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, 
 HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368.001.patch, 
 HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, 
 HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, 
 HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
 HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, 
 HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, 
 HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify, via cmake_minimum_required, the minimum CMake 
 version that a particular CMakeLists.txt will accept.  In addition, CMake 
 maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.





[jira] [Updated] (HADOOP-8450) Remove src/test/system

2012-06-02 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8450:


 Component/s: test
Target Version/s: 2.0.1-alpha
Assignee: Eli Collins
 Summary: Remove src/test/system  (was: tidy up 
src/test/system/c++/runAs)

Agree. Doesn't build or run on trunk or branch-2. Let's remove it.

 Remove src/test/system
 --

 Key: HADOOP-8450
 URL: https://issues.apache.org/jira/browse/HADOOP-8450
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Eli Collins
Priority: Trivial
 Attachments: hadoop-8450.txt


 runAs is a binary in the hadoop-common project.  It seems to allow you to run 
 a hadoop daemon as another user.  However, it requires setuid privileges in 
 order to do this.
 * Do we still need this binary?  Who is using it and are there better 
 solutions out there?  My understanding is that setuid binaries are something 
 we generally try to avoid.
 * If we do need it, can we add some documentation about it somewhere?
 * Should this binary be in the src/test directory?  It doesn't seem to be 
 testing anything.
 Hopefully that covers everything...





[jira] [Updated] (HADOOP-8430) Backport new FileSystem methods introduced by HADOOP-8014 to branch-1

2012-06-02 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8430:


Attachment: hadoop-8430-1.txt

Patch attached. test-patch output coming.

 Backport new FileSystem methods introduced by HADOOP-8014 to branch-1 
 --

 Key: HADOOP-8430
 URL: https://issues.apache.org/jira/browse/HADOOP-8430
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hadoop-8430-1.txt


 Per HADOOP-8422 let's backport the new FileSystem methods from HADOOP-8014 to 
 branch-1 so users can transition over in Hadoop 1.x releases, which helps 
 upstream projects like HBase work against federation (see HBASE-6067). 





[jira] [Commented] (HADOOP-8430) Backport new FileSystem methods introduced by HADOOP-8014 to branch-1

2012-06-02 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13288044#comment-13288044
 ] 

Eli Collins commented on HADOOP-8430:
-

Forgot to mention, this patch both introduces the new methods and per 
HADOOP-8422 deprecates the ones that don't take a path to help people 
transition from Hadoop 1.x.
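The deprecation pattern described above can be sketched roughly as follows. This is a hypothetical illustration only: the class, method signature (String in place of Hadoop's Path type, to stay self-contained), and default value are assumptions, not the actual FileSystem code:

```java
// Hypothetical sketch of the backport pattern: keep the old no-path method
// but deprecate it, and add a path-taking overload that delegates to it.
public class FileSystemSketch {
    /** Old branch-1 style API: no path argument. Deprecated per HADOOP-8422. */
    @Deprecated
    public long getDefaultBlockSize() {
        return 64L * 1024 * 1024; // illustrative default block size
    }

    /** New API: takes a path, so per-filesystem answers can work under federation. */
    public long getDefaultBlockSize(String path) {
        // For now the new method just delegates; behavior is unchanged.
        return getDefaultBlockSize();
    }
}
```

Callers migrate to the path-taking overload now, so later releases can drop the deprecated form without breaking them.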

 Backport new FileSystem methods introduced by HADOOP-8014 to branch-1 
 --

 Key: HADOOP-8430
 URL: https://issues.apache.org/jira/browse/HADOOP-8430
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hadoop-8430-1.txt


 Per HADOOP-8422 let's backport the new FileSystem methods from HADOOP-8014 to 
 branch-1 so users can transition over in Hadoop 1.x releases, which helps 
 upstream projects like HBase work against federation (see HBASE-6067). 





[jira] [Updated] (HADOOP-8450) Remove src/test/system

2012-06-02 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8450:


Status: Patch Available  (was: Open)

 Remove src/test/system
 --

 Key: HADOOP-8450
 URL: https://issues.apache.org/jira/browse/HADOOP-8450
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Eli Collins
Priority: Trivial
 Attachments: hadoop-8450.txt


 runAs is a binary in the hadoop-common project.  It seems to allow you to run 
 a hadoop daemon as another user.  However, it requires setuid privileges in 
 order to do this.
 * Do we still need this binary?  Who is using it and are there better 
 solutions out there?  My understanding is that setuid binaries are something 
 we generally try to avoid.
 * If we do need it, can we add some documentation about it somewhere?
 * Should this binary be in the src/test directory?  It doesn't seem to be 
 testing anything.
 Hopefully that covers everything...





[jira] [Commented] (HADOOP-8430) Backport new FileSystem methods introduced by HADOOP-8014 to branch-1

2012-06-02 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13288046#comment-13288046
 ] 

Eli Collins commented on HADOOP-8430:
-

{noformat}
 [exec] 
 [exec] -1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] -1 tests included.  The patch doesn't appear to include any new 
or modified tests.
 [exec] Please justify why no tests are needed for 
this patch.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] -1 findbugs.  The patch appears to introduce 8 new Findbugs 
(version 1.3.9) warnings.
 [exec] 
 [exec] 
{noformat}

Findbugs warnings are HADOOP-7847. The new method just calls the old one so no 
new test is necessary.

 Backport new FileSystem methods introduced by HADOOP-8014 to branch-1 
 --

 Key: HADOOP-8430
 URL: https://issues.apache.org/jira/browse/HADOOP-8430
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hadoop-8430-1.txt


 Per HADOOP-8422 let's backport the new FileSystem methods from HADOOP-8014 to 
 branch-1 so users can transition over in Hadoop 1.x releases, which helps 
 upstream projects like HBase work against federation (see HBASE-6067). 
