[jira] [Commented] (HADOOP-8480) The native build should honor -DskipTests
[ https://issues.apache.org/jira/browse/HADOOP-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426344#comment-13426344 ] Alejandro Abdelnur commented on HADOOP-8480: I've applied the patch and run *mvn clean test -DskipTests -Pnative* and HDFS native testcases still run. I've tested this on Ubuntu 10.04. The native build should honor -DskipTests - Key: HADOOP-8480 URL: https://issues.apache.org/jira/browse/HADOOP-8480 Project: Hadoop Common Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Trivial Attachments: HADOOP-8480.001.patch Currently, the native build does not honor -DskipTests. The native unit tests will be run even when you specify: {code} mvn compile -Pnative -DskipTests -X -e {code} This seems inconsistent; shouldn't we fix this to work like the other tests do? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8620) Add -Drequire.fuse and -Drequire.snappy
[ https://issues.apache.org/jira/browse/HADOOP-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426349#comment-13426349 ] Alejandro Abdelnur commented on HADOOP-8620: I've tried *mvn clean test -DskipTests -Drequire.snappy=true -Dsnappy.prefix=/home/tucu/src/snappy-1.0.3/build/usr/local -Pnative* on Ubuntu 10.04 and the build is failing even though I have snappy in the specified directory: {code} /home/tucu/src/snappy-1.0.3/build/usr/local/ /home/tucu/src/snappy-1.0.3/build/usr/local/share /home/tucu/src/snappy-1.0.3/build/usr/local/share/doc /home/tucu/src/snappy-1.0.3/build/usr/local/share/doc/snappy /home/tucu/src/snappy-1.0.3/build/usr/local/share/doc/snappy/INSTALL /home/tucu/src/snappy-1.0.3/build/usr/local/share/doc/snappy/ChangeLog /home/tucu/src/snappy-1.0.3/build/usr/local/share/doc/snappy/COPYING /home/tucu/src/snappy-1.0.3/build/usr/local/share/doc/snappy/README /home/tucu/src/snappy-1.0.3/build/usr/local/share/doc/snappy/format_description.txt /home/tucu/src/snappy-1.0.3/build/usr/local/share/doc/snappy/NEWS /home/tucu/src/snappy-1.0.3/build/usr/local/include /home/tucu/src/snappy-1.0.3/build/usr/local/include/snappy.h /home/tucu/src/snappy-1.0.3/build/usr/local/include/snappy-stubs-public.h /home/tucu/src/snappy-1.0.3/build/usr/local/include/snappy-c.h /home/tucu/src/snappy-1.0.3/build/usr/local/include/snappy-sinksource.h /home/tucu/src/snappy-1.0.3/build/usr/local/lib /home/tucu/src/snappy-1.0.3/build/usr/local/lib/libsnappy.a /home/tucu/src/snappy-1.0.3/build/usr/local/lib/libsnappy.so /home/tucu/src/snappy-1.0.3/build/usr/local/lib/libsnappy.so.1 /home/tucu/src/snappy-1.0.3/build/usr/local/lib/libsnappy.la /home/tucu/src/snappy-1.0.3/build/usr/local/lib/libsnappy.so.1.1.1 {code} Add -Drequire.fuse and -Drequire.snappy --- Key: HADOOP-8620 URL: https://issues.apache.org/jira/browse/HADOOP-8620 Project: Hadoop Common Issue Type: Improvement Components: build Affects Versions: 2.0.1-alpha Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HADOOP-8620.002.patch We have some optional build components which don't get built if they are not installed on the build machine. One of those components is fuse_dfs. Another is the snappy support in libhadoop.so. Unfortunately, since these components are silently ignored if they are not present, it's easy to unintentionally create an incomplete build. We should add two flags, -Drequire.fuse and -Drequire.snappy, that do exactly what the names suggest. This will make the build more repeatable. Those who want a complete build can specify these system properties to maven. If the build cannot be created as requested, it will be a hard error.
[jira] [Commented] (HADOOP-8370) Native build failure: javah: class file for org.apache.hadoop.classification.InterfaceAudience not found
[ https://issues.apache.org/jira/browse/HADOOP-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426571#comment-13426571 ] Hudson commented on HADOOP-8370: Integrated in Hadoop-Hdfs-0.23-Build #331 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/331/]) merge -r 1367764:1367765 from branch-2. FIXES: HADOOP-8370 (Revision 1367766) Result = SUCCESS tgraves : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367766 Files : * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/pom.xml Native build failure: javah: class file for org.apache.hadoop.classification.InterfaceAudience not found Key: HADOOP-8370 URL: https://issues.apache.org/jira/browse/HADOOP-8370 Project: Hadoop Common Issue Type: Bug Components: native Affects Versions: 0.23.1 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600) Maven home: /usr/local/apache-maven-3.0.4 Java version: 1.7.0_04, vendor: Oracle Corporation Java home: /usr/lib/jvm/jdk1.7.0_04/jre Default locale: en_US, platform encoding: ISO-8859-1 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix Reporter: Trevor Robinson Assignee: Trevor Robinson Fix For: 0.23.3, 3.0.0, 2.2.0-alpha Attachments: HADOOP-8370.patch [INFO] --- native-maven-plugin:1.0-alpha-7:javah (default) @ hadoop-common --- [INFO] /bin/sh -c cd /build/hadoop-common/hadoop-common-project/hadoop-common /usr/lib/jvm/jdk1.7.0_02/bin/javah -d /build/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah -classpath ... org.apache.hadoop.io.compress.zlib.ZlibDecompressor org.apache.hadoop.security.JniBasedUnixGroupsMapping org.apache.hadoop.io.nativeio.NativeIO org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping org.apache.hadoop.io.compress.snappy.SnappyCompressor org.apache.hadoop.io.compress.snappy.SnappyDecompressor org.apache.hadoop.io.compress.lz4.Lz4Compressor org.apache.hadoop.io.compress.lz4.Lz4Decompressor org.apache.hadoop.util.NativeCrc32 Cannot find annotation method 'value()' in type 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate': class file for org.apache.hadoop.classification.InterfaceAudience not found Cannot find annotation method 'value()' in type 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate' Error: cannot access org.apache.hadoop.classification.InterfaceStability class file for org.apache.hadoop.classification.InterfaceStability not found The fix for me was to change the scope of hadoop-annotations from provided to compile in pom.xml: {code} <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-annotations</artifactId> <scope>compile</scope> </dependency> {code} For some reason, it was the only dependency with scope provided.
[jira] [Commented] (HADOOP-8637) FilterFileSystem#setWriteChecksum is broken
[ https://issues.apache.org/jira/browse/HADOOP-8637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426568#comment-13426568 ] Hudson commented on HADOOP-8637: Integrated in Hadoop-Hdfs-0.23-Build #331 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/331/]) svn merge -c 1367702 FIXES: HADOOP-8637. FilterFileSystem#setWriteChecksum is broken (daryn via bobby) (Revision 1367705) Result = SUCCESS bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367705 Files : * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java * /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java FilterFileSystem#setWriteChecksum is broken --- Key: HADOOP-8637 URL: https://issues.apache.org/jira/browse/HADOOP-8637 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Priority: Critical Fix For: 0.23.3, 3.0.0, 2.2.0-alpha Attachments: HADOOP-8637.patch {{FilterFileSystem#setWriteChecksum}} is being passed through as {{fs.setVERIFYChecksum}}. Example of impact is checksums cannot be disabled for LFS if a filter fs (like {{ChRootedFileSystem}}) is applied.
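The wrong-delegation bug described above can be sketched in miniature. This is a standalone illustration with hypothetical class names (BaseFs, BuggyFilterFs, FixedFilterFs); it mirrors the shape of the FileSystem setters but is not the actual Hadoop source.

```java
// Hedged sketch of the delegation bug in HADOOP-8637: a filter filesystem
// whose setWriteChecksum delegates to the wrong setter on the wrapped fs.
class BaseFs {
    boolean verifyChecksum = true;
    boolean writeChecksum = true;
    void setVerifyChecksum(boolean v) { verifyChecksum = v; }
    void setWriteChecksum(boolean v) { writeChecksum = v; }
}

// The bug: setWriteChecksum delegates to the *verify* setter, so disabling
// write checksums on the wrapper never reaches the wrapped filesystem.
class BuggyFilterFs extends BaseFs {
    final BaseFs fs;
    BuggyFilterFs(BaseFs fs) { this.fs = fs; }
    @Override void setVerifyChecksum(boolean v) { fs.setVerifyChecksum(v); }
    @Override void setWriteChecksum(boolean v) { fs.setVerifyChecksum(v); }
}

// The fix: delegate each setter to its matching counterpart.
class FixedFilterFs extends BaseFs {
    final BaseFs fs;
    FixedFilterFs(BaseFs fs) { this.fs = fs; }
    @Override void setVerifyChecksum(boolean v) { fs.setVerifyChecksum(v); }
    @Override void setWriteChecksum(boolean v) { fs.setWriteChecksum(v); }
}
```

With the buggy wrapper, disabling write checksums flips the verify flag instead, which is exactly why checksums could not be disabled through a filter fs such as ChRootedFileSystem.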
[jira] [Commented] (HADOOP-8370) Native build failure: javah: class file for org.apache.hadoop.classification.InterfaceAudience not found
[ https://issues.apache.org/jira/browse/HADOOP-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426586#comment-13426586 ] Hudson commented on HADOOP-8370: Integrated in Hadoop-Hdfs-trunk #1122 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1122/]) HADOOP-8370. Native build failure: javah: class file for org.apache.hadoop.classification.InterfaceAudience not found (Trevor Robinson via tgraves) (Revision 1367764) Result = FAILURE tgraves : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367764 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml Native build failure: javah: class file for org.apache.hadoop.classification.InterfaceAudience not found Key: HADOOP-8370 URL: https://issues.apache.org/jira/browse/HADOOP-8370 Project: Hadoop Common Issue Type: Bug Components: native Affects Versions: 0.23.1 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600) Maven home: /usr/local/apache-maven-3.0.4 Java version: 1.7.0_04, vendor: Oracle Corporation Java home: /usr/lib/jvm/jdk1.7.0_04/jre Default locale: en_US, platform encoding: ISO-8859-1 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix Reporter: Trevor Robinson Assignee: Trevor Robinson Fix For: 0.23.3, 3.0.0, 2.2.0-alpha Attachments: HADOOP-8370.patch [INFO] --- native-maven-plugin:1.0-alpha-7:javah (default) @ hadoop-common --- [INFO] /bin/sh -c cd /build/hadoop-common/hadoop-common-project/hadoop-common /usr/lib/jvm/jdk1.7.0_02/bin/javah -d /build/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah -classpath ... org.apache.hadoop.io.compress.zlib.ZlibDecompressor org.apache.hadoop.security.JniBasedUnixGroupsMapping org.apache.hadoop.io.nativeio.NativeIO org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping org.apache.hadoop.io.compress.snappy.SnappyCompressor org.apache.hadoop.io.compress.snappy.SnappyDecompressor org.apache.hadoop.io.compress.lz4.Lz4Compressor org.apache.hadoop.io.compress.lz4.Lz4Decompressor org.apache.hadoop.util.NativeCrc32 Cannot find annotation method 'value()' in type 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate': class file for org.apache.hadoop.classification.InterfaceAudience not found Cannot find annotation method 'value()' in type 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate' Error: cannot access org.apache.hadoop.classification.InterfaceStability class file for org.apache.hadoop.classification.InterfaceStability not found The fix for me was to change the scope of hadoop-annotations from provided to compile in pom.xml: {code} <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-annotations</artifactId> <scope>compile</scope> </dependency> {code} For some reason, it was the only dependency with scope provided.
[jira] [Commented] (HADOOP-8633) Interrupted FsShell copies may leave tmp files
[ https://issues.apache.org/jira/browse/HADOOP-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426601#comment-13426601 ] Daryn Sharp commented on HADOOP-8633: - The {{TargetFileSystem}} is just a shim over a real filesystem that registers and cancels temp paths for deletion. The shim simplifies all the code performing the copy and helps ensure the temp files are cancelled and/or deleted immediately. I originally did what you suggest and I wound up with multiple nested try blocks and conditions that made the code (imho) harder to read, understand, and test. It's true that a call to {{System.exit}} won't clean up the filesystem, but it's only called as the last line in {{main}}. Calling it in other places would break functionality and be a bug. (I did manually run a copy with 100 iterations and pounded on control-c, and no remnants were left.) Interrupted FsShell copies may leave tmp files -- Key: HADOOP-8633 URL: https://issues.apache.org/jira/browse/HADOOP-8633 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Priority: Critical Attachments: HADOOP-8633.patch Interrupting a copy, ex. via SIGINT, may cause tmp files to not be removed. If the user is copying large files then the remnants will eat into the user's quota.
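The register-and-cancel idea behind the shim described above can be sketched as follows. TempPathRegistry and its method names are hypothetical, chosen only to illustrate the pattern; the real patch lives in CommandWithDestination and PathData.

```java
import java.util.HashSet;
import java.util.Set;

// Hedged sketch of the temp-path bookkeeping described in HADOOP-8633:
// register a temp path when a copy starts, cancel the registration once
// the temp file is renamed into place, and delete whatever is still
// registered if the shell is interrupted mid-copy.
class TempPathRegistry {
    private final Set<String> pending = new HashSet<>();

    // Called when a copy begins writing to a "<target>._COPYING_" path.
    void register(String tmpPath) { pending.add(tmpPath); }

    // Called after the temp file is successfully renamed to the target.
    void cancel(String tmpPath) { pending.remove(tmpPath); }

    // Consulted from a shutdown hook: anything still pending was
    // interrupted mid-copy and should be deleted so the remnants do
    // not eat into the user's quota.
    Set<String> leftovers() { return pending; }
}
```

Centralizing this bookkeeping in one shim is what avoids the nested try blocks the comment mentions: every copy path goes through the same register/cancel pair instead of carrying its own cleanup logic.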
[jira] [Updated] (HADOOP-8633) Interrupted FsShell copies may leave tmp files
[ https://issues.apache.org/jira/browse/HADOOP-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thomas Graves updated HADOOP-8633: -- Resolution: Fixed Fix Version/s: 2.2.0-alpha 3.0.0 0.23.3 Status: Resolved (was: Patch Available) I went ahead and committed this. Thanks Daryn, Bobby, and Kihwal! Interrupted FsShell copies may leave tmp files -- Key: HADOOP-8633 URL: https://issues.apache.org/jira/browse/HADOOP-8633 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Priority: Critical Fix For: 0.23.3, 3.0.0, 2.2.0-alpha Attachments: HADOOP-8633.patch Interrupting a copy, ex. via SIGINT, may cause tmp files to not be removed. If the user is copying large files then the remnants will eat into the user's quota.
[jira] [Commented] (HADOOP-8637) FilterFileSystem#setWriteChecksum is broken
[ https://issues.apache.org/jira/browse/HADOOP-8637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426645#comment-13426645 ] Hudson commented on HADOOP-8637: Integrated in Hadoop-Mapreduce-trunk #1154 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1154/]) HADOOP-8637. FilterFileSystem#setWriteChecksum is broken (daryn via bobby) (Revision 1367702) Result = FAILURE bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367702 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java FilterFileSystem#setWriteChecksum is broken --- Key: HADOOP-8637 URL: https://issues.apache.org/jira/browse/HADOOP-8637 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Priority: Critical Fix For: 0.23.3, 3.0.0, 2.2.0-alpha Attachments: HADOOP-8637.patch {{FilterFileSystem#setWriteChecksum}} is being passed through as {{fs.setVERIFYChecksum}}. Example of impact is checksums cannot be disabled for LFS if a filter fs (like {{ChRootedFileSystem}}) is applied.
[jira] [Commented] (HADOOP-8633) Interrupted FsShell copies may leave tmp files
[ https://issues.apache.org/jira/browse/HADOOP-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426652#comment-13426652 ] Hudson commented on HADOOP-8633: Integrated in Hadoop-Hdfs-trunk-Commit #2612 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2612/]) HADOOP-8633. Interrupted FsShell copies may leave tmp files (Daryn Sharp via tgraves) (Revision 1368002) Result = SUCCESS tgraves : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1368002 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCopy.java Interrupted FsShell copies may leave tmp files -- Key: HADOOP-8633 URL: https://issues.apache.org/jira/browse/HADOOP-8633 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Priority: Critical Fix For: 0.23.3, 3.0.0, 2.2.0-alpha Attachments: HADOOP-8633.patch Interrupting a copy, ex. via SIGINT, may cause tmp files to not be removed. If the user is copying large files then the remnants will eat into the user's quota.
[jira] [Commented] (HADOOP-8633) Interrupted FsShell copies may leave tmp files
[ https://issues.apache.org/jira/browse/HADOOP-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426655#comment-13426655 ] Hudson commented on HADOOP-8633: Integrated in Hadoop-Common-trunk-Commit #2547 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2547/]) HADOOP-8633. Interrupted FsShell copies may leave tmp files (Daryn Sharp via tgraves) (Revision 1368002) Result = SUCCESS tgraves : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1368002 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/PathData.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCopy.java Interrupted FsShell copies may leave tmp files -- Key: HADOOP-8633 URL: https://issues.apache.org/jira/browse/HADOOP-8633 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0 Reporter: Daryn Sharp Assignee: Daryn Sharp Priority: Critical Fix For: 0.23.3, 3.0.0, 2.2.0-alpha Attachments: HADOOP-8633.patch Interrupting a copy, ex. via SIGINT, may cause tmp files to not be removed. If the user is copying large files then the remnants will eat into the user's quota.
[jira] [Updated] (HADOOP-8225) DistCp fails when invoked by Oozie
[ https://issues.apache.org/jira/browse/HADOOP-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daryn Sharp updated HADOOP-8225: Attachment: HADOOP-8225.patch I think this one-liner should fix the problem, but it's completely untested. I think the whole token file handling needs to be analyzed and cleaned up in a separate jira. DistCp fails when invoked by Oozie -- Key: HADOOP-8225 URL: https://issues.apache.org/jira/browse/HADOOP-8225 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.1 Reporter: Mithun Radhakrishnan Attachments: HADOOP-8225.patch, HADOOP-8225.patch, HADOOP-8225.patch When DistCp is invoked through a proxy-user (e.g. through Oozie), the delegation-token-store isn't picked up by DistCp correctly. One sees failures such as: ERROR [main] org.apache.hadoop.tools.DistCp: Couldn't complete DistCp operation: java.lang.SecurityException: Intercepted System.exit(-999) at org.apache.oozie.action.hadoop.LauncherSecurityManager.checkExit(LauncherMapper.java:651) at java.lang.Runtime.exit(Runtime.java:88) at java.lang.System.exit(System.java:904) at org.apache.hadoop.tools.DistCp.main(DistCp.java:357) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:394) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:399) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:147) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:142) Looking over the DistCp code, one sees that HADOOP_TOKEN_FILE_LOCATION isn't being copied to mapreduce.job.credentials.binary, in the job-conf. I'll post a patch for this shortly.
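The fix described above (propagating HADOOP_TOKEN_FILE_LOCATION into the job-conf) can be sketched as follows. This is an illustration, not the actual DistCp patch: TokenFilePropagator is a hypothetical name, and a plain Map stands in for Hadoop's Configuration.

```java
import java.util.Map;

// Hedged sketch of the HADOOP-8225 idea: if the launcher (e.g. Oozie)
// exported HADOOP_TOKEN_FILE_LOCATION, copy it into the job configuration
// under mapreduce.job.credentials.binary so the submitted job can find
// the delegation tokens without calling System.exit on failure.
class TokenFilePropagator {
    static final String CONF_KEY = "mapreduce.job.credentials.binary";

    static void propagate(Map<String, String> env, Map<String, String> jobConf) {
        String tokenFile = env.get("HADOOP_TOKEN_FILE_LOCATION");
        // Only fill the key in when the env var is set and the user has
        // not already configured a credentials file explicitly.
        if (tokenFile != null && !jobConf.containsKey(CONF_KEY)) {
            jobConf.put(CONF_KEY, tokenFile);
        }
    }
}
```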
[jira] [Commented] (HADOOP-8225) DistCp fails when invoked by Oozie
[ https://issues.apache.org/jira/browse/HADOOP-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426699#comment-13426699 ] Hadoop QA commented on HADOOP-8225: --- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12538787/HADOOP-8225.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 javadoc. The javadoc tool did not generate any warning messages. +1 eclipse:eclipse. The patch built with eclipse:eclipse. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. +1 core tests. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app. +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1242//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1242//console This message is automatically generated. DistCp fails when invoked by Oozie -- Key: HADOOP-8225 URL: https://issues.apache.org/jira/browse/HADOOP-8225 Project: Hadoop Common Issue Type: Bug Affects Versions: 0.23.1 Reporter: Mithun Radhakrishnan Attachments: HADOOP-8225.patch, HADOOP-8225.patch, HADOOP-8225.patch When DistCp is invoked through a proxy-user (e.g. through Oozie), the delegation-token-store isn't picked up by DistCp correctly. One sees failures such as: ERROR [main] org.apache.hadoop.tools.DistCp: Couldn't complete DistCp operation: java.lang.SecurityException: Intercepted System.exit(-999) at org.apache.oozie.action.hadoop.LauncherSecurityManager.checkExit(LauncherMapper.java:651) at java.lang.Runtime.exit(Runtime.java:88) at java.lang.System.exit(System.java:904) at org.apache.hadoop.tools.DistCp.main(DistCp.java:357) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:394) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:399) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:147) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:396) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:142) Looking over the DistCp code, one sees that HADOOP_TOKEN_FILE_LOCATION isn't being copied to mapreduce.job.credentials.binary, in the job-conf. I'll post a patch for this shortly.
[jira] [Commented] (HADOOP-8480) The native build should honor -DskipTests
[ https://issues.apache.org/jira/browse/HADOOP-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426759#comment-13426759 ] Alejandro Abdelnur commented on HADOOP-8480: After chatting with Colin offline I've tried changing the /bin/sh link from /bin/dash to /bin/bash and it works. Still, we have to get this working on vanilla Ubuntu, as we cannot assume we can tweak /bin/sh on the build machines. The native build should honor -DskipTests - Key: HADOOP-8480 URL: https://issues.apache.org/jira/browse/HADOOP-8480 Project: Hadoop Common Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Trivial Attachments: HADOOP-8480.001.patch Currently, the native build does not honor -DskipTests. The native unit tests will be run even when you specify: {code} mvn compile -Pnative -DskipTests -X -e {code} This seems inconsistent; shouldn't we fix this to work like the other tests do?
[jira] [Updated] (HADOOP-8620) Add -Drequire.fuse and -Drequire.snappy
[ https://issues.apache.org/jira/browse/HADOOP-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HADOOP-8620: - Attachment: HADOOP-8620.003.patch * remove -Dbundle.snappy because we don't support it. This has been obsolete for a long, long time. * remove runas.home. runas was removed by HADOOP-8450, but we forgot to remove runas.home from the pom.xml file. * fix -Dsnappy.prefix, -Dsnappy.lib, -Dsnappy.include. They were broken by HADOOP-8368. Add -Drequire.fuse and -Drequire.snappy --- Key: HADOOP-8620 URL: https://issues.apache.org/jira/browse/HADOOP-8620 Project: Hadoop Common Issue Type: Improvement Components: build Affects Versions: 2.0.1-alpha Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HADOOP-8620.002.patch, HADOOP-8620.003.patch We have some optional build components which don't get built if they are not installed on the build machine. One of those components is fuse_dfs. Another is the snappy support in libhadoop.so. Unfortunately, since these components are silently ignored if they are not present, it's easy to unintentionally create an incomplete build. We should add two flags, -Drequire.fuse and -Drequire.snappy, that do exactly what the names suggest. This will make the build more repeatable. Those who want a complete build can specify these system properties to maven. If the build cannot be created as requested, it will be a hard error.
[jira] [Updated] (HADOOP-8480) The native build should honor -DskipTests
[ https://issues.apache.org/jira/browse/HADOOP-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Patrick McCabe updated HADOOP-8480: - Attachment: HADOOP-8480.002.patch * don't use double equals sign (it only works on bash, not dash). Single equals sign is actually the right thing to use here in both bash and dash. The native build should honor -DskipTests - Key: HADOOP-8480 URL: https://issues.apache.org/jira/browse/HADOOP-8480 Project: Hadoop Common Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Trivial Attachments: HADOOP-8480.001.patch, HADOOP-8480.002.patch Currently, the native build does not honor -DskipTests. The native unit tests will be run even when you specify: {code} mvn compile -Pnative -DskipTests -X -e {code} This seems inconsistent; shouldn't we fix this to work like the other tests do?
[jira] [Commented] (HADOOP-8480) The native build should honor -DskipTests
[ https://issues.apache.org/jira/browse/HADOOP-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426830#comment-13426830 ] Alejandro Abdelnur commented on HADOOP-8480: +1, tested on Ubuntu with the default sh (dash) and it works as expected. Regarding Andy's comment on -DskipTests=foo bar, not sure that will ever happen, so I wouldn't worry about it. The native build should honor -DskipTests - Key: HADOOP-8480 URL: https://issues.apache.org/jira/browse/HADOOP-8480 Project: Hadoop Common Issue Type: Bug Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Trivial Attachments: HADOOP-8480.001.patch, HADOOP-8480.002.patch Currently, the native build does not honor -DskipTests. The native unit tests will be run even when you specify: {code} mvn compile -Pnative -DskipTests -X -e {code} This seems inconsistent; shouldn't we fix this to work like the other tests do?
[jira] [Commented] (HADOOP-8620) Add -Drequire.fuse and -Drequire.snappy
[ https://issues.apache.org/jira/browse/HADOOP-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426836#comment-13426836 ] Alejandro Abdelnur commented on HADOOP-8620: +1. Ran the build both with and without requiring snappy; works as advertised. Add -Drequire.fuse and -Drequire.snappy --- Key: HADOOP-8620 URL: https://issues.apache.org/jira/browse/HADOOP-8620 Project: Hadoop Common Issue Type: Improvement Components: build Affects Versions: 2.0.1-alpha Reporter: Colin Patrick McCabe Assignee: Colin Patrick McCabe Priority: Minor Attachments: HADOOP-8620.002.patch, HADOOP-8620.003.patch We have some optional build components which don't get built if they are not installed on the build machine. One of those components is fuse_dfs. Another is the snappy support in libhadoop.so. Unfortunately, since these components are silently ignored if they are not present, it's easy to unintentionally create an incomplete build. We should add two flags, -Drequire.fuse and -Drequire.snappy, that do exactly what the names suggest. This will make the build more repeatable. Those who want a complete build can specify these system properties to maven. If the build cannot be created as requested, it will be a hard error.
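The hard-error semantics being verified can be sketched in shell. This is illustrative only — the real change lives in the pom.xml and CMakeLists.txt files; `check_snappy` and its arguments are hypothetical stand-ins for the -Dsnappy.prefix and -Drequire.snappy properties.

```shell
#!/bin/sh
# Illustrative sketch of optional-vs-required dependency detection.
# check_snappy PREFIX REQUIRE -- hypothetical names, not Hadoop's actual logic.
check_snappy() {
  prefix="$1"; require="$2"
  if [ -e "$prefix/include/snappy-c.h" ]; then
    echo "building with snappy support"
  elif [ "$require" = "true" ]; then
    # With -Drequire.snappy, a missing library is a hard build error.
    echo "ERROR: snappy required but not found under $prefix" >&2
    return 1
  else
    # Without it, the component is silently skipped (pre-HADOOP-8620 behavior).
    echo "snappy not found, skipping"
  fi
}

check_snappy /nonexistent false   # prints "snappy not found, skipping"
```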
[jira] [Commented] (HADOOP-8608) Add Configuration API for parsing time durations
[ https://issues.apache.org/jira/browse/HADOOP-8608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426844#comment-13426844 ] Kannan Muthukkaruppan commented on HADOOP-8608: --- +1 Add Configuration API for parsing time durations Key: HADOOP-8608 URL: https://issues.apache.org/jira/browse/HADOOP-8608 Project: Hadoop Common Issue Type: Improvement Components: conf Affects Versions: 3.0.0 Reporter: Todd Lipcon Hadoop has a lot of configurations which specify durations or intervals of time. Unfortunately these different configurations have little consistency in units - e.g. some are in milliseconds, some in seconds, and some in minutes. This makes it difficult for users to configure, since they always have to refer back to the docs to remember the unit for each property. The proposed solution is to add an API like {{Configuration.getTimeDuration}} which allows the user to specify the unit with a suffix. For example, 10ms, 10s, 10m, 10h, or even 10d. For backwards compatibility, if the user does not specify a unit, the API can assume a default unit, and warn the user that they should specify an explicit unit instead.
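The suffix scheme proposed above can be sketched in shell (the real API would be a Java method on Configuration; `to_millis` is purely an illustrative stand-in):

```shell
#!/bin/sh
# Illustrative: convert "10ms" / "10s" / "10m" / "10h" / "10d" to milliseconds.
# Note the *ms pattern must come before *s, since case matches in order.
to_millis() {
  case "$1" in
    *ms) echo $(( ${1%ms} ));;
    *s)  echo $(( ${1%s} * 1000 ));;
    *m)  echo $(( ${1%m} * 60 * 1000 ));;
    *h)  echo $(( ${1%h} * 60 * 60 * 1000 ));;
    *d)  echo $(( ${1%d} * 24 * 60 * 60 * 1000 ));;
    *)   echo "$1";;   # bare number: falls through, i.e. the default unit
  esac
}

to_millis 10s   # -> 10000
to_millis 2m    # -> 120000
```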
[jira] [Assigned] (HADOOP-8638) TestUlimit fails locally (on some machines)
[ https://issues.apache.org/jira/browse/HADOOP-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla reassigned HADOOP-8638: Assignee: Karthik Kambatla TestUlimit fails locally (on some machines) --- Key: HADOOP-8638 URL: https://issues.apache.org/jira/browse/HADOOP-8638 Project: Hadoop Common Issue Type: Bug Affects Versions: 1.0.3 Environment: Linux 2.6.32-71.14.1.el6.x86_64 java version 1.6.0_26 Java(TM) SE Runtime Environment (build 1.6.0_26-b03) Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode) Reporter: Karthik Kambatla Assignee: Karthik Kambatla Attachments: test-ulimit-out *ant clean test -Dtestcase=TestUlimit -Dtest.output=yes* fails locally. Attaching the dump.
[jira] [Resolved] (HADOOP-8638) TestUlimit fails locally (on some machines)
[ https://issues.apache.org/jira/browse/HADOOP-8638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karthik Kambatla resolved HADOOP-8638. -- Resolution: Fixed MAPREDUCE-4036 addresses the same issue - the corresponding patch there seems to solve it on the local machine. We might have to re-open this should anyone experience the same in other (new) environments.
[jira] [Commented] (HADOOP-8480) The native build should honor -DskipTests
[ https://issues.apache.org/jira/browse/HADOOP-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426863#comment-13426863 ] Hadoop QA commented on HADOOP-8480: --- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12538812/HADOOP-8480.002.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 javadoc. The javadoc tool did not generate any warning messages. +1 eclipse:eclipse. The patch built with eclipse:eclipse. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1244//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1244//console This message is automatically generated.
[jira] [Commented] (HADOOP-8620) Add -Drequire.fuse and -Drequire.snappy
[ https://issues.apache.org/jira/browse/HADOOP-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426890#comment-13426890 ] Hadoop QA commented on HADOOP-8620: --- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12538811/HADOOP-8620.003.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 javadoc. The javadoc tool did not generate any warning messages. +1 eclipse:eclipse. The patch built with eclipse:eclipse. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs: org.apache.hadoop.ha.TestZKFailoverController org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl org.apache.hadoop.hdfs.server.datanode.TestBlockReport org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1243//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1243//console This message is automatically generated.
[jira] [Commented] (HADOOP-8583) Globbing is not correctly handled in a few cases on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426906#comment-13426906 ] Brandon Li commented on HADOOP-8583: Basically, batch script expansion caused the problem. Here is how the expansion happens on Windows: 1. cmd.exe passes /tmp/* to the %HADOOP_HOME%\bin\hadoop script 2. the hadoop script passes /tmp/* on to %HADOOP_HOME%\bin\hdfs.cmd 3. the expansion is done by the shell function make_command_arguments in hdfs.cmd. If it has a problem expanding the name, it simply drops it. 4. when FsShell gets the request, it gets either a list of expanded names or no name at all. FsShell doesn't know the expanded names came from outside HDFS. When a command arrives with no name, FsShell complains and prints the usage information.
{noformat}
:make_command_arguments
if "%2" == "" goto :eof
set _count=0
if defined service_entry (set _shift=2) else (set _shift=1)
if defined config_override (set /a _shift=!_shift! + 2)
for %%i in (%*) do (    <== expansion happens here!!!
  set /a _count=!_count!+1
  if !_count! GTR %_shift% (
    if not defined _hdfsarguments (
      set _hdfsarguments=%%i
    ) else (
      set _hdfsarguments=!_hdfsarguments! %%i
    )
  )
)
set hdfs-command-arguments=%_hdfsarguments%
goto :eof
{noformat}
Using single quotation marks around the pathname may result in the pathname being dropped by the above function, so that is not a good workaround. It looks like the above function needs to be fixed. Globbing is not correctly handled in a few cases on Windows --- Key: HADOOP-8583 URL: https://issues.apache.org/jira/browse/HADOOP-8583 Project: Hadoop Common Issue Type: Bug Environment: Windows Reporter: Ramya Sunil Glob handling fails in a few cases on a Windows environment.
For example:
{noformat}
c:\ hadoop dfs -ls /
Found 2 items
drwxrwxrwx - Administrator supergroup 0 2012-07-06 15:00 /tmp
drwxr-xr-x - Administrator supergroup 0 2012-07-06 18:52 /user

c:\ hadoop dfs -ls /tmpInvalid*
Found 2 items
drwxr-xr-x - Administrator supergroup 0 2012-07-10 18:50 /user/Administrator/sortInputDir
drwxr-xr-x - Administrator supergroup 0 2012-07-10 18:50 /user/Administrator/sortOutputDir

c:\ hadoop dfs -rmr /tmp/*
Usage: java FsShell [-rmr [-skipTrash] src ]
{noformat}
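For contrast with the cmd.exe behavior described in the comment above, a small sketch of how a Unix shell handles the same glob — expansion happens in the shell before hadoop ever runs, so FsShell receives either already-expanded names or (when quoted) the literal pattern:

```shell
#!/bin/sh
# Who expands the glob: the shell (unquoted) or the program (quoted)?
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"

set -- "$dir"/*           # unquoted: the shell expands the glob itself
echo "unquoted: $# args"  # the program would see two separate arguments

set -- "$dir/*"           # quoted: the program gets the literal pattern
echo "quoted: $# args"    # one argument; the program must glob it itself

rm -rf "$dir"
```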
[jira] [Updated] (HADOOP-8620) Add -Drequire.fuse and -Drequire.snappy
[ https://issues.apache.org/jira/browse/HADOOP-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HADOOP-8620: Resolution: Fixed Fix Version/s: 2.2.0-alpha Target Version/s: (was: 2.0.1-alpha) Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Test failures are unrelated. I've committed this and merged to branch-2. Thanks Colin.
[jira] [Updated] (HADOOP-8480) The native build should honor -DskipTests
[ https://issues.apache.org/jira/browse/HADOOP-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eli Collins updated HADOOP-8480: Resolution: Fixed Fix Version/s: 2.2.0-alpha Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Test failure is unrelated. I've committed this and merged to branch-2.
[jira] [Commented] (HADOOP-8480) The native build should honor -DskipTests
[ https://issues.apache.org/jira/browse/HADOOP-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426924#comment-13426924 ] Hudson commented on HADOOP-8480: Integrated in Hadoop-Hdfs-trunk-Commit #2613 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2613/]) HADOOP-8480. The native build should honor -DskipTests. Contributed by Colin Patrick McCabe (Revision 1368257) Result = SUCCESS eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1368257 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
[jira] [Commented] (HADOOP-8620) Add -Drequire.fuse and -Drequire.snappy
[ https://issues.apache.org/jira/browse/HADOOP-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426925#comment-13426925 ] Hudson commented on HADOOP-8620: Integrated in Hadoop-Hdfs-trunk-Commit #2613 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2613/]) HADOOP-8620. Add -Drequire.fuse and -Drequire.snappy. Contributed by Colin Patrick McCabe (Revision 1368251) Result = SUCCESS eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1368251 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/CMakeLists.txt
[jira] [Commented] (HADOOP-8480) The native build should honor -DskipTests
[ https://issues.apache.org/jira/browse/HADOOP-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426934#comment-13426934 ] Hudson commented on HADOOP-8480: Integrated in Hadoop-Common-trunk-Commit #2548 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2548/]) HADOOP-8480. The native build should honor -DskipTests. Contributed by Colin Patrick McCabe (Revision 1368257) Result = SUCCESS eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1368257 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
[jira] [Commented] (HADOOP-8620) Add -Drequire.fuse and -Drequire.snappy
[ https://issues.apache.org/jira/browse/HADOOP-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426935#comment-13426935 ] Hudson commented on HADOOP-8620: Integrated in Hadoop-Common-trunk-Commit #2548 (See [https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2548/]) HADOOP-8620. Add -Drequire.fuse and -Drequire.snappy. Contributed by Colin Patrick McCabe (Revision 1368251) Result = SUCCESS eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1368251 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/CMakeLists.txt
[jira] [Commented] (HADOOP-8148) Zero-copy ByteBuffer-based compressor / decompressor API
[ https://issues.apache.org/jira/browse/HADOOP-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426942#comment-13426942 ] Tim Broberg commented on HADOOP-8148: - Regrettably, Exar closed the San Diego office before I could complete this task. My replacement will continue my efforts, but will likely have his hands full coming up to speed. I have implemented the patch for our own codec, but we will not be providing a Snappy version any time soon. Apologies for leaving the work half done, - Tim. Zero-copy ByteBuffer-based compressor / decompressor API Key: HADOOP-8148 URL: https://issues.apache.org/jira/browse/HADOOP-8148 Project: Hadoop Common Issue Type: New Feature Components: io, performance Reporter: Tim Broberg Assignee: Tim Broberg Attachments: hadoop-8148.patch, hadoop8148.patch, zerocopyifc.tgz Per Todd Lipcon's comment in HDFS-2834, whenever a native decompression codec is being used, ... we generally have the following copies: 1) Socket -> DirectByteBuffer (in SocketChannel implementation) 2) DirectByteBuffer -> byte[] (in SocketInputStream) 3) byte[] -> native buffer (set up for decompression) 4*) decompression to a different native buffer (not really a copy - decompression necessarily rewrites) 5) native buffer -> byte[] With the proposed improvement we can hopefully eliminate #2 and #3 for all applications, and #2, #3, and #5 for libhdfs. The interfaces in the attached patch attempt to address: A - Compression and decompression based on ByteBuffers (HDFS-2834) B - Zero-copy compression and decompression (HDFS-3051) C - Providing the caller a way to know the max space required to hold the compressed output.
[jira] [Commented] (HADOOP-8620) Add -Drequire.fuse and -Drequire.snappy
[ https://issues.apache.org/jira/browse/HADOOP-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426960#comment-13426960 ] Hudson commented on HADOOP-8620: Integrated in Hadoop-Mapreduce-trunk-Commit #2566 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2566/]) HADOOP-8620. Add -Drequire.fuse and -Drequire.snappy. Contributed by Colin Patrick McCabe (Revision 1368251) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1368251 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/CMakeLists.txt
[jira] [Commented] (HADOOP-8480) The native build should honor -DskipTests
[ https://issues.apache.org/jira/browse/HADOOP-8480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426959#comment-13426959 ] Hudson commented on HADOOP-8480: Integrated in Hadoop-Mapreduce-trunk-Commit #2566 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2566/]) HADOOP-8480. The native build should honor -DskipTests. Contributed by Colin Patrick McCabe (Revision 1368257) Result = FAILURE eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1368257 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml