Build failed in Jenkins: Hadoop-Common-0.23-Build #413
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/413/

[...truncated 11014 lines...]
[INFO]
[INFO] --- maven-clover2-plugin:3.0.5:clover (clover) @ hadoop-auth ---
[INFO] Using /default-clover-report descriptor.
[INFO] Using Clover report descriptor: /tmp/mvn224415068296247167resource
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: /home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Clover is enabled with initstring 'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/hadoop-coverage.db'
[WARNING] Clover historical directory [https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/history] does not exist, skipping Clover historical report generation ([https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover])
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: /home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Loading coverage database from: 'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/hadoop-coverage.db'
[INFO] Writing HTML report to 'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover'
[INFO] Done. Processed 4 packages in 1056ms (264ms per package).
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: /home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Clover is enabled with initstring 'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/hadoop-coverage.db'
[WARNING] Clover historical directory [https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/history] does not exist, skipping Clover historical report generation ([https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/clover.xml])
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: /home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Loading coverage database from: 'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/hadoop-coverage.db'
[INFO] Writing report to 'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth/target/clover/clover.xml'
[INFO]
[INFO]
[INFO] Building Apache Hadoop Auth Examples 0.23.5-SNAPSHOT
[INFO]
[INFO]
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-auth-examples ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO]
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ hadoop-auth-examples ---
[INFO] Wrote classpath file 'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth-examples/target/classes/mrapp-generated-classpath'.
[INFO]
[INFO] --- maven-clover2-plugin:3.0.5:setup (setup) @ hadoop-auth-examples ---
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: /home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Creating new database at 'https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth-examples/target/clover/hadoop-coverage.db'.
[INFO] Processing files at 1.6 source level.
[INFO] Clover all over. Instrumented 3 files (1 package).
[INFO] Elapsed time = 0.016 secs. (187.5 files/sec, 17,687.5 srclines/sec)
[INFO] No Clover instrumentation done on source files in: [https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth-examples/src/test/java] as no matching sources files found
[INFO]
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ hadoop-auth-examples ---
[INFO] Using default encoding to copy filtered resources.
[INFO]
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ hadoop-auth-examples ---
[INFO] Compiling 3 source files to https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-auth-examples/target/classes
[jira] [Resolved] (HADOOP-8885) Need to add fs shim to use QFS
[ https://issues.apache.org/jira/browse/HADOOP-8885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

thilee resolved HADOOP-8885.
----------------------------
    Resolution: Fixed

  Release Note: Made the Hadoop QFS plugin part of the Quantcast File System open-source project. Users can download the Hadoop QFS jar plus libraries and use them dynamically with Hadoop. Based on suggestions on the hadoop-common mailing list, we decided to take the QFS implementation out of Hadoop Common/core. The QFS Hadoop plugin is now maintained at http://quantcast.github.com/qfs/ and instructions for obtaining and using it are at https://github.com/quantcast/qfs/wiki/Migration-Guide

> Need to add fs shim to use QFS
> ------------------------------
>
>                 Key: HADOOP-8885
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8885
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs
>            Reporter: thilee
>            Assignee: thilee
>             Fix For: 3.0.0, 2.0.1-alpha, 2.0.0-alpha, 0.23.3, 1.0.3, 1.0.2
>         Attachments: HADOOP-8885-branch-1.patch, HADOOP-8885-trunk.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Quantcast has released QFS 1.0 (http://quantcast.github.com/qfs), a C++ distributed filesystem based on the Kosmos File System (KFS). QFS comes with various feature, performance, and stability improvements over KFS. A Hadoop 'fs' shim needs to be added to support QFS through 'qfs://' URIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
Re: Need to add fs shim to use QFS
On 26 October 2012 01:24, Thilee Subramaniam <thi...@quantcast.com> wrote:

> We have made the changes recommended here, and made available a 'Hadoop
> QFS jar' with QFS. This plugin and the QFS libraries will be maintained
> and released by the QFS open-source project. Please see the download and
> usage instructions at https://github.com/quantcast/qfs/wiki/Migration-Guide
>
> The QFS tarball contains a hadoop-qfs jar each for Hadoop 0.23.4, 1.0.2,
> 1.0.4, 1.1.0, and 2.0.2-alpha. Since the interfaces seem similar, I am
> not sure if this is overkill: one each for trunk and branch-1 may
> suffice. Could you comment on this?

Java is a lot more forgiving than C/C++; one built against 1.0.4 should suffice for all. If you are being overcautious, branch-1 and trunk should be enough.

> Also, is there documentation on the Apache Hadoop website that describes
> available alternatives to HDFS (or how to add an alternative file system
> to Hadoop)? Please let us know.

If there isn't something on wiki.apache.org/hadoop there should be: create a login there, then email back your username and you can have the editor rights to put something up. I'd suggest a page on Alternate Filesystems.

-steve
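For readers following the Migration Guide linked above, wiring an alternative filesystem into Hadoop comes down to putting the plugin jar (and any native libraries) on the classpath and mapping the URI scheme to a FileSystem implementation class in core-site.xml. The property and class names below are illustrative assumptions inferred from the qfs:// scheme discussed in this thread, not values stated in this message; the Migration Guide has the authoritative ones.

```xml
<!-- core-site.xml sketch: map the qfs:// scheme to its FileSystem shim.
     Property and class names here are assumptions for illustration. -->
<configuration>
  <property>
    <name>fs.qfs.impl</name>
    <value>com.quantcast.qfs.hadoop.QuantcastFileSystem</value>
  </property>
  <!-- hypothetical metaserver address settings -->
  <property>
    <name>fs.qfs.metaServerHost</name>
    <value>qfs-meta.example.com</value>
  </property>
  <property>
    <name>fs.qfs.metaServerPort</name>
    <value>20000</value>
  </property>
</configuration>
```

With a mapping like this in place, paths such as qfs://qfs-meta.example.com:20000/user/foo resolve through the plugin, and ordinary commands like `hadoop fs -ls` work against them unmodified.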
[jira] [Created] (HADOOP-8986) Server$Call object is never released after it is sent
Robert Joseph Evans created HADOOP-8986:
-------------------------------------------

             Summary: Server$Call object is never released after it is sent
                 Key: HADOOP-8986
                 URL: https://issues.apache.org/jira/browse/HADOOP-8986
             Project: Hadoop Common
          Issue Type: Bug
          Components: ipc
    Affects Versions: 0.23.4, 2.0.2-alpha
            Reporter: Robert Joseph Evans
            Assignee: Robert Joseph Evans
            Priority: Critical

When an IPC response cannot be returned without blocking, the Server$Call object is attached to the SelectionKey of the write selector. However, the call object is never removed from the SelectionKey, so a long-lived connection that only rarely has large responses ends up retaining a lot of data.
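The leak described above is the standard java.nio attachment pattern: whatever is passed to SelectionKey.attach() stays strongly referenced by the key until it is overwritten. A minimal sketch (plain NIO against a Pipe, not the actual Hadoop ipc.Server code) showing that attaching null is what releases the buffered response for garbage collection:

```java
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class Main {
    public static void main(String[] args) throws Exception {
        Selector writeSelector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.sink().configureBlocking(false);

        // Register interest in OP_WRITE and attach the pending response,
        // as ipc.Server does with its Call object when a write would block.
        SelectionKey key = pipe.sink().register(writeSelector, SelectionKey.OP_WRITE);
        byte[] pendingResponse = new byte[64 * 1024]; // stand-in for Server$Call
        key.attach(pendingResponse);

        // ... later, once the response has been fully written ...
        // Without this line the key keeps the response strongly reachable
        // for the whole life of the connection -- the leak in HADOOP-8986:
        key.attach(null);

        if (key.attachment() != null) {
            throw new AssertionError("attachment should have been released");
        }
        System.out.println("attachment released");

        pipe.sink().close();
        pipe.source().close();
        writeSelector.close();
    }
}
```

The fix direction the report implies is exactly the last attach(null): clear the attachment as soon as the queued response has been flushed, rather than leaving it on the key.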
[jira] [Created] (HADOOP-8987) TestLocalMRNotification.testMR failed in Hudson
Robert Parker created HADOOP-8987:
-------------------------------------

             Summary: TestLocalMRNotification.testMR failed in Hudson
                 Key: HADOOP-8987
                 URL: https://issues.apache.org/jira/browse/HADOOP-8987
             Project: Hadoop Common
          Issue Type: Bug
          Components: test
            Reporter: Robert Parker

TestLocalMRNotification.testMR failed in Hudson, from [build #3911|http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3911/testReport/] to the latest, [build #3917|http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3917/testReport/].
[jira] [Reopened] (HADOOP-8567) Port conf servlet to dump running configuration to branch 1.x
[ https://issues.apache.org/jira/browse/HADOOP-8567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas reopened HADOOP-8567:
-------------------------------------

Reopening to commit the issue.

> Port conf servlet to dump running configuration to branch 1.x
> -------------------------------------------------------------
>
>                 Key: HADOOP-8567
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8567
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: conf
>    Affects Versions: 1.0.0
>            Reporter: Junping Du
>            Assignee: Jing Zhao
>             Fix For: 1.2.0
>         Attachments: Hadoop.8567.branch-1.001.patch, Hadoop.8567.branch-1.002.patch, Hadoop.8567.branch-1.003.patch, Hadoop.8567.branch-1.004.patch
>
> HADOOP-6408 provides a conf servlet that can dump the running configuration, which greatly helps admins troubleshoot configuration issues. However, that patch works only on branches after 0.21 and should be backported to branch 1.x.
[jira] [Resolved] (HADOOP-8567) Port conf servlet to dump running configuration to branch 1.x
[ https://issues.apache.org/jira/browse/HADOOP-8567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas resolved HADOOP-8567.
-------------------------------------
       Resolution: Fixed
    Fix Version/s: 1.2.0

Committed the patch to branch-1. Thank you Jing.

> Port conf servlet to dump running configuration to branch 1.x
> -------------------------------------------------------------
>
>                 Key: HADOOP-8567
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8567
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: conf
>    Affects Versions: 1.0.0
>            Reporter: Junping Du
>            Assignee: Jing Zhao
>             Fix For: 1.2.0
>         Attachments: Hadoop.8567.branch-1.001.patch, Hadoop.8567.branch-1.002.patch, Hadoop.8567.branch-1.003.patch, Hadoop.8567.branch-1.004.patch
>
> HADOOP-6408 provides a conf servlet that can dump the running configuration, which greatly helps admins troubleshoot configuration issues. However, that patch works only on branches after 0.21 and should be backported to branch 1.x.
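The conf servlet being backported here exposes a daemon's live, merged configuration over its HTTP port, which is what makes it useful for troubleshooting: you see the values the process is actually running with, not what is on disk. A usage sketch (host and port are placeholders for your own cluster; the JSON format flag follows the trunk servlet and is an assumption for branch-1):

```shell
# Dump the running configuration of a NameNode
# (50070 is the usual branch-1 NameNode HTTP port).
curl http://namenode.example.com:50070/conf

# The trunk servlet can also emit JSON; availability on branch-1 is assumed here:
curl "http://namenode.example.com:50070/conf?format=json"
```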
[jira] [Created] (HADOOP-8988) Backport HADOOP-8343 to branch-1
Jing Zhao created HADOOP-8988:
---------------------------------

             Summary: Backport HADOOP-8343 to branch-1
                 Key: HADOOP-8988
                 URL: https://issues.apache.org/jira/browse/HADOOP-8988
             Project: Hadoop Common
          Issue Type: New Feature
    Affects Versions: 1.0.0
            Reporter: Jing Zhao
            Assignee: Jing Zhao
         Attachments: Hadoop.8343.backport.001.patch

Backport HADOOP-8343 to branch-1 so as to specifically control the authorization requirements for accessing /jmx, /metrics, and /conf in branch-1.
[jira] [Resolved] (HADOOP-8968) Add a flag to completely disable the worker version check
[ https://issues.apache.org/jira/browse/HADOOP-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins resolved HADOOP-8968.
---------------------------------
      Resolution: Fixed
    Hadoop Flags: Reviewed

I've committed this. Thanks Tucu.

> Add a flag to completely disable the worker version check
> ---------------------------------------------------------
>
>                 Key: HADOOP-8968
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8968
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 1.1.0
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>             Fix For: 1.2.0
>         Attachments: HADOOP-8968.patch, HADOOP-8968.patch, HADOOP-8968.patch, HADOOP-8968.patch, HADOOP-8968.patch
>
> The current logic in the TaskTracker and the DataNode for relaxing the version check against the JobTracker and NameNode works only if the Hadoop versions are exactly the same. We should add a switch to disable version checking completely, to enable rolling upgrades between compatible versions (typically patch versions).