Build failed in Jenkins: Hadoop-Common-0.23-Build #241
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/241/changes

Changes:

[bobby] svn merge -c 1333144 FIXES: MAPREDUCE-4210. Expose listener address for WebApp (Daryn Sharp via bobby)

--
[...truncated 12306 lines...]
 [javadoc] Loading source files for package org.apache.hadoop.fs.local...
 [javadoc] Loading source files for package org.apache.hadoop.fs.permission...
 [javadoc] Loading source files for package org.apache.hadoop.fs.s3...
 [javadoc] Loading source files for package org.apache.hadoop.fs.s3native...
 [javadoc] Loading source files for package org.apache.hadoop.fs.shell...
 [javadoc] Loading source files for package org.apache.hadoop.fs.viewfs...
 [javadoc] Loading source files for package org.apache.hadoop.http...
 [javadoc] Loading source files for package org.apache.hadoop.http.lib...
 [javadoc] Loading source files for package org.apache.hadoop.io...
 [javadoc] Loading source files for package org.apache.hadoop.io.compress...
 [javadoc] Loading source files for package org.apache.hadoop.io.compress.bzip2...
 [javadoc] Loading source files for package org.apache.hadoop.io.compress.lz4...
 [javadoc] Loading source files for package org.apache.hadoop.io.compress.snappy...
 [javadoc] Loading source files for package org.apache.hadoop.io.compress.zlib...
 [javadoc] Loading source files for package org.apache.hadoop.io.file.tfile...
 [javadoc] Loading source files for package org.apache.hadoop.io.nativeio...
 [javadoc] Loading source files for package org.apache.hadoop.io.retry...
 [javadoc] Loading source files for package org.apache.hadoop.io.serializer...
 [javadoc] Loading source files for package org.apache.hadoop.io.serializer.avro...
 [javadoc] Loading source files for package org.apache.hadoop.ipc...
 [javadoc] Loading source files for package org.apache.hadoop.ipc.metrics...
 [javadoc] Loading source files for package org.apache.hadoop.jmx...
 [javadoc] Loading source files for package org.apache.hadoop.log...
 [javadoc] Loading source files for package org.apache.hadoop.log.metrics...
 [javadoc] Loading source files for package org.apache.hadoop.metrics...
 [javadoc] Loading source files for package org.apache.hadoop.metrics.file...
 [javadoc] Loading source files for package org.apache.hadoop.metrics.ganglia...
 [javadoc] Loading source files for package org.apache.hadoop.metrics.jvm...
 [javadoc] Loading source files for package org.apache.hadoop.metrics.spi...
 [javadoc] Loading source files for package org.apache.hadoop.metrics.util...
 [javadoc] Loading source files for package org.apache.hadoop.metrics2...
 [javadoc] Loading source files for package org.apache.hadoop.metrics2.annotation...
 [javadoc] Loading source files for package org.apache.hadoop.metrics2.filter...
 [javadoc] Loading source files for package org.apache.hadoop.metrics2.impl...
 [javadoc] Loading source files for package org.apache.hadoop.metrics2.lib...
 [javadoc] Loading source files for package org.apache.hadoop.metrics2.sink...
 [javadoc] Loading source files for package org.apache.hadoop.metrics2.sink.ganglia...
 [javadoc] Loading source files for package org.apache.hadoop.metrics2.source...
 [javadoc] Loading source files for package org.apache.hadoop.metrics2.util...
 [javadoc] Loading source files for package org.apache.hadoop.net...
 [javadoc] Loading source files for package org.apache.hadoop.record...
 [javadoc] Loading source files for package org.apache.hadoop.record.compiler...
 [javadoc] Loading source files for package org.apache.hadoop.record.compiler.ant...
 [javadoc] Loading source files for package org.apache.hadoop.record.compiler.generated...
 [javadoc] Loading source files for package org.apache.hadoop.record.meta...
 [javadoc] Loading source files for package org.apache.hadoop.security...
 [javadoc] Loading source files for package org.apache.hadoop.security.authorize...
 [javadoc] Loading source files for package org.apache.hadoop.security.token...
 [javadoc] Loading source files for package org.apache.hadoop.security.token.delegation...
 [javadoc] Loading source files for package org.apache.hadoop.tools...
 [javadoc] Loading source files for package org.apache.hadoop.util...
 [javadoc] Loading source files for package org.apache.hadoop.util.bloom...
 [javadoc] Loading source files for package org.apache.hadoop.util.hash...
 [javadoc] 2 errors
 [xslt] Processing https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-common/target/findbugsXml.xml to https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-common/target/site/findbugs.html
 [xslt] Loading stylesheet /home/jenkins/tools/findbugs/latest/src/xsl/default.xsl
[INFO] Executed tasks
[INFO]
[INFO] --- maven-antrun-plugin:1.6:run (pre-dist) @ hadoop-common ---
[INFO] Executing tasks
main:
[INFO] Executed tasks
[INFO]
[INFO]
Build failed in Jenkins: Hadoop-Common-trunk #395
See https://builds.apache.org/job/Hadoop-Common-trunk/395/changes

Changes:

[eli] HADOOP-8347. Hadoop Common logs misspell 'successful'. Contributed by Philip Zeyliger
[atm] HDFS-3351. NameNode#initializeGenericKeys should always set fs.defaultFS regardless of whether HA or Federation is enabled. Contributed by Aaron T. Myers.
[todd] HDFS-3330. If GetImageServlet throws an Error or RTE, response should not have HTTP OK status. Contributed by Todd Lipcon.
[tucu] MAPREDUCE-4219. make default container-executor.conf.dir be a path relative to the container-executor binary. (rvs via tucu)
[tucu] HDFS-3336. hdfs launcher script will be better off not special casing namenode command with regards to hadoop.security.logger (rvs via tucu)
[tucu] HADOOP-8214. make hadoop script recognize a full set of deprecated commands (rvs via tucu)
[tucu] HADOOP-8342. HDFS command fails with exception following merge of HADOOP-8325 (tucu)
[bobby] MAPREDUCE-4210. Expose listener address for WebApp (Daryn Sharp via bobby)
[atm] HADOOP-8185. Update namenode -format documentation and add -nonInteractive and -force. Contributed by Arpit Gupta.
[bobby] MAPREDUCE-3173. MRV2 UI doesn't work properly without internet (Devaraj K via bobby)

--
[...truncated 117635 lines...]
[DEBUG] (f) reactorProjects = [MavenProject: org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT @ https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-annotations/pom.xml, MavenProject: org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT @ https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-auth/pom.xml, MavenProject: org.apache.hadoop:hadoop-auth-examples:3.0.0-SNAPSHOT @ https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-auth-examples/pom.xml, MavenProject: org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT @ https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/hadoop-common/pom.xml, MavenProject: org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml]
[DEBUG] (f) useDefaultExcludes = true
[DEBUG] (f) useDefaultManifestFile = false
[DEBUG] -- end configuration --
[INFO]
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ hadoop-common-project ---
[DEBUG] Configuring mojo org.apache.maven.plugins:maven-enforcer-plugin:1.0:enforce from plugin realm ClassRealm[plugin>org.apache.maven.plugins:maven-enforcer-plugin:1.0, parent: sun.misc.Launcher$AppClassLoader@126b249]
[DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-enforcer-plugin:1.0:enforce' with basic configurator --
[DEBUG] (s) fail = true
[DEBUG] (s) failFast = false
[DEBUG] (f) ignoreCache = false
[DEBUG] (s) project = MavenProject: org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml
[DEBUG] (s) version = [3.0.2,)
[DEBUG] (s) version = 1.6
[DEBUG] (s) rules = [org.apache.maven.plugins.enforcer.RequireMavenVersion@b03b35, org.apache.maven.plugins.enforcer.RequireJavaVersion@1ff39db]
[DEBUG] (s) session = org.apache.maven.execution.MavenSession@1c9ce70
[DEBUG] (s) skip = false
[DEBUG] -- end configuration --
[DEBUG] Executing rule: org.apache.maven.plugins.enforcer.RequireMavenVersion
[DEBUG] Rule org.apache.maven.plugins.enforcer.RequireMavenVersion is cacheable.
[DEBUG] Key org.apache.maven.plugins.enforcer.RequireMavenVersion -937312197 was found in the cache
[DEBUG] The cached results are still valid. Skipping the rule: org.apache.maven.plugins.enforcer.RequireMavenVersion
[DEBUG] Executing rule: org.apache.maven.plugins.enforcer.RequireJavaVersion
[DEBUG] Rule org.apache.maven.plugins.enforcer.RequireJavaVersion is cacheable.
[DEBUG] Key org.apache.maven.plugins.enforcer.RequireJavaVersion 48569 was found in the cache
[DEBUG] The cached results are still valid. Skipping the rule: org.apache.maven.plugins.enforcer.RequireJavaVersion
[INFO]
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ hadoop-common-project ---
[DEBUG] Configuring mojo org.apache.maven.plugins:maven-site-plugin:3.0:attach-descriptor from plugin realm ClassRealm[plugin>org.apache.maven.plugins:maven-site-plugin:3.0, parent: sun.misc.Launcher$AppClassLoader@126b249]
[DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-site-plugin:3.0:attach-descriptor' with basic configurator --
[DEBUG] (f) basedir = https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project
[DEBUG] (f) inputEncoding = UTF-8
[DEBUG] (f) localRepository = id: local
      url: file:///home/jenkins/.m2/repository/
   layout: none
[DEBUG] (f) outputEncoding = UTF-8
[DEBUG] (f) pomPackagingOnly = true
[DEBUG] (f) project = MavenProject:
[jira] [Created] (HADOOP-8351) Add exclude/include file , need restart NN or RM.
xieguiming created HADOOP-8351:
-----------------------------------

Summary: Add exclude/include file , need restart NN or RM.
Key: HADOOP-8351
URL: https://issues.apache.org/jira/browse/HADOOP-8351
Project: Hadoop Common
Issue Type: Bug
Components: util
Environment: suse
Reporter: xieguiming

The default value of yarn.resourcemanager.nodes.include-path is empty, so if we need to add an include file we must restart the RM. I suggest that after adding an include or exclude file there should be no need to restart the RM; only the refresh command should need to be executed. The same goes for the NN.

Modify the HostsFileReader class from:

public HostsFileReader(String inFile, String exFile)

to:

public HostsFileReader(Configuration conf, String NODES_INCLUDE_FILE_PATH, String DEFAULT_NODES_INCLUDE_FILE_PATH, String NODES_EXCLUDE_FILE_PATH, String DEFAULT_NODES_EXCLUDE_FILE_PATH)

so that we can read the config file dynamically, with no need to restart the NN/RM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
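A minimal sketch of the direction being proposed, with hypothetical names (the real HostsFileReader lives in org.apache.hadoop.util, and the exact constructor would be decided on the JIRA): rather than fixing the include/exclude lists at construction time, the reader keeps the configured paths and re-reads the files on refresh(), so a refresh command can pick up changes without restarting the daemon.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

// Hypothetical refreshable hosts reader: the include/exclude file paths are
// retained and re-read on every refresh(), so a "refreshNodes"-style command
// can pick up edits without an NN/RM restart.
public class RefreshableHostsReader {
    private final String includePath;
    private final String excludePath;
    private Set<String> includes = new HashSet<>();
    private Set<String> excludes = new HashSet<>();

    public RefreshableHostsReader(String includePath, String excludePath) throws IOException {
        this.includePath = includePath;
        this.excludePath = excludePath;
        refresh();
    }

    // Re-read both files: one host name per line, blank lines ignored.
    public synchronized void refresh() throws IOException {
        includes = readHostsFile(includePath);
        excludes = readHostsFile(excludePath);
    }

    private static Set<String> readHostsFile(String path) throws IOException {
        Set<String> hosts = new HashSet<>();
        if (path == null || path.isEmpty()) {
            return hosts;  // empty path means "no list configured"
        }
        Path p = Paths.get(path);
        if (!Files.exists(p)) {
            return hosts;
        }
        for (String line : Files.readAllLines(p)) {
            String host = line.trim();
            if (!host.isEmpty()) {
                hosts.add(host);
            }
        }
        return hosts;
    }

    public synchronized boolean isAllowed(String host) {
        // Exclusion wins; an empty include list admits every host.
        if (excludes.contains(host)) {
            return false;
        }
        return includes.isEmpty() || includes.contains(host);
    }
}
```

In the actual patch the paths would come from a Configuration object (as the proposed constructor signature suggests) rather than plain strings; this sketch only illustrates the re-read-on-refresh behavior.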
[jira] [Reopened] (HADOOP-8351) Add exclude/include file , need restart NN or RM.
[ https://issues.apache.org/jira/browse/HADOOP-8351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Joseph Evans reopened HADOOP-8351:
-----------------------------------------

Summary: Add exclude/include file , need restart NN or RM.
Key: HADOOP-8351
URL: https://issues.apache.org/jira/browse/HADOOP-8351
Project: Hadoop Common
Issue Type: Bug
Components: util
Environment: suse
Reporter: xieguiming

The default value of yarn.resourcemanager.nodes.include-path is empty, so if we need to add an include file we must restart the RM. I suggest that after adding an include or exclude file there should be no need to restart the RM; only the refresh command should need to be executed. The same goes for the NN.

Modify the HostsFileReader class from:

public HostsFileReader(String inFile, String exFile)

to:

public HostsFileReader(Configuration conf, String NODES_INCLUDE_FILE_PATH, String DEFAULT_NODES_INCLUDE_FILE_PATH, String NODES_EXCLUDE_FILE_PATH, String DEFAULT_NODES_EXCLUDE_FILE_PATH)

so that we can read the config file dynamically, with no need to restart the NN/RM.
[jira] [Created] (HADOOP-8352) We should always generate a new configure script for the c++ code
Owen O'Malley created HADOOP-8352:
-------------------------------------

Summary: We should always generate a new configure script for the c++ code
Key: HADOOP-8352
URL: https://issues.apache.org/jira/browse/HADOOP-8352
Project: Hadoop Common
Issue Type: Improvement
Reporter: Owen O'Malley

If you are compiling c++, you should always generate a configure script.
[jira] [Created] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
Roman Shaposhnik created HADOOP-8353:
----------------------------------------

Summary: hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
Key: HADOOP-8353
URL: https://issues.apache.org/jira/browse/HADOOP-8353
Project: Hadoop Common
Issue Type: Improvement
Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
Fix For: 2.0.0

The way the stop action is implemented is a simple SIGTERM sent to the JVM. There is a time delay between when the action is called and when the process actually exits. This can be misleading to the callers of the *-daemon.sh scripts, since they expect the stop action to return once the process has actually stopped.

I suggest we augment the stop action with a time-delayed check of the process status and a SIGKILL once the delay has expired. I understand that sending SIGKILL is a measure of last resort and is generally frowned upon among init.d script writers, but the excuse we have for Hadoop is that it is engineered to be a fault-tolerant system, and thus there is no danger of putting the system into an inconsistent state with a violent SIGKILL. Of course, the time delay will be long enough to make a SIGKILL event a rare condition. Finally, there is always the option of an exponential back-off type of solution if we decide the SIGKILL timeout is too short.
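The actual change would live in the *-daemon.sh shell scripts; as an illustration of the pattern being proposed (SIGTERM, bounded wait, SIGKILL backstop), here is a sketch using the standard Java Process API, which exposes the same two signal strengths via destroy() and destroyForcibly():

```java
import java.util.concurrent.TimeUnit;

// Sketch of the proposed stop semantics: send the polite signal first,
// wait a bounded grace period for the process to exit, and only then
// fall back to the forcible kill.
public class GracefulStop {
    /**
     * Stops a process: SIGTERM first, SIGKILL after graceMillis.
     * Returns true if the process exited within the grace period.
     */
    public static boolean stop(Process p, long graceMillis) throws InterruptedException {
        p.destroy();  // SIGTERM on POSIX platforms
        if (p.waitFor(graceMillis, TimeUnit.MILLISECONDS)) {
            return true;  // exited gracefully; SIGKILL never needed
        }
        p.destroyForcibly();  // SIGKILL: the measure of last resort
        p.waitFor();
        return false;
    }
}
```

A shell equivalent would loop on `kill -0 $pid` after `kill $pid`, sleeping until the timeout expires before issuing `kill -9 $pid`; the grace period plays the same role as graceMillis here.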
[jira] [Created] (HADOOP-8354) test-patch findbugs may fail if a dependent module is changed
Tom White created HADOOP-8354:
---------------------------------

Summary: test-patch findbugs may fail if a dependent module is changed
Key: HADOOP-8354
URL: https://issues.apache.org/jira/browse/HADOOP-8354
Project: Hadoop Common
Issue Type: Bug
Components: build
Reporter: Tom White

This can happen when code in a dependent module is changed, but the change isn't picked up. E.g. https://issues.apache.org/jira/browse/MAPREDUCE-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266867#comment-13266867

We can fix this by running 'mvn install -DskipTests -Dmaven.javadoc.skip=true' first.
[jira] [Resolved] (HADOOP-8322) Log entry for successful auth has misspelling
[ https://issues.apache.org/jira/browse/HADOOP-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jakob Homan resolved HADOOP-8322.
---------------------------------
Resolution: Duplicate

Log entry for successful auth has misspelling
---------------------------------------------

Key: HADOOP-8322
URL: https://issues.apache.org/jira/browse/HADOOP-8322
Project: Hadoop Common
Issue Type: Improvement
Reporter: Jakob Homan
Assignee: Boris Shkolnik
Priority: Minor
Labels: newbie

Server.java:
{code}
private static final String AUTH_SUCCESSFULL_FOR = "Auth successfull for ";
{code}
Question about Jenkins build failures building hadoop-project
Hello,

I have a question about the build failure messages from Jenkins the past few days. I have a patch that was approved to be merged for branch-2:

[mattf] HDFS-3265. PowerPc Build error. Contributed by Kumar Ravi.

Although I was successful in building hadoop-common and hadoop-hdfs with the above patch on an x86_64 platform running Sun JDK 1.6, there was the following failure message that I saw on this mailing list:

+ /home/jenkins/tools/maven/latest/bin/mvn test -Pclover -DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Build step 'Execute shell' marked build as failure

I have been unable to recreate this problem. I also notice there has been at least 1 such build failure every day with exactly the same result. I would appreciate any help in trying to understand this message so that I can get my patch merged into branch-2.

Regards,
Kumar

Kumar Ravi
IBM Linux Technology Center
IBM Master Inventor
11501 Burnet Road, Austin, TX 78758
Tel.: (512)286-8179
[jira] [Resolved] (HADOOP-8352) We should always generate a new configure script for the c++ code
[ https://issues.apache.org/jira/browse/HADOOP-8352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Owen O'Malley resolved HADOOP-8352.
-----------------------------------
Resolution: Fixed
Fix Version/s: 1.1.0
               1.0.3
Assignee: Owen O'Malley

We should always generate a new configure script for the c++ code
-----------------------------------------------------------------

Key: HADOOP-8352
URL: https://issues.apache.org/jira/browse/HADOOP-8352
Project: Hadoop Common
Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Fix For: 1.0.3, 1.1.0
Attachments: gen-c++.lst, git-ignore.patch, hadoop-8352.patch

If you are compiling c++, you should always generate a configure script.
[jira] [Created] (HADOOP-8355) SPNEGO filter throws/logs exception when authentication fails
Alejandro Abdelnur created HADOOP-8355:
------------------------------------------

Summary: SPNEGO filter throws/logs exception when authentication fails
Key: HADOOP-8355
URL: https://issues.apache.org/jira/browse/HADOOP-8355
Project: Hadoop Common
Issue Type: Bug
Components: security
Affects Versions: 2.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Minor
Fix For: 2.0.0

If the auth-token is NULL, it means the authenticator has not authenticated the request and has already issued an UNAUTHORIZED response, so there is no need to throw an exception and then immediately catch it and log it. The 'else throw' can be removed.
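A sketch of the control flow being described, with hypothetical names (the real code is the SPNEGO authentication filter in hadoop-auth): a null token means the authenticator already committed the UNAUTHORIZED response, so the filter can simply stop rather than throwing an exception it then catches and logs itself.

```java
// Hypothetical condensation of the filter logic in question. A null token
// means the authenticator has already sent 401 UNAUTHORIZED, so there is
// nothing left for the filter to do -- no need for an 'else throw' that is
// immediately caught and logged.
public class SpnegoFilterSketch {
    /** Returns true if the request should proceed down the filter chain. */
    public static boolean shouldProceed(Object authToken) {
        if (authToken != null) {
            return true;  // authenticated: continue the chain
        }
        // Response already committed as UNAUTHORIZED by the authenticator.
        return false;
    }
}
```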
[jira] [Created] (HADOOP-8356) FileSystem service loading mechanism should print the FileSystem impl it is failing to load
Alejandro Abdelnur created HADOOP-8356:
------------------------------------------

Summary: FileSystem service loading mechanism should print the FileSystem impl it is failing to load
Key: HADOOP-8356
URL: https://issues.apache.org/jira/browse/HADOOP-8356
Project: Hadoop Common
Issue Type: Improvement
Components: fs
Affects Versions: 2.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Fix For: 2.0.0

If by mistake somebody adds a FileSystem implementation to the service definition that is not ServiceLoader-friendly (it does not override the getScheme() method), or adds to the classpath an older one defined in the service definition, the exception thrown should print out the FileSystem class that is failing to load.
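A sketch of the kind of diagnostic improvement being requested, not the actual Hadoop code: iterating the ServiceLoader one provider at a time and catching ServiceConfigurationError per entry lets the caller record which implementation failed to load (the error message names the provider class) while still scanning the remaining providers.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.ServiceConfigurationError;
import java.util.ServiceLoader;

// Hypothetical helper: load all providers of a service, collecting a
// descriptive error (which names the failing provider class) for each
// implementation that cannot be instantiated, instead of aborting the
// whole scan with an anonymous failure.
public class ServiceLoaderDiagnostics {
    public static <T> List<T> loadAll(Class<T> service, List<String> errors) {
        List<T> loaded = new ArrayList<>();
        Iterator<T> it = ServiceLoader.load(service).iterator();
        while (it.hasNext()) {
            try {
                loaded.add(it.next());
            } catch (ServiceConfigurationError e) {
                // ServiceLoader's message includes the provider class name
                // that could not be loaded; record it and keep scanning.
                errors.add(e.getMessage());
            }
        }
        return loaded;
    }
}
```

This catch-and-continue pattern works because ServiceLoader instantiates providers lazily: a failure in next() skips only the offending entry rather than poisoning the whole iteration.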