Build failed in Jenkins: Hadoop-Common-trunk #550
See https://builds.apache.org/job/Hadoop-Common-trunk/550/changes

Changes:

[atm] HADOOP-8616. ViewFS configuration requires a trailing slash. Contributed by Sandy Ryza.
[atm] Revert an errant commit of HADOOP-8616.
[atm] HADOOP-8616. ViewFS configuration requires a trailing slash. Contributed by Sandy Ryza.
[acmurthy] Updated release notes for 2.0.2-alpha.
[acmurthy] HADOOP-8738. Reverted since it broke MR based system tests.
[atm] HADOOP-8851. Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests. Contributed by Ivan A. Veselovsky.
[harsh] MAPREDUCE-4695. Fix LocalRunner on trunk after MAPREDUCE-3223 broke it. Contributed by Harsh J. (harsh)

--
[...truncated 35853 lines...]
[DEBUG]   (s) debug = false
[DEBUG]   (s) effort = Default
[DEBUG]   (s) failOnError = true
[DEBUG]   (s) findbugsXmlOutput = false
[DEBUG]   (s) findbugsXmlOutputDirectory = https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target
[DEBUG]   (s) fork = true
[DEBUG]   (s) includeTests = false
[DEBUG]   (s) localRepository = id: local
      url: file:///home/jenkins/.m2/repository/
   layout: none
[DEBUG]   (s) maxHeap = 512
[DEBUG]   (s) nested = false
[DEBUG]   (s) outputDirectory = https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/site
[DEBUG]   (s) outputEncoding = UTF-8
[DEBUG]   (s) pluginArtifacts = [org.codehaus.mojo:findbugs-maven-plugin:maven-plugin:2.3.2:, com.google.code.findbugs:bcel:jar:1.3.9:compile, org.codehaus.gmaven:gmaven-mojo:jar:1.3:compile, org.codehaus.gmaven.runtime:gmaven-runtime-api:jar:1.3:compile, org.codehaus.gmaven.feature:gmaven-feature-api:jar:1.3:compile, org.codehaus.gmaven.runtime:gmaven-runtime-1.5:jar:1.3:compile, org.codehaus.gmaven.feature:gmaven-feature-support:jar:1.3:compile, org.codehaus.groovy:groovy-all-minimal:jar:1.5.8:compile, org.apache.ant:ant:jar:1.7.1:compile, org.apache.ant:ant-launcher:jar:1.7.1:compile, jline:jline:jar:0.9.94:compile, org.codehaus.plexus:plexus-interpolation:jar:1.1:compile,
org.codehaus.gmaven:gmaven-plugin:jar:1.3:compile, org.codehaus.gmaven.runtime:gmaven-runtime-loader:jar:1.3:compile, org.codehaus.gmaven.runtime:gmaven-runtime-support:jar:1.3:compile, org.sonatype.gshell:gshell-io:jar:2.0:compile, com.thoughtworks.qdox:qdox:jar:1.10:compile, org.apache.maven.shared:file-management:jar:1.2.1:compile, org.apache.maven.shared:maven-shared-io:jar:1.1:compile, commons-lang:commons-lang:jar:2.4:compile, org.slf4j:slf4j-api:jar:1.5.10:compile, org.sonatype.gossip:gossip:jar:1.2:compile, org.apache.maven.reporting:maven-reporting-impl:jar:2.1:compile, commons-validator:commons-validator:jar:1.2.0:compile, commons-beanutils:commons-beanutils:jar:1.7.0:compile, commons-digester:commons-digester:jar:1.6:compile, commons-logging:commons-logging:jar:1.0.4:compile, oro:oro:jar:2.0.8:compile, xml-apis:xml-apis:jar:1.0.b2:compile, org.codehaus.groovy:groovy-all:jar:1.7.4:compile, org.apache.maven.reporting:maven-reporting-api:jar:3.0:compile, org.apache.maven.doxia:doxia-core:jar:1.1.3:compile, org.apache.maven.doxia:doxia-logging-api:jar:1.1.3:compile, xerces:xercesImpl:jar:2.9.1:compile, commons-httpclient:commons-httpclient:jar:3.1:compile, commons-codec:commons-codec:jar:1.2:compile, org.apache.maven.doxia:doxia-sink-api:jar:1.1.3:compile, org.apache.maven.doxia:doxia-decoration-model:jar:1.1.3:compile, org.apache.maven.doxia:doxia-site-renderer:jar:1.1.3:compile, org.apache.maven.doxia:doxia-module-xhtml:jar:1.1.3:compile, org.apache.maven.doxia:doxia-module-fml:jar:1.1.3:compile, org.codehaus.plexus:plexus-i18n:jar:1.0-beta-7:compile, org.codehaus.plexus:plexus-velocity:jar:1.1.7:compile, org.apache.velocity:velocity:jar:1.5:compile, commons-collections:commons-collections:jar:3.2:compile, org.apache.maven.shared:maven-doxia-tools:jar:1.2.1:compile, commons-io:commons-io:jar:1.4:compile, com.google.code.findbugs:findbugs-ant:jar:1.3.9:compile, com.google.code.findbugs:findbugs:jar:1.3.9:compile, 
com.google.code.findbugs:jsr305:jar:1.3.9:compile, com.google.code.findbugs:jFormatString:jar:1.3.9:compile, com.google.code.findbugs:annotations:jar:1.3.9:compile, dom4j:dom4j:jar:1.6.1:compile, jaxen:jaxen:jar:1.1.1:compile, jdom:jdom:jar:1.0:compile, xom:xom:jar:1.0:compile, xerces:xmlParserAPIs:jar:2.6.2:compile, xalan:xalan:jar:2.6.0:compile, com.ibm.icu:icu4j:jar:2.6.1:compile, asm:asm:jar:3.1:compile, asm:asm-analysis:jar:3.1:compile, asm:asm-commons:jar:3.1:compile, asm:asm-util:jar:3.1:compile, asm:asm-tree:jar:3.1:compile, asm:asm-xml:jar:3.1:compile, jgoodies:plastic:jar:1.2.0:compile, org.codehaus.plexus:plexus-resources:jar:1.0-alpha-4:compile, org.codehaus.plexus:plexus-utils:jar:1.5.1:compile]
[DEBUG]   (s) project = MavenProject: org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @
[jira] [Created] (HADOOP-8870) NullPointerException when glob doesn't return files
Jaka Jancar created HADOOP-8870:
---

Summary: NullPointerException when glob doesn't return files
Key: HADOOP-8870
URL: https://issues.apache.org/jira/browse/HADOOP-8870
Project: Hadoop Common
Issue Type: Bug
Components: fs, fs/s3
Affects Versions: 1.0.3, 0.20.205.0
Reporter: Jaka Jancar

Reading {code}s3n://bucket/{a/*,b/*,c/*}{code}, if one of the globs matches nothing, I get:

{code}
Exception in thread "main" java.lang.NullPointerException
	at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:992)
	at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:177)
	at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
	at spark.HadoopRDD.<init>(HadoopRDD.scala:51)
	at spark.SparkContext.hadoopFile(SparkContext.scala:186)
	at spark.SparkContext.textFile(SparkContext.scala:155)
	at com.celtra.analyzer.LogAnalyzer.analyzeSufficientS3Logs(LogAnalyzer.scala:52)
	at com.celtra.analyzer.App$.main(App.scala:164)
	at com.celtra.analyzer.App.main(App.scala)
{code}

I'm not sure whether this is specific to S3 or applies to all filesystems. This was occurring in 0.20.205 and I confirmed it's still present in 1.0.3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
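The crash happens because globStatus returns null, rather than an empty array, when a pattern matches nothing, and downstream callers such as FileInputFormat.listStatus dereference the result. A minimal plain-Java sketch of the defensive pattern a caller can apply until the bug is fixed; the globStatus method below is a hypothetical stand-in for the Hadoop API, not the real implementation:

```java
import java.util.Arrays;

public class GlobNullDemo {
    // Hypothetical stand-in for FileSystem.globStatus: returns null when
    // a glob matches nothing, mirroring the behavior reported here.
    static String[] globStatus(String pattern) {
        return pattern.contains("*") ? null : new String[] { pattern };
    }

    // Defensive wrapper: normalize null to an empty array so downstream
    // code cannot hit a NullPointerException.
    static String[] safeGlobStatus(String pattern) {
        String[] matches = globStatus(pattern);
        return matches == null ? new String[0] : matches;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(safeGlobStatus("b/*")));   // prints "[]"
        System.out.println(Arrays.toString(safeGlobStatus("a/file"))); // prints "[a/file]"
    }
}
```

The same normalization (or an equivalent null check inside globStatus itself) is the likely shape of the eventual fix.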
[jira] [Created] (HADOOP-8871) FileUtil.symLink: if shell command fails, logging message does not correctly print command attempted
Chris Nauroth created HADOOP-8871:
---

Summary: FileUtil.symLink: if shell command fails, logging message does not correctly print command attempted
Key: HADOOP-8871
URL: https://issues.apache.org/jira/browse/HADOOP-8871
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 1-win
Reporter: Chris Nauroth

When the shell command call to create a symlink fails, we attempt to log the details. We log cmd, which is a String[], and Java prints it using the default Object.toString implementation. We need to change this to print the contents of the array so that we see the actual command executed.
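The fix is mechanical: format the array with java.util.Arrays.toString instead of relying on the default Object.toString. A small sketch; the cmd contents here are illustrative, not the actual array FileUtil.symLink builds:

```java
import java.util.Arrays;

public class SymlinkLogDemo {
    public static void main(String[] args) {
        // Illustrative command array; FileUtil.symLink builds something similar.
        String[] cmd = { "ln", "-s", "/target", "/linkname" };

        // Default Object.toString: type name plus hash, e.g. "[Ljava.lang.String;@1b6d3586",
        // which is what the current log message shows.
        System.out.println("logged today: " + cmd);

        // Arrays.toString shows the actual command attempted.
        System.out.println("logged fixed: " + Arrays.toString(cmd)); // [ln, -s, /target, /linkname]
    }
}
```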
[jira] [Created] (HADOOP-8872) FileSystem#length returns zero for symlinks on windows+java6
Ivan Mitic created HADOOP-8872:
---

Summary: FileSystem#length returns zero for symlinks on windows+java6
Key: HADOOP-8872
URL: https://issues.apache.org/jira/browse/HADOOP-8872
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic

RawLocalFileSystem does not work well with symbolic links on Windows. Specifically, calling FileSystem#length on a path that is a symlink will return zero. This causes problems in some objects that use LocalFileSystem to access local files. One example is SequenceFile. The issue is caused by Java 6's File#length returning zero for symbolic links on Windows. On Java 7, we will no longer have this problem.
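On Java 7 the quirk can also be sidestepped with java.nio.file, where Files.size resolves symbolic links on every platform. A hedged sketch under Java 7+ (the temp paths are illustrative, and this does not reproduce the Java 6 Windows behavior itself; it only shows the NIO alternative):

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkLengthDemo {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("symlink-demo");
        Path target = dir.resolve("target.txt");
        Files.write(target, "hello".getBytes("UTF-8")); // 5-byte target

        // Creating a symlink may require extra privileges on Windows.
        Path link = dir.resolve("link.txt");
        Files.createSymbolicLink(link, target);

        // Java 6 File#length could return 0 for the link on Windows;
        // Files.size follows the link and reports the target's size.
        System.out.println(new File(link.toString()).length());
        System.out.println(Files.size(link));
    }
}
```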
[jira] [Created] (HADOOP-8873) Port HADOOP-8175 (Add mkdir -p flag) to branch-1
Eli Collins created HADOOP-8873:
---

Summary: Port HADOOP-8175 (Add mkdir -p flag) to branch-1
Key: HADOOP-8873
URL: https://issues.apache.org/jira/browse/HADOOP-8873
Project: Hadoop Common
Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Eli Collins

Per HADOOP-8551, let's port the mkdir -p option to branch-1 for a 1.x release to help users transition to the new shell behavior. In Hadoop 2.x, mkdir currently requires the -p option to create parent directories, but a program that specifies it won't work on 1.x, since 1.x doesn't support this option.
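The semantics being ported mirror plain Java's File API: mkdir fails when intermediate parents are missing, while mkdirs (the -p analogue) creates them. A small sketch of that distinction, using a local temp directory as a stand-in for a Hadoop filesystem path:

```java
import java.io.File;
import java.nio.file.Files;

public class MkdirsDemo {
    public static void main(String[] args) throws Exception {
        File base = Files.createTempDirectory("mkdir-demo").toFile();
        File nested = new File(base, "a/b/c");

        // mkdir (no -p analogue): fails because parents a/ and a/b/ do not exist.
        System.out.println(nested.mkdir());   // false

        // mkdirs (-p analogue): creates the missing parents, then succeeds.
        System.out.println(nested.mkdirs());  // true
    }
}
```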
[jira] [Resolved] (HADOOP-8871) FileUtil.symLink: if shell command fails, logging message does not correctly print command attempted
[ https://issues.apache.org/jira/browse/HADOOP-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas resolved HADOOP-8871.
---

Resolution: Fixed
Fix Version/s: 1-win
Hadoop Flags: Reviewed

I committed the patch. Thank you, Chris.

> FileUtil.symLink: if shell command fails, logging message does not correctly print command attempted
>
> Key: HADOOP-8871
> URL: https://issues.apache.org/jira/browse/HADOOP-8871
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 1-win
> Reporter: Chris Nauroth
> Assignee: Chris Nauroth
> Fix For: 1-win
> Attachments: HADOOP-8871-branch-1-win.patch, HADOOP-8871-branch-1-win.patch
>
> When the shell command call to create a symlink fails, we attempt to log the details. We log cmd, which is a String[], and Java prints it using the default Object.toString implementation. We need to change this to print the contents of the array so that we see the actual command executed.
[jira] [Created] (HADOOP-8874) HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly handled
John Gordon created HADOOP-8874:
---

Summary: HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly handled
Key: HADOOP-8874
URL: https://issues.apache.org/jira/browse/HADOOP-8874
Project: Hadoop Common
Issue Type: Bug
Components: scripts, security
Affects Versions: 1-win
Environment: Called from external process with -D flag vs HADOOP_HOME set.
Reporter: John Gordon
Fix For: 1-win

There is a -D flag to set hadoop.home, which is specified in the hadoop wrapper scripts. This is particularly useful if you want side-by-side (SxS) execution of two or more versions of hadoop (e.g. rolling upgrade). However, it isn't honored at all. HADOOP_HOME is used in 3-4 places to find non-java hadoop components such as schedulers, scripts, shared libraries, or with the Windows changes -- binaries. Ideally, these should all resolve the path in a consistent manner, and callers should have a similar onus applied when trying to resolve an invalid path to their components. This is particularly relevant to scripts or binaries that may have security impact, as absolute path resolution is generally safer and more stable than relative path resolution.
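One consistent resolution order the report implies: honor -Dhadoop.home when present, otherwise fall back to the HADOOP_HOME environment variable. A hypothetical helper sketch of that policy, not the actual Hadoop code:

```java
public class HadoopHomeResolver {
    // Hypothetical resolver: prefer the -Dhadoop.home system property
    // (set by the hadoop wrapper script), fall back to the HADOOP_HOME
    // environment variable, and return null if neither is set.
    static String resolveHadoopHome() {
        String fromProp = System.getProperty("hadoop.home");
        if (fromProp != null && !fromProp.isEmpty()) {
            return fromProp;
        }
        return System.getenv("HADOOP_HOME");
    }

    public static void main(String[] args) {
        // Simulate launch via the wrapper script's -Dhadoop.home flag.
        System.setProperty("hadoop.home", "/opt/hadoop-2.0");
        System.out.println(resolveHadoopHome()); // prints "/opt/hadoop-2.0"
    }
}
```

Funneling every lookup through one such helper (with a hard failure on an invalid path) would give the consistent, absolute-path resolution the report asks for.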