Re: Maven inconsistently overriding Zookeeper version

Hi Sean,

Probably the test scope is missing from the zookeeper test-jar dependency of hadoop-yarn-server-resourcemanager. Attaching a patch to fix this. Here is a log after applying the patch: https://gist.github.com/aajisaka/057cb3d6d26c05a541f5b5de06f70ded

Regards,
Akira

On 2017/07/19 2:26, Sean Mackrory wrote:
> There's some Maven magic going on here that I don't understand:
> https://gist.github.com/mackrorysd/61c689f04c3595bcda9c256ec6b2da75
>
> On line 2 of the gist, you can see me checking which ZooKeeper artifacts
> get picked up when running dependency:tree with the ZooKeeper version
> overridden with -Dzookeeper.version. It's all 3.5.3-beta, the version I'm
> trying to override it to.
>
> On line 84 of the gist, you can see me doing a clean build of Hadoop with
> the same ZooKeeper version, but at the end it appears that
> hadoop-yarn-server-resourcemanager is sometimes depending on 3.4.9 (the
> version originally in the POM) and other times 3.5.3-beta. I can't seem to
> work around that, or even explain it. Anybody have ideas?

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
index 9b8f8afc687..41d24d19bae 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
@@ -216,6 +216,7 @@
       <groupId>org.apache.zookeeper</groupId>
       <artifactId>zookeeper</artifactId>
       <type>test-jar</type>
+      <scope>test</scope>

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
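For context on why the missing scope matters: a Maven dependency declared without an explicit <scope> defaults to compile, so the test-jar dependency can participate in compile-scope version mediation and resolve to a different ZooKeeper version than the overridden one. After the patch, the dependency block would read roughly as follows (a sketch: the <dependency> wrapper is the standard Maven structure, inferred from the diff hunk rather than shown verbatim in it):

```xml
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
```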
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/468/

[Jul 17, 2017 1:54:16 PM] (szetszwo) HDFS-12138. Remove redundant 'public' modifiers from BlockCollection.
[Jul 17, 2017 2:11:14 PM] (Arun Suresh) YARN-6706. Refactor ContainerScheduler to make oversubscription change
[Jul 17, 2017 9:32:37 PM] (aajisaka) HADOOP-14539. Move commons logging APIs over to slf4j in hadoop-common.
[Jul 17, 2017 11:19:09 PM] (sunilg) Addendum patch for YARN-5731

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    FindBugs : module:hadoop-hdfs-project/hadoop-hdfs-client

       Possible exposure of partially initialized object in org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At DFSClient.java:[line 2888]
       org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) makes inefficient use of keySet iterator instead of entrySet iterator At SlowDiskReports.java:[line 105]

    FindBugs : module:hadoop-hdfs-project/hadoop-hdfs

       Possible null pointer dereference in org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to return value of called method Dereferenced at JournalNode.java:[line 302]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String) unconditionally sets the field clusterId At HdfsServerConstants.java:[line 193]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int) unconditionally sets the field force At HdfsServerConstants.java:[line 217]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean) unconditionally sets the field isForceFormat At HdfsServerConstants.java:[line 229]
       org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean) unconditionally sets the field isInteractiveFormat At HdfsServerConstants.java:[line 237]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, int, HardLink, boolean, File, List) due to return value of called method Dereferenced at DataStorage.java:[line 1339]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String, long) due to return value of called method Dereferenced at NNStorageRetentionManager.java:[line 258]
       Possible null pointer dereference in org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, BasicFileAttributes) due to return value of called method Dereferenced at NNUpgradeUtil.java:[line 133]
       Useless condition: argv.length >= 1 at this point At DFSAdmin.java:[line 2085]
       Useless condition: numBlocks == -1 at this point At ImageLoaderCurrent.java:[line 727]

    FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager

       Useless object stored in variable removedNullContainers of method org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) At NodeStatusUpdaterImpl.java:[line 642]
       org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() makes inefficient use of keySet iterator instead of entrySet iterator At NodeStatusUpdaterImpl.java:[line 719]
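Two of the warnings above flag the same FindBugs pattern: iterating a map's keySet() and then calling get() for each key does a second lookup that entrySet() avoids. A minimal illustration (hypothetical code, not the flagged Hadoop classes):

```java
// Contrast of the two iteration styles FindBugs distinguishes.
import java.util.Map;

public class EntrySetExample {
    // Flagged style: one hash lookup per key via get(), on top of the iteration.
    static int sumViaKeySet(Map<String, Integer> m) {
        int sum = 0;
        for (String k : m.keySet()) {
            sum += m.get(k); // redundant lookup
        }
        return sum;
    }

    // Preferred style: the entry already carries the value, no second lookup.
    static int sumViaEntrySet(Map<String, Integer> m) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }
}
```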
[jira] [Created] (HADOOP-14669) GenericTestUtils.waitFor should use monotonic time
Jason Lowe created HADOOP-14669:
-----------------------------------

             Summary: GenericTestUtils.waitFor should use monotonic time
                 Key: HADOOP-14669
                 URL: https://issues.apache.org/jira/browse/HADOOP-14669
             Project: Hadoop Common
          Issue Type: Bug
          Components: test
    Affects Versions: 3.0.0-alpha4
            Reporter: Jason Lowe
            Priority: Trivial

GenericTestUtils.waitFor should be calling Time.monotonicNow rather than Time.now. Otherwise, if the system clock adjusts during unit testing, the timeout period could be incorrect.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
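The failure mode is easy to see in a sketch. Below is a simplified, self-contained waitFor loop driven by a monotonic clock; this is illustrative only, not the actual GenericTestUtils code, and it calls System.nanoTime() directly on the assumption that Hadoop's Time.monotonicNow is backed by it:

```java
// Sketch (hypothetical, Hadoop-free): a poll-until-true helper whose timeout
// is measured on the monotonic clock, so a wall-clock step (NTP, manual
// adjustment) during the wait cannot shrink or stretch the timeout window.
import java.util.function.Supplier;

public class MonotonicWait {
    /**
     * Polls {@code check} every {@code checkEveryMillis} ms until it returns
     * true or {@code waitForMillis} ms elapse on the monotonic clock.
     * Returns true if the condition was met, false on timeout or interrupt.
     */
    public static boolean waitFor(Supplier<Boolean> check,
                                  long checkEveryMillis,
                                  long waitForMillis) {
        // System.nanoTime() is unaffected by system clock adjustments,
        // unlike System.currentTimeMillis().
        long deadline = System.nanoTime() + waitForMillis * 1_000_000L;
        while (!check.get()) {
            if (System.nanoTime() - deadline >= 0) {
                return false; // timed out
            }
            try {
                Thread.sleep(checkEveryMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }
}
```

Note the deadline comparison uses the difference `System.nanoTime() - deadline >= 0` rather than `>=` on raw values, which stays correct even if nanoTime wraps.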
[jira] [Created] (HADOOP-14668) Remove Configurable Default Sequence File Compression Type
BELUGA BEHR created HADOOP-14668:
------------------------------------

             Summary: Remove Configurable Default Sequence File Compression Type
                 Key: HADOOP-14668
                 URL: https://issues.apache.org/jira/browse/HADOOP-14668
             Project: Hadoop Common
          Issue Type: Improvement
          Components: io
    Affects Versions: 3.0.0-alpha3
            Reporter: BELUGA BEHR
            Priority: Trivial
             Fix For: 2.8.1

It is confusing to have two different ways to set the Sequence File compression type. In a basic configuration, I can set _mapreduce.output.fileoutputformat.compress.type_ or _io.seqfile.compression.type_. If I would like to set a default value, I should set it in the cluster environment's mapred-site.xml via _mapreduce.output.fileoutputformat.compress.type_.

Please remove references to the magic string _io.seqfile.compression.type_, remove the {{setDefaultCompressionType}} method, and have {{getDefaultCompressionType}} return a value hard-coded to {{CompressionType.RECORD}}. This will make administration easier, since only one configuration setting needs to be interrogated.

{code:title=org.apache.hadoop.io.SequenceFile}
  /**
   * Get the compression type for the reduce outputs
   * @param job the job config to look in
   * @return the kind of compression to use
   */
  static public CompressionType getDefaultCompressionType(Configuration job) {
    String name = job.get("io.seqfile.compression.type");
    return name == null ? CompressionType.RECORD : CompressionType.valueOf(name);
  }

  /**
   * Set the default compression type for sequence files.
   * @param job the configuration to modify
   * @param val the new compression type (none, block, record)
   */
  static public void setDefaultCompressionType(Configuration job,
                                               CompressionType val) {
    job.set("io.seqfile.compression.type", val.toString());
  }
{code}
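The before/after behavior can be sketched in a self-contained form, with Hadoop's Configuration stood in for by a plain Map (the names `current` and `proposed` are hypothetical; this is not the actual patch):

```java
// Sketch of HADOOP-14668: current() reproduces the existing lookup of the
// magic key, proposed() shows the hard-coded default the issue asks for.
import java.util.Map;

public class SeqFileDefaults {
    enum CompressionType { NONE, RECORD, BLOCK }

    // Current behavior: the magic "io.seqfile.compression.type" key wins if set.
    static CompressionType current(Map<String, String> conf) {
        String name = conf.get("io.seqfile.compression.type");
        return name == null ? CompressionType.RECORD
                            : CompressionType.valueOf(name);
    }

    // Proposed behavior: the default is always RECORD; compression would be
    // configured only via mapreduce.output.fileoutputformat.compress.type.
    static CompressionType proposed(Map<String, String> conf) {
        return CompressionType.RECORD;
    }
}
```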
Maven inconsistently overriding Zookeeper version
There's some Maven magic going on here that I don't understand:
https://gist.github.com/mackrorysd/61c689f04c3595bcda9c256ec6b2da75

On line 2 of the gist, you can see me checking which ZooKeeper artifacts get picked up when running dependency:tree with the ZooKeeper version overridden with -Dzookeeper.version. It's all 3.5.3-beta, the version I'm trying to override it to.

On line 84 of the gist, you can see me doing a clean build of Hadoop with the same ZooKeeper version, but at the end it appears that hadoop-yarn-server-resourcemanager is sometimes depending on 3.4.9 (the version originally in the POM) and other times 3.5.3-beta. I can't seem to work around that, or even explain it. Anybody have ideas?