For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/
[Aug 2, 2017 7:03:25 AM] (jianhe) YARN-6872. [Addendum patch] Ensure apps could
run given NodeLabels are
[Aug 2, 2017 11:48:06 AM] (stevel) HADOOP-14709. Fix checkstyle warnings in
ContractTestUtils. Contributed
[Aug 2, 2017 3:59:33 PM] (epayne) YARN-6846. Nodemanager can fail to fully
delete application local
[Aug 2, 2017 4:25:19 PM] (yufei) YARN-6895. [FairScheduler] Preemption
reservation may cause regular
[Aug 2, 2017 5:53:22 PM] (epayne) YARN-5349.
[Aug 2, 2017 6:25:05 PM] (mackrorysd) HADOOP-13595. Rework hadoop_usage to be
broken up by
[Aug 2, 2017 7:12:48 PM] (cdouglas) HDFS-6984. Serialize FileStatus via
protobuf.
[Aug 2, 2017 9:22:46 PM] (manojpec) HDFS-9388. Decommission related code to
support Maintenance State for
[Aug 3, 2017 1:57:10 PM] (sunilg) YARN-6678. Handle IllegalStateException in
Async Scheduling mode of
[Aug 3, 2017 4:52:35 PM] (haibochen) YARN-6674 Add memory cgroup settings for
opportunistic containers.
[Aug 3, 2017 4:56:51 PM] (haibochen) YARN-6673 Add cpu cgroup configurations
for opportunistic containers.
[Aug 3, 2017 6:33:37 PM] (yufei) YARN-6832. Tests use
assertTrue(....equals(...)) instead of
[Aug 3, 2017 6:44:34 PM] (yufei) MAPREDUCE-6914. Tests use
assertTrue(....equals(...)) instead of
[Aug 3, 2017 9:18:03 PM] (subru) YARN-6932. Fix
TestFederationRMFailoverProxyProvider test case failure.
[Aug 3, 2017 10:44:51 PM] (wang) HDFS-12131. Add some of the FSNamesystem JMX
values as metrics.
[Aug 4, 2017 4:15:40 AM] (Arun Suresh) YARN-5977. ContainerManagementProtocol
changes to support change of
[Aug 4, 2017 5:35:57 AM] (aajisaka) HADOOP-14706. Adding a helper method to
determine whether a log is Log4j
[Aug 4, 2017 10:09:08 AM] (stevel) HADOOP-14126. Remove jackson, joda and other
transient aws SDK
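Two of the commits above, YARN-6832 and MAPREDUCE-6914, change tests that use assertTrue(....equals(...)), presumably in favour of assertEquals. A minimal JUnit 4 sketch of that cleanup, with a made-up test class and values rather than the actual patched tests:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class AssertStyleExample {
      @Test
      public void queueNameIsDefault() {
        String expected = "default";
        String actual = "default";

        // Discouraged: a failure only reports "expected true but was false",
        // hiding which two values actually differed.
        assertTrue(expected.equals(actual));

        // Preferred: a failure message prints both the expected and actual value.
        assertEquals(expected, actual);
      }
    }

assertEquals also treats two nulls as equal, whereas the equals() form throws NullPointerException if the left-hand value is null.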
-1 overall
The following subsystems voted -1:
findbugs unit
The following subsystems voted -1 but were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit
Specific tests:
FindBugs :
module:hadoop-hdfs-project/hadoop-hdfs-client
Possible exposure of partially initialized object in
org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At
DFSClient.java:[line 2906]
org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object)
makes inefficient use of keySet iterator instead of entrySet iterator At
SlowDiskReports.java:[line 105]
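The SlowDiskReports.equals warning above is the keySet-iterator inefficiency FindBugs reports when code iterates a map's keys and then looks each value up again with get(). A small illustrative sketch; the map name and types are invented, not the actual SlowDiskReports fields:

    import java.util.HashMap;
    import java.util.Map;

    public class KeySetVsEntrySet {
      public static void main(String[] args) {
        Map<String, Double> diskLatencies = new HashMap<>();
        diskLatencies.put("disk1", 12.5);
        diskLatencies.put("disk2", 7.0);

        // Shape FindBugs flags: every iteration pays for an extra get() lookup.
        for (String disk : diskLatencies.keySet()) {
          System.out.println(disk + " -> " + diskLatencies.get(disk));
        }

        // Preferred shape: entrySet() yields key and value together, no second lookup.
        for (Map.Entry<String, Double> e : diskLatencies.entrySet()) {
          System.out.println(e.getKey() + " -> " + e.getValue());
        }
      }
    }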
FindBugs :
module:hadoop-hdfs-project/hadoop-hdfs
Possible null pointer dereference in
org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to
return value of called method Dereferenced at JournalNode.java:[line 302]
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String)
unconditionally sets the field clusterId At HdfsServerConstants.java:[line 193]
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int)
unconditionally sets the field force At HdfsServerConstants.java:[line 217]
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean)
unconditionally sets the field isForceFormat At HdfsServerConstants.java:[line 229]
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean)
unconditionally sets the field isInteractiveFormat At HdfsServerConstants.java:[line 237]
Possible null pointer dereference in
org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File,
int, HardLink, boolean, File, List) due to return value of called method
Dereferenced at DataStorage.java:[line 1339]
Possible null pointer dereference in
org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String,
long) due to return value of called method Dereferenced at
NNStorageRetentionManager.java:[line 258]
Useless condition:argv.length >= 1 at this point At DFSAdmin.java:[line
2100]
Useless condition:numBlocks == -1 at this point At
ImageLoaderCurrent.java:[line 727]
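Several of the warnings above are FindBugs' "possible null pointer dereference due to return value of called method" pattern. A common trigger is java.io.File.listFiles(), which returns null on I/O error; whether that is the exact call flagged in DataStorage or NNStorageRetentionManager is only visible in the linked reports, but the general shape and fix look like this (the path is a placeholder):

    import java.io.File;

    public class ListFilesNullCheck {
      public static void main(String[] args) {
        File dir = new File("/tmp/example-dir"); // hypothetical directory

        // listFiles() returns null if dir is not a directory or an I/O error
        // occurs, so dereferencing the result unconditionally can throw NPE.
        File[] files = dir.listFiles();
        if (files == null) {
          System.err.println("Could not list " + dir);
          return;
        }
        System.out.println("Found " + files.length + " entries");
      }
    }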
FindBugs :
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
Useless object stored in variable removedNullContainers of method
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
At NodeStatusUpdaterImpl.java:[line 642]
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
makes inefficient use of keySet iterator instead of entrySet iterator At
NodeStatusUpdaterImpl.java:[line 719]
Hard coded reference to an absolute pathname in
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
At DockerLinuxContainerRuntime.java:[line 490]
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
makes inefficient use of keySet iterator instead of entrySet iterator At
ContainerLocalizer.java:[line 357]
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
is a mutable collection which should be package protected At
ContainerMetrics.java:[line 134]
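The ContainerMetrics.usageMetrics warning above is FindBugs' check for a static collection that outside code can mutate. A sketch of the pattern and two common remedies, using invented names rather than the real ContainerMetrics field:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class MutableStaticCollection {
      // Shape FindBugs flags: any caller can add to or clear this list.
      public static final List<String> USAGE_METRICS = new ArrayList<>();

      // Remedy 1: drop the visibility to package-private, as the warning suggests.
      static final List<String> packageScopedMetrics = new ArrayList<>();

      // Remedy 2: hand out an unmodifiable view instead of the raw list.
      public static List<String> getUsageMetrics() {
        return Collections.unmodifiableList(USAGE_METRICS);
      }

      public static void main(String[] args) {
        USAGE_METRICS.add("cpu");
        System.out.println(getUsageMetrics());
      }
    }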
Failed junit tests :
hadoop.ipc.TestIPC
hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
hadoop.yarn.client.api.impl.TestAMRMProxy
hadoop.yarn.client.api.impl.TestNMClient
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
hadoop.fs.adl.TestGetFileStatus
Timed out junit tests :
org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA
org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels
cc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/diff-compile-cc-root.txt
[4.0K]
javac:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/diff-compile-javac-root.txt
[324K]
checkstyle:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/diff-checkstyle-root.txt
[17M]
pylint:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/diff-patch-pylint.txt
[20K]
shellcheck:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/diff-patch-shellcheck.txt
[20K]
shelldocs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/diff-patch-shelldocs.txt
[12K]
whitespace:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/whitespace-eol.txt
[11M]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/whitespace-tabs.txt
[1.2M]
findbugs:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
[8.0K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
[12K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
[12K]
javadoc:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/diff-javadoc-javadoc-root.txt
[1.9M]
unit:
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
[148K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
[248K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
[64K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
[52K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt
[12K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/483/artifact/out/patch-unit-hadoop-tools_hadoop-azure-datalake.txt
[16K]
Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org