For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/

[Jul 18, 2017 12:35:08 PM] (rchiang) YARN-6798. Fix NM startup failure with old 
state store due to version
[Jul 18, 2017 2:23:41 PM] (jlowe) HADOOP-14637. GenericTestUtils.waitFor needs 
to check condition again
[Jul 18, 2017 4:38:07 PM] (yufei) YARN-6778. In ResourceWeights, weights and 
setWeights() should be final.
[Jul 18, 2017 10:40:52 PM] (rohithsharmaks) YARN-6819. Application report fails 
if app rejected due to nodesize.
[Jul 19, 2017 12:13:06 AM] (jitendra) HADOOP-14642. wasb: add support for 
caching Authorization and SASKeys.




-1 overall


The following subsystems voted -1:
    findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime longer than 1h  0m  0s)
    unit


Specific tests:

    FindBugs :

       module:hadoop-hdfs-project/hadoop-hdfs-client 
       Possible exposure of partially initialized object in 
org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) 
At DFSClient.java:[line 2888] 
       org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) 
makes inefficient use of keySet iterator instead of entrySet iterator 
At SlowDiskReports.java:[line 105] 
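
For context, the keySet-iterator warning above flags a common pattern: calling
get() for each key while iterating keySet() performs a redundant hash lookup per
entry. Below is a minimal, hypothetical sketch (not the actual SlowDiskReports
code) of the flagged pattern and the entrySet form FindBugs prefers.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical example class, not Hadoop code.
    public class KeySetVsEntrySet {
        public static void main(String[] args) {
            // Stand-in for a map such as the one SlowDiskReports compares.
            Map<String, Double> latencies = new HashMap<>();
            latencies.put("disk1", 12.5);

            // Flagged pattern: iterating keySet() and calling get() per key
            // does a second hash lookup for every entry.
            for (String disk : latencies.keySet()) {
                System.out.println(disk + " -> " + latencies.get(disk));
            }

            // Form FindBugs prefers: entrySet() yields key and value together,
            // avoiding the redundant lookup.
            for (Map.Entry<String, Double> e : latencies.entrySet()) {
                System.out.println(e.getKey() + " -> " + e.getValue());
            }
        }
    }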

    FindBugs :

       module:hadoop-hdfs-project/hadoop-hdfs 
       Possible null pointer dereference in 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() 
due to return value of called method Dereferenced 
at JournalNode.java:[line 302] 
       
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String) 
unconditionally sets the field clusterId 
At HdfsServerConstants.java:[line 193] 
       
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int) 
unconditionally sets the field force 
At HdfsServerConstants.java:[line 217] 
       
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean) 
unconditionally sets the field isForceFormat 
At HdfsServerConstants.java:[line 229] 
       
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean) 
unconditionally sets the field isInteractiveFormat 
At HdfsServerConstants.java:[line 237] 
       Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, 
int, HardLink, boolean, File, List) due to return value of called method 
Dereferenced at DataStorage.java:[line 1339] 
       Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String, 
long) due to return value of called method Dereferenced 
at NNStorageRetentionManager.java:[line 258] 
       Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, 
BasicFileAttributes) due to return value of called method Dereferenced 
at NNUpgradeUtil.java:[line 133] 
       Useless condition:argv.length >= 1 at this point At DFSAdmin.java:[line 
2085] 
       Useless condition:numBlocks == -1 at this point At 
ImageLoaderCurrent.java:[line 727] 
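
For context, the null-pointer warnings above all follow the same FindBugs
pattern: a called method can return null (File.listFiles() is a typical case),
and the return value is dereferenced without a check. Below is a minimal,
hypothetical sketch (not the actual JournalNode, DataStorage, or NameNode code)
of the flagged pattern and a guarded alternative.

    import java.io.File;

    // Hypothetical example class, not Hadoop code.
    public class NullReturnCheck {
        public static void main(String[] args) {
            File dir = new File(args.length > 0 ? args[0] : ".");

            // Flagged pattern: File.listFiles() may return null (I/O error or
            // the path is not a directory), so dereferencing the result
            // directly risks a NullPointerException:
            //   int count = dir.listFiles().length;

            // Guarded alternative: check the return value before use.
            File[] children = dir.listFiles();
            int count = (children == null) ? 0 : children.length;
            System.out.println("entries: " + count);
        }
    }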

    FindBugs :

       
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 
       Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List) 
At NodeStatusUpdaterImpl.java:[line 642] 
       
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache() 
makes inefficient use of keySet iterator instead of entrySet iterator 
At NodeStatusUpdaterImpl.java:[line 719] 
       Hard coded reference to an absolute pathname in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext) 
At DockerLinuxContainerRuntime.java:[line 455] 
       
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus() 
makes inefficient use of keySet iterator instead of entrySet iterator 
At ContainerLocalizer.java:[line 357] 
       
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics 
is a mutable collection which should be package protected 
At ContainerMetrics.java:[line 134] 
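
For context, the usageMetrics warning above is FindBugs' mutable-collection
check: a public mutable collection can be modified by any caller, bypassing the
owning class's invariants. Below is a minimal, hypothetical sketch
(MetricsHolder is an invented class, not the actual ContainerMetrics code)
showing the flagged pattern and one conventional remedy.

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical example class, not Hadoop code.
    public class MetricsHolder {
        // Flagged pattern: a public mutable collection lets any caller modify
        // shared state directly.
        public static final Map<String, Long> usageMetrics = new HashMap<>();

        // One conventional remedy: keep the map package-private (as FindBugs
        // suggests) and hand out a read-only view when access is needed.
        static final Map<String, Long> internalMetrics = new HashMap<>();

        public static Map<String, Long> snapshot() {
            return Collections.unmodifiableMap(internalMetrics);
        }

        public static void main(String[] args) {
            internalMetrics.put("vcores.used", 4L);
            System.out.println(snapshot());
        }
    }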

    Failed junit tests :

       hadoop.fs.shell.TestCopyFromLocal 
       hadoop.fs.sftp.TestSFTPFileSystem 
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
       hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
       hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 
       hadoop.yarn.server.TestContainerManagerSecurity 
       hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
       hadoop.yarn.client.api.impl.TestNMClient 
       hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken 

    Timed out junit tests :

       org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
       org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
       
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
      

   cc:

       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/diff-compile-javac-root.txt
  [300K]

   checkstyle:

       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/whitespace-eol.txt
  [12M]
       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/whitespace-tabs.txt
  [1.2M]

   findbugs:

       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
  [8.0K]
       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
  [16K]
       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
  [12K]

   javadoc:

       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [152K]
       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [344K]
       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [60K]
       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [44K]
       
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/469/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]

Powered by Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org
---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
