For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/
[Apr 3, 2019 4:16:59 AM] (aajisaka) HADOOP-16226. new Path(String str) does not remove all the trailing
[Apr 3, 2019 6:01:30 AM] (yqlin) HDDS-1365. Fix error handling in KeyValueContainerCheck. Contributed by
[Apr 3, 2019 10:35:02 AM] (aajisaka) HADOOP-16232. Fix errors in the checkstyle configration xmls.
[Apr 3, 2019 1:27:28 PM] (sunilg) YARN-4901. QueueMetrics needs to be cleared before MockRM is
[Apr 3, 2019 4:49:10 PM] (shashikant) HDDS-1164. Add New blockade Tests to test Replica Manager. Contributed
[Apr 3, 2019 5:56:33 PM] (todd) HDFS-14394: Add -std=c99 / -std=gnu99 to libhdfs compile flags
[Apr 3, 2019 6:00:59 PM] (weichiu) HDFS-10477. Stop decommission a rack of DataNodes caused NameNode fail
[Apr 3, 2019 6:53:51 PM] (7813154+ajayydv) HDDS-1377. OM failed to start with incorrect hostname set as ip address
[Apr 3, 2019 6:59:39 PM] (mackrorysd) HADOOP-16210. Update guava to 27.0-jre in hadoop-project trunk.
[Apr 3, 2019 8:20:51 PM] (arp7) HDDS-1330 : Add a docker compose for Ozone deployment with Recon. (#669)
[Apr 3, 2019 8:23:40 PM] (github) HADOOP-16233. S3AFileStatus to declare that isEncrypted() is always true
[Apr 3, 2019 9:29:52 PM] (weichiu) HADOOP-16011. OsSecureRandom very slow compared to other SecureRandom
[Apr 3, 2019 9:52:06 PM] (arp7) HDDS-1358 : Recon Server REST API not working as expected. (#668)
[Apr 3, 2019 10:02:00 PM] (github) HDDS-1211. Test SCMChillMode failing randomly in Jenkins run (#543)
[Apr 3, 2019 11:02:19 PM] (bharat) HDDS-1324. TestOzoneManagerHA tests are flaky (#676)
[Apr 3, 2019 11:11:13 PM] (inigoiri) HDFS-14327. Using FQDN instead of IP to access servers with DNS


-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    FindBugs :

       module:hadoop-common-project/hadoop-kms
       Null passed for non-null parameter of com.google.common.base.Strings.isNullOrEmpty(String) in org.apache.hadoop.crypto.key.kms.server.KMSAudit.op(KMSAuditLogger$OpStatus, Object, UserGroupInformation, String, String, String) Method invoked at KMSAudit.java:[line 195]

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
       Null passed for non-null parameter of com.google.common.base.Strings.emptyToNull(String) in org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.getHealthReport() Method invoked at NodeHealthCheckerService.java:[line 66]
       Null passed for non-null parameter of com.google.common.base.Strings.emptyToNull(String) in org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.getHealthReport() Method invoked at NodeHealthCheckerService.java:[line 72]
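
    Note on the Guava "Null passed for non-null parameter" findings above:
    the commit log for this run includes HADOOP-16210 (Guava updated to
    27.0-jre), so it is worth checking whether these warnings are new
    behaviour of the upgraded library's nullness annotations rather than
    regressions in the Hadoop code. The sketch below is a hypothetical
    illustration of the flagged pattern and the usual caller-side guard;
    the class and method names are made up and are not the actual
    KMSAudit / NodeHealthCheckerService code.

       import com.google.common.base.Strings;

       // Hypothetical sketch of the "Null passed for non-null parameter"
       // FindBugs pattern; not the actual Hadoop code.
       public class NullParamSketch {

         // Flagged shape: 'extra' is provably null on one path and is then
         // passed to a Guava method whose parameter the analyzer treats as
         // non-null.
         static String describe(boolean healthy) {
           String extra = healthy ? "node is healthy" : null;
           return Strings.emptyToNull(extra);   // warning raised here
         }

         // Typical caller-side fix: make the null path explicit so no null
         // argument ever reaches the library call.
         static String describeGuarded(boolean healthy) {
           String extra = healthy ? "node is healthy" : null;
           return extra == null ? null : Strings.emptyToNull(extra);
         }

         public static void main(String[] args) {
           System.out.println(describe(true));          // node is healthy
           System.out.println(describeGuarded(false));  // null
         }
       }
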
    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
       Null passed for non-null parameter of com.google.common.util.concurrent.SettableFuture.set(Object) in org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore, RMStateStoreEvent) At RMStateStore.java:[line 291]
       Null passed for non-null parameter of com.google.common.util.concurrent.SettableFuture.set(Object) in org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updateApplicationPriority(Priority, ApplicationId, SettableFuture, UserGroupInformation) At CapacityScheduler.java:[line 2647]

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
       org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setEvents(Map) makes inefficient use of keySet iterator instead of entrySet iterator At TimelineEntityDocument.java:[line 159]
       org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setMetrics(Map) makes inefficient use of keySet iterator instead of entrySet iterator At TimelineEntityDocument.java:[line 142]
       Unread field: TimelineEventSubDoc.java:[line 56]
       Unread field: TimelineMetricSubDoc.java:[line 44]
       Switch statement found in org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregate(TimelineMetric, TimelineMetric) where default case is missing At FlowRunDocument.java:[lines 121-136]
       org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregateMetrics(Map) makes inefficient use of keySet iterator instead of entrySet iterator At FlowRunDocument.java:[line 103]
       Possible doublecheck on org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader.client in new org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader(Configuration) At CosmosDBDocumentStoreReader.java:[lines 73-75]
       Possible doublecheck on org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter.client in new org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter(Configuration) At CosmosDBDocumentStoreWriter.java:[lines 66-68]
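
    Note on the documentstore findings above: the "keySet iterator instead
    of entrySet iterator" warnings flag map loops that iterate the keys and
    then look each value up again with get(). A minimal, hypothetical sketch
    of the flagged shape and the usual entrySet rewrite follows; the
    TimelineEntityDocument / FlowRunDocument internals are not reproduced
    here. The "possible doublecheck" findings on the CosmosDB reader/writer
    client would typically be addressed by making the lazily-initialized
    field volatile or by initializing it eagerly in the constructor.

       import java.util.HashMap;
       import java.util.Map;

       // Hypothetical sketch of the "inefficient use of keySet iterator"
       // FindBugs finding; not the actual timelineservice code.
       public class KeySetVsEntrySet {

         // Flagged shape: one extra hash lookup per key on top of the iteration.
         static long sumByKeySet(Map<String, Long> metrics) {
           long total = 0;
           for (String key : metrics.keySet()) {
             total += metrics.get(key);
           }
           return total;
         }

         // Preferred shape: each key/value pair is visited exactly once.
         static long sumByEntrySet(Map<String, Long> metrics) {
           long total = 0;
           for (Map.Entry<String, Long> entry : metrics.entrySet()) {
             total += entry.getValue();
           }
           return total;
         }

         public static void main(String[] args) {
           Map<String, Long> metrics = new HashMap<>();
           metrics.put("cpu", 4L);
           metrics.put("memory", 16L);
           System.out.println(sumByKeySet(metrics));    // 20
           System.out.println(sumByEntrySet(metrics));  // 20
         }
       }
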
    Failed junit tests :

       hadoop.ha.TestZKFailoverController
       hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes
       hadoop.tools.TestHdfsConfigFields
       hadoop.hdfs.web.TestWebHdfsTimeouts
       hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
       hadoop.mapreduce.v2.app.TestRuntimeEstimators
       hadoop.yarn.sls.TestSLSStreamAMSynth
       hadoop.hdds.scm.container.TestReplicationManager
       hadoop.ozone.ozShell.TestOzoneShell
       hadoop.ozone.TestMiniChaosOzoneCluster

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/diff-compile-javac-root.txt [336K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/diff-checkstyle-root.txt [17M]

   hadolint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/pathlen.txt [12K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/diff-patch-pylint.txt [84K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/diff-patch-shelldocs.txt [44K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/whitespace-eol.txt [9.6M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/whitespace-tabs.txt [1.1M]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/branch-findbugs-hadoop-common-project_hadoop-kms-warnings.html [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-documentstore-warnings.html [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-yarnservice-runtime.txt [4.0K]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/diff-javadoc-javadoc-root.txt [752K]

   unit:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [168K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [340K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [28K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [84K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/patch-unit-hadoop-hdds_container-service.txt [24K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/patch-unit-hadoop-hdds_server-scm.txt [16K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/patch-unit-hadoop-ozone_integration-test.txt [32K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/patch-unit-hadoop-submarine_hadoop-submarine-yarnservice-runtime.txt [4.0K]

   asflicense:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1096/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.8.0   http://yetus.apache.org