Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-12-29 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/551/

No changes




-1 overall


The following subsystems voted -1:
docker


Powered by Apache Yetus 0.8.0   http://yetus.apache.org

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Resolved] (HADOOP-16780) Track unstable tests according to aarch CI due to OOM

2019-12-29 Thread liusheng (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

liusheng resolved HADOOP-16780.
---
Resolution: Fixed

> Track unstable tests according to aarch CI due to OOM
> -
>
> Key: HADOOP-16780
> URL: https://issues.apache.org/jira/browse/HADOOP-16780
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: liusheng
>Priority: Major
>
> |!https://builds.apache.org/static/8ad18952/images/16x16/document_add.png! [org.apache.hadoop.hdfs.TestDFSClientRetries.testLeaseRenewSocketTimeout|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/testReport/org.apache.hadoop.hdfs/TestDFSClientRetries/testLeaseRenewSocketTimeout/]|1.9 sec|[1|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/]|
> |!https://builds.apache.org/static/8ad18952/images/16x16/document_add.png! [org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksum1|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/testReport/org.apache.hadoop.hdfs/TestFileChecksum/testStripedFileChecksum1/]|2 min 53 sec|[1|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/]|
> |!https://builds.apache.org/static/8ad18952/images/16x16/document_add.png! [org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksum3|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/testReport/org.apache.hadoop.hdfs/TestFileChecksum/testStripedFileChecksum3/]|2 min 13 sec|[1|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/]|
> |!https://builds.apache.org/static/8ad18952/images/16x16/document_add.png! [org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery4|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/testReport/org.apache.hadoop.hdfs/TestFileChecksum/testStripedFileChecksumWithMissedDataBlocksRangeQuery4/]|1 min 32 sec|[1|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/]|
> |!https://builds.apache.org/static/8ad18952/images/16x16/document_add.png! [org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery5|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/testReport/org.apache.hadoop.hdfs/TestFileChecksum/testStripedFileChecksumWithMissedDataBlocksRangeQuery5/]|11 sec|[1|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/]|
> |!https://builds.apache.org/static/8ad18952/images/16x16/document_add.png! [org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery6|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/testReport/org.apache.hadoop.hdfs/TestFileChecksum/testStripedFileChecksumWithMissedDataBlocksRangeQuery6/]|3.8 sec|[1|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/]|
> |!https://builds.apache.org/static/8ad18952/images/16x16/document_add.png! [org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery7|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/testReport/org.apache.hadoop.hdfs/TestFileChecksum/testStripedFileChecksumWithMissedDataBlocksRangeQuery7/]|7.7 sec|[1|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/]|
> |!https://builds.apache.org/static/8ad18952/images/16x16/document_add.png! [org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery8|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/testReport/org.apache.hadoop.hdfs/TestFileChecksum/testStripedFileChecksumWithMissedDataBlocksRangeQuery8/]|3.7 sec|[1|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/]|
> |!https://builds.apache.org/static/8ad18952/images/16x16/document_add.png! [org.apache.hadoop.hdfs.TestFileChecksum.testStripedFileChecksumWithMissedDataBlocksRangeQuery9|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/testReport/org.apache.hadoop.hdfs/TestFileChecksum/testStripedFileChecksumWithMissedDataBlocksRangeQuery9/]|4 sec|[1|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/]|
> |!https://builds.apache.org/static/8ad18952/images/16x16/document_add.png! [org.apache.hadoop.hdfs.TestFileChecksumCompositeCrc.testStripedFileChecksumWithMissedDataBlocksRangeQuery6|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/testReport/org.apache.hadoop.hdfs/TestFileChecksumCompositeCrc/testStripedFileChecksumWithMissedDataBlocksRangeQuery6/]|10 sec|[1|https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/55/]|
> |!https://builds.apache.org/static/8ad18952/images/16x16/document_add.png!

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-12-29 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1365/

[Dec 28, 2019 12:32:15 PM] (tasanuma) HDFS-14934. [SBN Read] Standby NN throws many InterruptedExceptions when




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
   Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
   org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]
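
The two WorkerId warnings above describe a classic equals() anti-pattern: casting the argument without first checking for null or a foreign type. A minimal sketch of the shape FindBugs expects follows; the single hostname field and constructor here are illustrative assumptions, not the actual mawo WorkerId code.

```java
// Hypothetical, simplified stand-in for the mawo WorkerId class.
public class WorkerId {
    private final String hostname;

    public WorkerId(String hostname) {
        this.hostname = hostname;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        // instanceof is false for null and for unrelated types, so this one
        // check addresses both reported warnings before the cast below.
        if (!(obj instanceof WorkerId)) {
            return false;
        }
        WorkerId other = (WorkerId) obj;
        return hostname.equals(other.hostname);
    }

    @Override
    public int hashCode() {
        // Keep hashCode consistent with equals.
        return hostname.hashCode();
    }
}
```
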

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos
   Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
   Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged
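
Two of the hadoop-cos warnings map to standard Java remedies: name the charset explicitly instead of calling new String(byte[]), whose result depends on the platform default, and open the stream in try-with-resources so it is closed even when the body throws. The sketch below shows both patterns with illustrative names; it is not the actual CosNativeFileSystemStore code.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the two fix patterns flagged in hadoop-cos.
public class CosnFixSketch {
    // "Reliance on default encoding": pass an explicit Charset so the
    // resulting String is the same on every platform.
    static String bytesToString(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
    }

    // "May fail to clean up java.io.InputStream": try-with-resources
    // guarantees close() runs even if reading (or, in the real code,
    // the upload call) throws.
    static long consume(byte[] data) {
        try (InputStream in = new ByteArrayInputStream(data)) {
            long sum = 0;
            int b;
            while ((b = in.read()) != -1) {
                sum += b;
            }
            return sum;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```
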

Failed junit tests :

   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
   hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap
   hadoop.hdfs.server.namenode.TestRedudantBlocks
   hadoop.hdfs.TestDatanodeRegistration
   hadoop.yarn.client.api.impl.TestAMRMClient
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
   hadoop.yarn.server.timelineservice.storage.TestTimelineWriterHBaseDown
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities