See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2988/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 5429 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:12 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [ 04:25 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [ 0.080 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 04:29 h
[INFO] Finished at: 2016-04-04T07:16:24+00:00
[INFO] Final Memory: 56M/585M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR]
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:33229,DS-592ed55c-8b53-498d-b309-bda6410ee839,DISK], DatanodeInfoWithStorage[127.0.0.1:37354,DS-2adc497a-6c76-492f-aa3f-0fdcbca2ab0f,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:33229,DS-592ed55c-8b53-498d-b309-bda6410ee839,DISK], DatanodeInfoWithStorage[127.0.0.1:37354,DS-2adc497a-6c76-492f-aa3f-0fdcbca2ab0f,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:33229,DS-592ed55c-8b53-498d-b309-bda6410ee839,DISK], DatanodeInfoWithStorage[127.0.0.1:37354,DS-2adc497a-6c76-492f-aa3f-0fdcbca2ab0f,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:33229,DS-592ed55c-8b53-498d-b309-bda6410ee839,DISK], DatanodeInfoWithStorage[127.0.0.1:37354,DS-2adc497a-6c76-492f-aa3f-0fdcbca2ab0f,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1162)
    at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
    at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas

Error Message:
Expected: is <DISK>
     but: was <RAM_DISK>

Stack Trace:
java.lang.AssertionError:
Expected: is <DISK>
     but: was <RAM_DISK>
    at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
    at org.junit.Assert.assertThat(Assert.java:865)
    at org.junit.Assert.assertThat(Assert.java:832)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.LazyPersistTestCase.ensureFileReplicasOnStorageType(LazyPersistTestCase.java:141)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery.testDnRestartWithSavedReplicas(TestLazyPersistReplicaRecovery.java:53)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs[1]

Error Message:
logging edit without syncing should do not affect txid expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: logging edit without syncing should do not affect txid expected:<1> but was:<2>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs(TestEditLog.java:594)


FAILED:  org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testWithSecureHDFS

Error Message:
Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Connection reset)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":40739;

Stack Trace:
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Connection reset)]; Host Details : local host is: "asf909.gq1.ygridcore.net/67.195.81.153"; destination host is: "localhost":40739;
    at java.net.SocketInputStream.read(SocketInputStream.java:196)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    at sun.security.krb5.internal.TCPClient.readFully(NetClient.java:132)
    at sun.security.krb5.internal.TCPClient.receive(NetClient.java:84)
    at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:390)
    at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:343)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.security.krb5.KdcComm.send(KdcComm.java:327)
    at sun.security.krb5.KdcComm.send(KdcComm.java:219)
    at sun.security.krb5.KdcComm.send(KdcComm.java:191)
    at sun.security.krb5.KrbTgsReq.send(KrbTgsReq.java:187)
    at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:202)
    at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:311)
    at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:115)
    at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:449)
    at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:641)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:411)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:565)
    at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:378)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:750)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:746)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1742)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:745)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:378)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1413)
    at org.apache.hadoop.ipc.Client.call(Client.java:1328)
    at org.apache.hadoop.ipc.Client.call(Client.java:1306)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy25.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:536)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy26.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2302)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2277)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1079)
    at org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1076)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1076)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1069)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1909)
    at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.createDirectoriesSecurely(TestRollingFileSystemSinkWithSecureHdfs.java:206)
    at org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs.testWithSecureHDFS(TestRollingFileSystemSinkWithSecureHdfs.java:95)
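
Note on the first failure (TestFileAppend.testMultipleAppends): the error message itself names the client-side setting that governs datanode replacement in a write pipeline, 'dfs.client.block.write.replace-datanode-on-failure.policy'. Below is a minimal, hypothetical sketch of how a client running against a very small cluster might relax that policy through a plain Hadoop client Configuration; the class name is invented for illustration, the snippet is not taken from TestFileAppend, and whether relaxing the policy is the right fix for this test is not established here.

    import org.apache.hadoop.conf.Configuration;

    public class ReplaceDatanodePolicyExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Property named in the error message; accepted values are ALWAYS,
        // DEFAULT and NEVER. On a two- or three-datanode (mini)cluster the
        // DEFAULT policy can fail an append/recovery pipeline because no
        // spare datanode is available to replace the bad one.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        // Companion flag; when false, datanode replacement is disabled
        // outright and the policy above is ignored.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", false);

        // Print the effective policy value for a quick sanity check.
        System.out.println(
            conf.get("dfs.client.block.write.replace-datanode-on-failure.policy"));
      }
    }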