See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/172/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 7342 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 45.792 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  02:51 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.064 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:52 h
[INFO] Finished at: 2015-05-02T15:02:23+00:00
[INFO] Final Memory: 52M/266M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #146
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 668182 bytes
Compression is 0.0%
Took 16 sec
Recording test results
Updating HADOOP-11491
Updating HADOOP-11889
Updating YARN-2893
Updating MAPREDUCE-6345
Updating YARN-3006
Updating YARN-3363
Updating HADOOP-11900
Updating HDFS-8229
Updating HDFS-8276
Updating HDFS-7281
Updating HDFS-8086
Updating HDFS-8091
Updating HDFS-8213
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 test failed.
REGRESSION:  org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST

Error Message:
dir has ERROR

Stack Trace:
java.lang.IllegalStateException: dir has ERROR
        at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.checkErrorState(TestAppendSnapshotTruncate.java:429)
        at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.stop(TestAppendSnapshotTruncate.java:483)
        at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST(TestAppendSnapshotTruncate.java:127)
Caused by: java.lang.IllegalStateException: file02 has ERROR
        at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.checkErrorState(TestAppendSnapshotTruncate.java:429)
        at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.isPaused(TestAppendSnapshotTruncate.java:471)
        at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.pauseAllFiles(TestAppendSnapshotTruncate.java:251)
        at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:220)
        at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:139)
        at org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker$1.run(TestAppendSnapshotTruncate.java:454)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:48980,DS-db576957-446b-4b9e-9793-d54027bb34df,DISK], DatanodeInfoWithStorage[127.0.0.1:38166,DS-39218fe6-a6d0-4d7b-acd2-7e878eea1e97,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:38166,DS-39218fe6-a6d0-4d7b-acd2-7e878eea1e97,DISK], DatanodeInfoWithStorage[127.0.0.1:48980,DS-db576957-446b-4b9e-9793-d54027bb34df,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1065)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1120)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1267)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:462)
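The root cause above is the usual HDFS append-pipeline recovery failure on small clusters: with only the two datanodes shown, the DEFAULT replace-datanode-on-failure policy has no spare node to substitute for a bad one, so the append fails. As a minimal client-side sketch of the setting the exception message points to (the class name and file path below are hypothetical; NEVER and the best-effort flag are the documented alternatives to DEFAULT):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplaceDatanodeOnFailureExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // On small clusters (such as the MiniDFSCluster this test runs),
        // the DEFAULT policy may find no spare datanode to replace a bad
        // one, failing the pipeline as in the stack trace above. NEVER
        // keeps writing on the remaining datanodes instead of failing.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        // Alternatively, keep the policy but continue with the original
        // pipeline when replacement is impossible:
        // conf.setBoolean(
        //     "dfs.client.block.write.replace-datanode-on-failure.best-effort", true);

        try (FileSystem fs = FileSystem.get(conf)) {
          // Appends issued through this client now use the relaxed policy.
          // The path is a placeholder for illustration only.
          try (FSDataOutputStream out = fs.append(new Path("/tmp/example"))) {
            out.writeBytes("appended under the relaxed replacement policy\n");
          }
        }
      }
    }

Note that relaxing the policy trades durability for availability, so it is a workaround suited to test or small-cluster setups rather than a general fix.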

