See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/647/
###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 720246 lines...]
    [junit] 
    [junit] 2011-04-25 12:25:02,562 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-25 12:25:02,563 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-25 12:25:02,563 INFO datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:51863, storageID=DS-1888970459-127.0.1.1-51863-1303734302020, infoPort=38774, ipcPort=41387):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current/finalized'}
    [junit] 2011-04-25 12:25:02,563 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 41387
    [junit] 2011-04-25 12:25:02,563 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-25 12:25:02,563 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-25 12:25:02,564 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-25 12:25:02,564 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-25 12:25:02,564 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(835)) - Shutting down DataNode 0
    [junit] 2011-04-25 12:25:02,665 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 51509
    [junit] 2011-04-25 12:25:02,665 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 51509: exiting
    [junit] 2011-04-25 12:25:02,665 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-25 12:25:02,665 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 51509
    [junit] 2011-04-25 12:25:02,666 WARN datanode.DataNode (DataXceiverServer.java:run(142)) - DatanodeRegistration(127.0.0.1:38741, storageID=DS-1739465286-127.0.1.1-38741-1303734301890, infoPort=51291, ipcPort=51509):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:135)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-25 12:25:02,665 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-25 12:25:02,819 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(624)) - Exiting DataBlockScanner thread.
    [junit] 2011-04-25 12:25:02,820 INFO datanode.DataNode (DataNode.java:run(1497)) - DatanodeRegistration(127.0.0.1:38741, storageID=DS-1739465286-127.0.1.1-38741-1303734301890, infoPort=51291, ipcPort=51509):Finishing DataNode in: FSDataset{dirpath='/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current/finalized,/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current/finalized'}
    [junit] 2011-04-25 12:25:02,820 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 51509
    [junit] 2011-04-25 12:25:02,820 INFO datanode.DataNode (DataNode.java:shutdown(791)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-25 12:25:02,820 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-25 12:25:02,820 INFO datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-25 12:25:02,821 WARN datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(130)) - AsyncDiskService has already shut down.
    [junit] 2011-04-25 12:25:02,922 WARN namenode.FSNamesystem (FSNamesystem.java:run(2965)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-25 12:25:02,922 WARN namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-25 12:25:02,922 INFO namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 8 3 
    [junit] 2011-04-25 12:25:02,924 INFO ipc.Server (Server.java:stop(1626)) - Stopping server on 47980
    [junit] 2011-04-25 12:25:02,924 INFO ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 47980: exiting
    [junit] 2011-04-25 12:25:02,924 INFO ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 47980
    [junit] 2011-04-25 12:25:02,924 INFO ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 97.861 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/testsfailed

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:747: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:505: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/src/test/aop/build/aop.xml:230: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:688: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:662: The following error occurred while executing this line:
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:730:
Tests failed!

Total time: 49 minutes 33 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.

REGRESSION:  org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_29

Error Message:
Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:36136], original=[127.0.0.1:36136]

Stack Trace:
java.io.IOException: Failed to add a datanode: nodes.length != original.length + 1, nodes=[127.0.0.1:36136], original=[127.0.0.1:36136]
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:768)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:824)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:918)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:731)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:415)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestNodeCount.testNodeCount

Error Message:
null

Stack Trace:
java.lang.NullPointerException
	at org.apache.hadoop.hdfs.server.namenode.BlockManager.countNodes(BlockManager.java:1433)
	at org.apache.hadoop.hdfs.server.namenode.TestNodeCount.__CLR3_0_29bdgm6yrj(TestNodeCount.java:132)
	at org.apache.hadoop.hdfs.server.namenode.TestNodeCount.testNodeCount(TestNodeCount.java:40)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker.testCheckThatNameNodeResourceMonitorIsRunning

Error Message:
NN should be in safe mode after resources crossed threshold

Stack Trace:
junit.framework.AssertionFailedError: NN should be in safe mode after resources crossed threshold
	at org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker.__CLR3_0_2anms6eqpv(TestNameNodeResourceChecker.java:138)
	at org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourceChecker.testCheckThatNameNodeResourceMonitorIsRunning(TestNameNodeResourceChecker.java:101)