See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/810/changes
Changes:

[sharad] HADOOP-5691. Makes org.apache.hadoop.mapreduce.Reducer concrete class. Contributed by Amareshwari.
[ddas] HADOOP-5646. Fixes a problem in TestQueueCapacities. Contributed by Vinod Kumar Vavilapalli.
[sharad] HADOOP-5647. Fix TestJobHistory to not depend on /tmp. Contributed by Ravi Gummadi.
[sharad] HADOOP-5533. Reverted in 0.20 as branch is frozen, vote being out for 0.20 release.
[sharad] HADOOP-5533. Recovery duration shown on the jobtracker webpage is inaccurate. Contributed by Amar Kamat.
[hairong] HADOOP-5638. More improvement on block placement performance. Contributed by Hairong Kuang.
[hairong] HADOOP-5655. TestMRServerPorts fails on java.net.BindException. Contributed by Devaraj Das.
[yhemanth] HADOOP-4490. Provide ability to run tasks as job owners. Contributed by Sreekanth Ramakrishnan.
[yhemanth] HADOOP-5396. Provide ability to refresh queue ACLs in the JobTracker without having to restart the daemon. Contributed by Sreekanth Ramakrishnan and Vinod Kumar Vavilapalli.

------------------------------------------
[...truncated 437438 lines...]

[junit] 2009-04-17 19:16:34,789 INFO datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 49847
[junit] 2009-04-17 19:16:34,789 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-04-17 19:16:34,789 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1240009014789 with interval 21600000
[junit] 2009-04-17 19:16:34,791 INFO http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 44201
[junit] 2009-04-17 19:16:34,791 INFO mortbay.log (?:invoke0(?)) - jetty-6.1.14
[junit] 2009-04-17 19:16:34,862 INFO mortbay.log (?:invoke0(?)) - Started selectchannelconnec...@localhost:44201
[junit] 2009-04-17 19:16:34,863 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-04-17 19:16:34,864 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=55657
[junit] 2009-04-17 19:16:34,865 INFO ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
[junit] 2009-04-17 19:16:34,865 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 55657: starting
[junit] 2009-04-17 19:16:34,865 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 55657: starting
[junit] 2009-04-17 19:16:34,866 INFO datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:49847, storageID=, infoPort=44201, ipcPort=55657)
[junit] 2009-04-17 19:16:34,866 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 55657: starting
[junit] 2009-04-17 19:16:34,867 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 55657: starting
[junit] 2009-04-17 19:16:34,867 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(2084)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:49847 storage DS-390729134-67.195.138.9-49847-1239995794866
[junit] 2009-04-17 19:16:34,869 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:49847
[junit] 2009-04-17 19:16:34,871 INFO datanode.DataNode (DataNode.java:register(554)) - New storage id DS-390729134-67.195.138.9-49847-1239995794866 is assigned to data-node 127.0.0.1:49847
[junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4
[junit] 2009-04-17 19:16:34,871 INFO datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:49847, storageID=DS-390729134-67.195.138.9-49847-1239995794866, infoPort=44201, ipcPort=55657)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
[junit] 2009-04-17 19:16:34,872 INFO datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
[junit] 2009-04-17 19:16:34,880 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3 is not formatted.
[junit] 2009-04-17 19:16:34,881 INFO common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
[junit] 2009-04-17 19:16:34,893 INFO common.Storage (DataStorage.java:recoverTransitionRead(123)) - Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data4 is not formatted.
[junit] 2009-04-17 19:16:34,893 INFO common.Storage (DataStorage.java:recoverTransitionRead(124)) - Formatting ...
[junit] 2009-04-17 19:16:34,910 INFO datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 1 msecs
[junit] 2009-04-17 19:16:34,911 INFO datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
[junit] 2009-04-17 19:16:34,925 INFO datanode.DataNode (FSDataset.java:registerMBean(1417)) - Registered FSDatasetStatusMBean
[junit] 2009-04-17 19:16:34,926 INFO datanode.DataNode (DataNode.java:startDataNode(317)) - Opened info server at 42990
[junit] 2009-04-17 19:16:34,926 INFO datanode.DataNode (DataXceiverServer.java:<init>(74)) - Balancing bandwith is 1048576 bytes/s
[junit] 2009-04-17 19:16:34,927 INFO datanode.DirectoryScanner (DirectoryScanner.java:<init>(133)) - scan starts at 1240015733927 with interval 21600000
[junit] 2009-04-17 19:16:34,928 INFO http.HttpServer (HttpServer.java:start(454)) - Jetty bound to port 33659
[junit] 2009-04-17 19:16:34,929 INFO mortbay.log (?:invoke0(?)) - jetty-6.1.14
[junit] 2009-04-17 19:16:34,997 INFO mortbay.log (?:invoke0(?)) - Started selectchannelconnec...@localhost:33659
[junit] 2009-04-17 19:16:34,997 INFO jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
[junit] 2009-04-17 19:16:34,999 INFO metrics.RpcMetrics (RpcMetrics.java:<init>(58)) - Initializing RPC Metrics with hostName=DataNode, port=39678
[junit] 2009-04-17 19:16:35,000 INFO ipc.Server (Server.java:run(471)) - IPC Server Responder: starting
[junit] 2009-04-17 19:16:35,000 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 1 on 39678: starting
[junit] 2009-04-17 19:16:35,000 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 0 on 39678: starting
[junit] 2009-04-17 19:16:35,000 INFO ipc.Server (Server.java:run(313)) - IPC Server listener on 39678: starting
[junit] 2009-04-17 19:16:35,001 INFO ipc.Server (Server.java:run(934)) - IPC Server handler 2 on 39678: starting
[junit] 2009-04-17 19:16:35,000 INFO datanode.DataNode (DataNode.java:startDataNode(396)) - dnRegistration = DatanodeRegistration(vesta.apache.org:42990, storageID=, infoPort=33659, ipcPort=39678)
[junit] 2009-04-17 19:16:35,004 INFO hdfs.StateChange (FSNamesystem.java:registerDatanode(2084)) - BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:42990 storage DS-1632591988-67.195.138.9-42990-1239995795003
[junit] 2009-04-17 19:16:35,005 INFO net.NetworkTopology (NetworkTopology.java:add(327)) - Adding a new node: /default-rack/127.0.0.1:42990
[junit] 2009-04-17 19:16:35,007 INFO datanode.DataNode (DataNode.java:register(554)) - New storage id DS-1632591988-67.195.138.9-42990-1239995795003 is assigned to data-node 127.0.0.1:42990
[junit] 2009-04-17 19:16:35,008 INFO datanode.DataNode (DataNode.java:run(1214)) - DatanodeRegistration(127.0.0.1:42990, storageID=DS-1632591988-67.195.138.9-42990-1239995795003, infoPort=33659, ipcPort=39678)In DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
[junit] 2009-04-17 19:16:35,014 INFO datanode.DataNode (DataNode.java:offerService(696)) - using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
[junit] 2009-04-17 19:16:35,035 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-17 19:16:35,043 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-17 19:16:35,052 INFO datanode.DataNode (DataNode.java:blockReport(925)) - BlockReport of 0 blocks got processed in 1 msecs
[junit] 2009-04-17 19:16:35,052 INFO datanode.DataNode (DataNode.java:offerService(739)) - Starting Periodic block scanner.
[junit] 2009-04-17 19:16:35,055 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-17 19:16:35,055 INFO FSNamesystem.audit (FSNamesystem.java:logAuditEvent(110)) - ugi=hudson,hudson ip=/127.0.0.1 cmd=create src=/test dst=null perm=hudson:supergroup:rw-r--r--
[junit] 2009-04-17 19:16:35,059 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock: /test. blk_7266987400406218604_1001
[junit] 2009-04-17 19:16:35,062 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_7266987400406218604_1001 src: /127.0.0.1:46297 dest: /127.0.0.1:42990
[junit] 2009-04-17 19:16:35,063 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_7266987400406218604_1001 src: /127.0.0.1:39724 dest: /127.0.0.1:49847
[junit] 2009-04-17 19:16:35,065 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:39724, dest: /127.0.0.1:49847, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1998558276, offset: 0, srvID: DS-390729134-67.195.138.9-49847-1239995794866, blockid: blk_7266987400406218604_1001
[junit] 2009-04-17 19:16:35,066 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:49847 is added to blk_7266987400406218604_1001 size 4096
[junit] 2009-04-17 19:16:35,066 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_7266987400406218604_1001 terminating
[junit] 2009-04-17 19:16:35,066 INFO DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:46297, dest: /127.0.0.1:42990, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1998558276, offset: 0, srvID: DS-1632591988-67.195.138.9-42990-1239995795003, blockid: blk_7266987400406218604_1001
[junit] 2009-04-17 19:16:35,067 INFO datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_7266987400406218604_1001 terminating
[junit] 2009-04-17 19:16:35,067 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:42990 is added to blk_7266987400406218604_1001 size 4096
[junit] 2009-04-17 19:16:35,069 INFO hdfs.StateChange (FSNamesystem.java:allocateBlock(1479)) - BLOCK* NameSystem.allocateBlock: /test. blk_-6102255395417213207_1001
[junit] 2009-04-17 19:16:35,070 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-6102255395417213207_1001 src: /127.0.0.1:46299 dest: /127.0.0.1:42990
[junit] 2009-04-17 19:16:35,071 INFO datanode.DataNode (DataXceiver.java:writeBlock(228)) - Receiving block blk_-6102255395417213207_1001 src: /127.0.0.1:39726 dest: /127.0.0.1:49847
[junit] 2009-04-17 19:16:35,073 INFO DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805)) - src: /127.0.0.1:39726, dest: /127.0.0.1:49847, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1998558276, offset: 0, srvID: DS-390729134-67.195.138.9-49847-1239995794866, blockid: blk_-6102255395417213207_1001
[junit] 2009-04-17 19:16:35,073 INFO datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829)) - PacketResponder 0 for block blk_-6102255395417213207_1001 terminating
[junit] 2009-04-17 19:16:35,073 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:49847 is added to blk_-6102255395417213207_1001 size 4096
[junit] 2009-04-17 19:16:35,074 INFO DataNode.clienttrace (BlockReceiver.java:run(929)) - src: /127.0.0.1:46299, dest: /127.0.0.1:42990, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1998558276, offset: 0, srvID: DS-1632591988-67.195.138.9-42990-1239995795003, blockid: blk_-6102255395417213207_1001
[junit] 2009-04-17 19:16:35,074 INFO datanode.DataNode (BlockReceiver.java:run(993)) - PacketResponder 1 for block blk_-6102255395417213207_1001 terminating
[junit] 2009-04-17 19:16:35,075 INFO hdfs.StateChange (FSNamesystem.java:addStoredBlock(3086)) - BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:42990 is added to blk_-6102255395417213207_1001 size 4096
[junit] 2009-04-17 19:16:35,076 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
[junit]
[junit] Domains:
[junit] Domain = JMImplementation
[junit] Domain = com.sun.management
[junit] Domain = hadoop
[junit] Domain = java.lang
[junit] Domain = java.util.logging
[junit]
[junit] MBeanServer default domain = DefaultDomain
[junit]
[junit] MBean count = 26
[junit]
[junit] Query MBeanServer MBeans:
[junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId1704640536
[junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId745608920
[junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1623638996
[junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-2024658520
[junit] 2009-04-17 19:16:35,076 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort39678
[junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort55657
[junit] Info: key = bytes_written; val = 0
[junit] Shutting down the Mini HDFS Cluster
[junit] Shutting down DataNode 1
[junit] 2009-04-17 19:16:35,179 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 39678
[junit] 2009-04-17 19:16:35,180 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 39678: exiting
[junit] 2009-04-17 19:16:35,180 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 39678: exiting
[junit] 2009-04-17 19:16:35,180 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 39678: exiting
[junit] 2009-04-17 19:16:35,180 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:42990, storageID=DS-1632591988-67.195.138.9-42990-1239995795003, infoPort=33659, ipcPort=39678):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-04-17 19:16:35,180 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 39678
[junit] 2009-04-17 19:16:35,180 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-04-17 19:16:35,181 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-04-17 19:16:35,181 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
[junit] 2009-04-17 19:16:35,182 INFO datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:42990, storageID=DS-1632591988-67.195.138.9-42990-1239995795003, infoPort=33659, ipcPort=39678):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}
[junit] 2009-04-17 19:16:35,182 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 39678
[junit] 2009-04-17 19:16:35,182 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
[junit] Shutting down DataNode 0
[junit] 2009-04-17 19:16:35,283 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 55657
[junit] 2009-04-17 19:16:35,283 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 55657: exiting
[junit] 2009-04-17 19:16:35,284 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 55657: exiting
[junit] 2009-04-17 19:16:35,284 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 55657: exiting
[junit] 2009-04-17 19:16:35,284 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-04-17 19:16:35,284 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 55657
[junit] 2009-04-17 19:16:35,284 WARN datanode.DataNode (DataXceiverServer.java:run(137)) - DatanodeRegistration(127.0.0.1:49847, storageID=DS-390729134-67.195.138.9-49847-1239995794866, infoPort=44201, ipcPort=55657):DataXceiveServer: java.nio.channels.AsynchronousCloseException
[junit] at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
[junit] at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
[junit] at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
[junit] at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
[junit] at java.lang.Thread.run(Thread.java:619)
[junit]
[junit] 2009-04-17 19:16:35,284 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 1
[junit] 2009-04-17 19:16:35,285 INFO datanode.DataBlockScanner (DataBlockScanner.java:run(618)) - Exiting DataBlockScanner thread.
[junit] 2009-04-17 19:16:35,286 INFO datanode.DataNode (DataNode.java:run(1234)) - DatanodeRegistration(127.0.0.1:49847, storageID=DS-390729134-67.195.138.9-49847-1239995794866, infoPort=44201, ipcPort=55657):Finishing DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}
[junit] 2009-04-17 19:16:35,286 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 55657
[junit] 2009-04-17 19:16:35,286 INFO datanode.DataNode (DataNode.java:shutdown(604)) - Waiting for threadgroup to exit, active threads is 0
[junit] 2009-04-17 19:16:35,387 WARN namenode.FSNamesystem (FSNamesystem.java:run(2359)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
[junit] 2009-04-17 19:16:35,387 INFO namenode.FSNamesystem (FSEditLog.java:printStatistics(1082)) - Number of transactions: 3 Total time for transactions(ms): 3Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 10 0
[junit] 2009-04-17 19:16:35,387 WARN namenode.DecommissionManager (DecommissionManager.java:run(67)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
[junit] 2009-04-17 19:16:35,389 INFO namenode.FSNamesystem (FSEditLog.java:processIOError(471)) - current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);
[junit] 2009-04-17 19:16:35,389 INFO ipc.Server (Server.java:stop(1098)) - Stopping server on 54034
[junit] 2009-04-17 19:16:35,389 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 1 on 54034: exiting
[junit] 2009-04-17 19:16:35,389 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 0 on 54034: exiting
[junit] 2009-04-17 19:16:35,390 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 2 on 54034: exiting
[junit] 2009-04-17 19:16:35,390 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 3 on 54034: exiting
[junit] 2009-04-17 19:16:35,390 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 6 on 54034: exiting
[junit] 2009-04-17 19:16:35,390 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 5 on 54034: exiting
[junit] 2009-04-17 19:16:35,390 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 4 on 54034: exiting
[junit] 2009-04-17 19:16:35,391 INFO ipc.Server (Server.java:run(536)) - Stopping IPC Server Responder
[junit] 2009-04-17 19:16:35,391 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 8 on 54034: exiting
[junit] 2009-04-17 19:16:35,391 INFO ipc.Server (Server.java:run(352)) - Stopping IPC Server listener on 54034
[junit] 2009-04-17 19:16:35,391 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 9 on 54034: exiting
[junit] 2009-04-17 19:16:35,391 INFO ipc.Server (Server.java:run(992)) - IPC Server handler 7 on 54034: exiting
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 3.994 sec
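The "Query MBeanServer MBeans" listing earlier in this log (hadoop:service=DataNode,name=...) comes from querying the JVM's platform MBeanServer for the beans the DataNode registers. A minimal, self-contained sketch of that kind of query follows; the class name ListDataNodeMBeans is made up for illustration, and only the "hadoop" domain pattern is Hadoop-specific:

    import java.lang.management.ManagementFactory;
    import java.util.Set;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // Hypothetical helper: lists registered DataNode MBeans, similar to the test output above.
    public class ListDataNodeMBeans {
      public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Pattern matches e.g. hadoop:service=DataNode,name=FSDatasetState-...
        Set<ObjectName> names = mbs.queryNames(new ObjectName("hadoop:service=DataNode,*"), null);
        for (ObjectName name : names) {
          System.out.println("hadoop services: " + name);
        }
      }
    }

Run inside the same JVM as a DataNode (for example alongside a MiniDFSCluster in a test), this would print one line per registered DataNode MBean, which is the shape of the listing above.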
[junit] Running org.apache.hadoop.util.TestCyclicIteration
[junit]
[junit]
[junit] integers=[]
[junit] map={}
[junit] start=-1, iteration=[]
[junit]
[junit]
[junit] integers=[0]
[junit] map={0=0}
[junit] start=-1, iteration=[0]
[junit] start=0, iteration=[0]
[junit] start=1, iteration=[0]
[junit]
[junit]
[junit] integers=[0, 2]
[junit] map={0=0, 2=2}
[junit] start=-1, iteration=[0, 2]
[junit] start=0, iteration=[2, 0]
[junit] start=1, iteration=[2, 0]
[junit] start=2, iteration=[0, 2]
[junit] start=3, iteration=[0, 2]
[junit]
[junit]
[junit] integers=[0, 2, 4]
[junit] map={0=0, 2=2, 4=4}
[junit] start=-1, iteration=[0, 2, 4]
[junit] start=0, iteration=[2, 4, 0]
[junit] start=1, iteration=[2, 4, 0]
[junit] start=2, iteration=[4, 0, 2]
[junit] start=3, iteration=[4, 0, 2]
[junit] start=4, iteration=[0, 2, 4]
[junit] start=5, iteration=[0, 2, 4]
[junit]
[junit]
[junit] integers=[0, 2, 4, 6]
[junit] map={0=0, 2=2, 4=4, 6=6}
[junit] start=-1, iteration=[0, 2, 4, 6]
[junit] start=0, iteration=[2, 4, 6, 0]
[junit] start=1, iteration=[2, 4, 6, 0]
[junit] start=2, iteration=[4, 6, 0, 2]
[junit] start=3, iteration=[4, 6, 0, 2]
[junit] start=4, iteration=[6, 0, 2, 4]
[junit] start=5, iteration=[6, 0, 2, 4]
[junit] start=6, iteration=[0, 2, 4, 6]
[junit] start=7, iteration=[0, 2, 4, 6]
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.093 sec
[junit] Running org.apache.hadoop.util.TestGenericsUtil
[junit] 2009-04-17 19:16:36,350 WARN conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
[junit] 2009-04-17 19:16:36,363 WARN util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377)) - options parsing failed: Missing argument for option:jt
[junit] usage: general options are:
[junit] -archives <paths>              comma separated archives to be unarchived
[junit]                                on the compute machines.
[junit] -conf <configuration file>     specify an application configuration file
[junit] -D <property=value>            use value for given property
[junit] -files <paths>                 comma separated files to be copied to the
[junit]                                map reduce cluster
[junit] -fs <local|namenode:port>      specify a namenode
[junit] -jt <local|jobtracker:port>    specify a job tracker
[junit] -libjars <paths>               comma separated jar files to include in the
[junit]                                classpath.
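The usage text above is what org.apache.hadoop.util.GenericOptionsParser prints when the -jt option is given no argument. As a rough sketch of how these generic options are normally consumed (the driver class and its arguments below are hypothetical, not part of this build), a job driver typically goes through ToolRunner so that -conf, -D, -fs, -jt, -files, -libjars and -archives are parsed and applied to the Configuration before the driver sees its own arguments:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Hypothetical driver; only the generic-option handling matters here.
    public class MyDriver extends Configured implements Tool {
      @Override
      public int run(String[] args) throws Exception {
        // ToolRunner has already run GenericOptionsParser: the generic options are
        // folded into getConf(), and args holds only the leftover arguments.
        Configuration conf = getConf();
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));
        System.out.println("remaining args: " + args.length);
        return 0;
      }

      public static void main(String[] args) throws Exception {
        // e.g. bin/hadoop jar my.jar MyDriver -jt local -D my.prop=x in out
        System.exit(ToolRunner.run(new Configuration(), new MyDriver(), args));
      }
    }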
[junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.184 sec
[junit] Running org.apache.hadoop.util.TestIndexedSort
[junit] sortRandom seed: 7051833450906176865(org.apache.hadoop.util.QuickSort)
[junit] testSorted seed: 6202609412681042081(org.apache.hadoop.util.QuickSort)
[junit] testAllEqual setting min/max at 217/3(org.apache.hadoop.util.QuickSort)
[junit] sortWritable seed: -2489818280100611772(org.apache.hadoop.util.QuickSort)
[junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
[junit] sortRandom seed: -3518012158898894371(org.apache.hadoop.util.HeapSort)
[junit] testSorted seed: 7442652760225698008(org.apache.hadoop.util.HeapSort)
[junit] testAllEqual setting min/max at 49/219(org.apache.hadoop.util.HeapSort)
[junit] sortWritable seed: -5501590211969917853(org.apache.hadoop.util.HeapSort)
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.097 sec
[junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
[junit] 2009-04-17 19:16:38,231 INFO util.ProcessTree (ProcessTree.java:isSetsidSupported(54)) - setsid exited with exit code 0
[junit] 2009-04-17 19:16:38,737 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(141)) - Root process pid: 22042
[junit] 2009-04-17 19:16:38,786 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(146)) - ProcessTree: [ 22042 22044 22045 ]
[junit] 2009-04-17 19:16:45,320 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(159)) - ProcessTree: [ 22058 22042 22056 22044 22046 22050 22048 22054 22052 ]
[junit] 2009-04-17 19:16:45,333 INFO util.ProcessTree (ProcessTree.java:destroyProcessGroup(160)) - Killing all processes in the process group 22042 with SIGTERM. Exit code 0
[junit] 2009-04-17 19:16:45,333 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(64)) - Shell Command exit with a non-zero exit code. This is expected as we are killing the subprocesses of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException:
[junit] 2009-04-17 19:16:45,334 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(70)) - Exit code: 143
[junit] 2009-04-17 19:16:45,417 INFO util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(173)) - RogueTaskThread successfully joined.
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.275 sec
[junit] Running org.apache.hadoop.util.TestReflectionUtils
[junit] 2009-04-17 19:16:46,334 WARN conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.616 sec
[junit] Running org.apache.hadoop.util.TestShell
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.184 sec
[junit] Running org.apache.hadoop.util.TestStringUtils
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.092 sec

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build.xml:770: Tests failed!

Total time: 168 minutes 47 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...