[ https://issues.apache.org/jira/browse/HADOOP-2040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12541356 ]

stack commented on HADOOP-2040:
-------------------------------

Hudson is hung.  Here is the tail of the log.

{code}
    [junit] 2007-11-09 08:20:49,385 DEBUG [main] 
org.apache.hadoop.hbase.TestLogRolling.countLogFiles(TestLogRolling.java:174): 
number of log files: 1
    [junit] 2007-11-09 08:20:49,386 INFO  [main] 
org.apache.hadoop.hbase.TestLogRolling.testLogRolling(TestLogRolling.java:191): 
Finished writing. There are 1 log files. Sleeping to let cache flusher and log 
roller run
    [junit] 2007-11-09 08:20:49,386 DEBUG [main] 
org.apache.hadoop.hbase.LocalHBaseCluster.shutdown(LocalHBaseCluster.java:202): 
Shutting down HBase Cluster
    [junit] 2007-11-09 08:20:49,488 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.run(HRegionServer.java:502): Got 
regionserver stop message
    [junit] 2007-11-09 08:20:49,488 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.Leases.close(Leases.java:109): RegionServer:0 closing 
leases
    [junit] 2007-11-09 08:20:49,489 INFO  [RegionServer:0.leaseChecker] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): RegionServer:0.leaseChecker 
exiting
    [junit] 2007-11-09 08:20:49,489 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.Leases.close(Leases.java:123): RegionServer:0 closed 
leases
    [junit] 2007-11-09 08:20:49,490 INFO  [RegionServer:0.logRoller] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): RegionServer:0.logRoller 
exiting
    [junit] 2007-11-09 08:20:49,607 WARN  [EMAIL PROTECTED] 
org.apache.hadoop.dfs.ReplicationTargetChooser.chooseTarget(ReplicationTargetChooser.java:177):
 Not able to place enough replicas, still in need of 1
    ... (the same chooseTarget WARN repeated 37 more times, 08:20:49,608 
through 08:20:49,615, elided here) ...
    [junit] 2007-11-09 08:20:49,814 WARN  [IPC Server handler 5 on 58346] 
org.apache.hadoop.dfs.ReplicationTargetChooser.chooseTarget(ReplicationTargetChooser.java:177):
 Not able to place enough replicas, still in need of 1
    [junit] 2007-11-09 08:20:49,830 DEBUG [RegionServer:0.cacheFlusher] 
org.apache.hadoop.hbase.HStore.flushCacheHelper(HStore.java:504): Added 
-1547818355/info/8261001142386214874 with sequence id 2208 and size 16.8k
    [junit] 2007-11-09 08:20:49,830 DEBUG [RegionServer:0.cacheFlusher] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:919): Finished 
memcache flush for region testLogRolling,row1025,1194596368242 in 523ms
    [junit] 2007-11-09 08:20:49,831 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.closeAllRegions(HRegionServer.java:971): 
closing region -ROOT-,,0
    [junit] 2007-11-09 08:20:49,831 INFO  
[RegionServer:0.splitOrCompactChecker] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): 
RegionServer:0.splitOrCompactChecker exiting
    [junit] 2007-11-09 08:20:49,832 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:847): Started 
memcache flush for region -ROOT-,,0. Size 0.0
    [junit] 2007-11-09 08:20:49,832 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:865): Finished 
memcache flush; empty snapshot
    [junit] 2007-11-09 08:20:49,833 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HStore.close(HStore.java:419): closed -70236052/info
    [junit] 2007-11-09 08:20:49,833 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.close(HRegion.java:402): closed -ROOT-,,0
    [junit] 2007-11-09 08:20:49,833 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.closeAllRegions(HRegionServer.java:971): 
closing region .META.,,1
    [junit] 2007-11-09 08:20:49,833 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:847): Started 
memcache flush for region .META.,,1. Size 0.0
    [junit] 2007-11-09 08:20:49,833 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:865): Finished 
memcache flush; empty snapshot
    [junit] 2007-11-09 08:20:49,833 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HStore.close(HStore.java:419): closed 1028785192/info
    [junit] 2007-11-09 08:20:49,834 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.close(HRegion.java:402): closed .META.,,1
    [junit] 2007-11-09 08:20:49,834 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.closeAllRegions(HRegionServer.java:971): 
closing region testLogRolling,,1194596277787
    [junit] 2007-11-09 08:20:49,834 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:847): Started 
memcache flush for region testLogRolling,,1194596277787. Size 0.0
    [junit] 2007-11-09 08:20:49,834 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:865): Finished 
memcache flush; empty snapshot
    [junit] 2007-11-09 08:20:49,835 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HStore.close(HStore.java:419): closed 216611736/info
    [junit] 2007-11-09 08:20:49,835 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.close(HRegion.java:402): closed 
testLogRolling,,1194596277787
    [junit] 2007-11-09 08:20:49,835 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.closeAllRegions(HRegionServer.java:971): 
closing region testLogRolling,row0513,1194596368241
    [junit] 2007-11-09 08:20:49,835 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:847): Started 
memcache flush for region testLogRolling,row0513,1194596368241. Size 0.0
    [junit] 2007-11-09 08:20:49,835 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:865): Finished 
memcache flush; empty snapshot
    [junit] 2007-11-09 08:20:49,836 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HStore.close(HStore.java:419): closed 1463872906/info
    [junit] 2007-11-09 08:20:49,836 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.close(HRegion.java:402): closed 
testLogRolling,row0513,1194596368241
    [junit] 2007-11-09 08:20:49,836 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.closeAllRegions(HRegionServer.java:971): 
closing region testLogRolling,row1025,1194596368242
    [junit] 2007-11-09 08:20:49,836 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:847): Started 
memcache flush for region testLogRolling,row1025,1194596368242. Size 0.0
    [junit] 2007-11-09 08:20:49,836 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:865): Finished 
memcache flush; empty snapshot
    [junit] 2007-11-09 08:20:49,837 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HStore.close(HStore.java:419): closed -1547818355/info
    [junit] 2007-11-09 08:20:49,837 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.close(HRegion.java:402): closed 
testLogRolling,row1025,1194596368242
    [junit] 2007-11-09 08:20:49,837 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HLog.close(HLog.java:382): closing log writer in 
/hbase/log_140.211.11.75_-2039724685788569167_58358
    [junit] 2007-11-09 08:20:49,838 WARN  [IPC Server handler 3 on 58346] 
org.apache.hadoop.dfs.ReplicationTargetChooser.chooseTarget(ReplicationTargetChooser.java:177):
 Not able to place enough replicas, still in need of 1
    [junit] 2007-11-09 08:20:49,848 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.run(HRegionServer.java:603): telling 
master that region server is shutting down at: 140.211.11.75:58358
    [junit] 2007-11-09 08:20:49,849 DEBUG [IPC Server handler 4 on 60000] 
org.apache.hadoop.hbase.HMaster.regionServerReport(HMaster.java:1316): Region 
server 140.211.11.75:58358: MSG_REPORT_EXITING -- cancelling lease
    [junit] 2007-11-09 08:20:49,849 INFO  [IPC Server handler 4 on 60000] 
org.apache.hadoop.hbase.HMaster.cancelLease(HMaster.java:1438): Cancelling 
lease for 140.211.11.75:58358
    [junit] 2007-11-09 08:20:49,849 INFO  [IPC Server handler 4 on 60000] 
org.apache.hadoop.hbase.HMaster.regionServerReport(HMaster.java:1323): Region 
server 140.211.11.75:58358: MSG_REPORT_EXITING -- lease cancelled
    [junit] 2007-11-09 08:20:49,850 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.run(HRegionServer.java:610): stopping 
server at: 140.211.11.75:58358
    [junit] 2007-11-09 08:20:49,977 INFO  [RegionServer:0.worker] 
org.apache.hadoop.hbase.HRegionServer$Worker.run(HRegionServer.java:920): 
worker thread exiting
    [junit] 2007-11-09 08:20:49,977 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.run(HRegionServer.java:615): 
RegionServer:0 exiting
    [junit] 2007-11-09 08:20:50,947 INFO  [HMaster.metaScanner] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): HMaster.metaScanner exiting
    [junit] 2007-11-09 08:20:50,948 INFO  [HMaster] 
org.apache.hadoop.hbase.Leases.close(Leases.java:109): HMaster closing leases
    [junit] 2007-11-09 08:20:50,947 INFO  [HMaster.rootScanner] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): HMaster.rootScanner exiting
    [junit] 2007-11-09 08:20:50,949 INFO  [HMaster.leaseChecker] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): HMaster.leaseChecker exiting
    [junit] 2007-11-09 08:20:50,949 INFO  [HMaster] 
org.apache.hadoop.hbase.Leases.close(Leases.java:123): HMaster closed leases
    [junit] 2007-11-09 08:20:50,949 INFO  [HMaster] 
org.apache.hadoop.hbase.HMaster.run(HMaster.java:1163): HMaster main thread 
exiting
    [junit] 2007-11-09 08:20:50,949 INFO  [main] 
org.apache.hadoop.hbase.LocalHBaseCluster.shutdown(LocalHBaseCluster.java:226): 
Shutdown HMaster 1 region server(s)
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2007-11-09 08:20:51,709 WARN  [DataNode: 
[/export/home/hudson/hudson/jobs/Hadoop-Patch/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data3,/export/home/hudson/hudson/jobs/Hadoop-Patch/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data4]]
 org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:617): 
java.io.IOException: java.lang.InterruptedException
    [junit]     at 
org.apache.hadoop.fs.ShellCommand.runCommand(ShellCommand.java:59)
    [junit]     at org.apache.hadoop.fs.ShellCommand.run(ShellCommand.java:42)
    [junit]     at org.apache.hadoop.fs.DU.getUsed(DU.java:52)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolume.getDfsUsed(FSDataset.java:299)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getDfsUsed(FSDataset.java:396)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset.getDfsUsed(FSDataset.java:495)
    [junit]     at 
org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:532)
    [junit]     at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1695)
    [junit]     at java.lang.Thread.run(Thread.java:595)

    [junit] Shutting down DataNode 0
    [junit] 2007-11-09 08:20:52,252 WARN  [EMAIL PROTECTED] 
org.apache.hadoop.dfs.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:186):
 PendingReplicationMonitor thread received exception. 
java.lang.InterruptedException: sleep interrupted
    [junit] 2007-11-09 08:20:52,570 ERROR [EMAIL PROTECTED] 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:839): DataXceiver: 
java.io.IOException: df: 
(/export/home/hudson/hudson/jobs/Hadoop-Patch/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data3)
 not a block device, directory or mounted resource
    [junit]     at 
org.apache.hadoop.fs.ShellCommand.runCommand(ShellCommand.java:52)
    [junit]     at org.apache.hadoop.fs.ShellCommand.run(ShellCommand.java:42)
    [junit]     at org.apache.hadoop.fs.DF.getAvailable(DF.java:72)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolume.getAvailable(FSDataset.java:308)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getNextVolume(FSDataset.java:386)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:580)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:1458)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:929)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:824)
    [junit]     at java.lang.Thread.run(Thread.java:595)

    [junit] Exception! java.io.IOException: No such file or directory
    [junit] 2007-11-09 08:20:52,864 ERROR [EMAIL PROTECTED] 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:839): DataXceiver: 
java.io.IOException: df: 
(/export/home/hudson/hudson/jobs/Hadoop-Patch/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data4)
 not a block device, directory or mounted resource
    [junit]     at 
org.apache.hadoop.fs.ShellCommand.runCommand(ShellCommand.java:52)
    [junit]     at org.apache.hadoop.fs.ShellCommand.run(ShellCommand.java:42)
    [junit]     at org.apache.hadoop.fs.DF.getCapacity(DF.java:62)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolume.getCapacity(FSDataset.java:303)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolume.getAvailable(FSDataset.java:307)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getNextVolume(FSDataset.java:386)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:580)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:1458)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:929)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:824)
    [junit]     at java.lang.Thread.run(Thread.java:595)

    [junit] 2007-11-09 08:20:52,864 WARN  [EMAIL PROTECTED] 
org.apache.hadoop.dfs.DataNode$DataTransfer.run(DataNode.java:1668): Failed to 
transfer blk_-379738272651084333 to 127.0.0.1:50011 got java.io.IOException: 
operation failed at /127.0.0.1
    [junit]     at 
org.apache.hadoop.dfs.DataNode.receiveResponse(DataNode.java:725)
    [junit]     at org.apache.hadoop.dfs.DataNode.access$200(DataNode.java:80)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataTransfer.run(DataNode.java:1664)
    [junit]     at java.lang.Thread.run(Thread.java:595)

    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 344.356 sec
    [junit] 2007-11-09 08:20:52,864 WARN  [EMAIL PROTECTED] 
org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:995): Error 
writing reply back to /127.0.0.1 for writing block blk_-3764842785131980349
    [junit] java.net.SocketException: Broken pipe
    [junit]     at java.net.SocketOutputStream.socketWrite0(Native Method)
    [junit]     at 
java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
    [junit]     at 
java.net.SocketOutputStream.write(SocketOutputStream.java:115)
    [junit]     at 
java.io.DataOutputStream.writeShort(DataOutputStream.java:151)
    [junit]     at 
org.apache.hadoop.dfs.DataNode.sendResponse(DataNode.java:737)
    [junit]     at org.apache.hadoop.dfs.DataNode.access$300(DataNode.java:80)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:993)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:824)
    [junit]     at java.lang.Thread.run(Thread.java:595)

    [junit] Exception! java.io.IOException: No such file or directory
    [junit] 2007-11-09 08:20:53,748 ERROR [EMAIL PROTECTED] 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:839): DataXceiver: 
java.io.IOException: No such file or directory
    [junit]     at java.io.UnixFileSystem.createFileExclusively(Native Method)
    [junit]     at java.io.File.createNewFile(File.java:850)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolume.createTmpFile(FSDataset.java:329)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset.createTmpFile(FSDataset.java:606)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:582)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:1458)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:929)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:824)
    [junit]     at java.lang.Thread.run(Thread.java:595)

    [junit] 2007-11-09 08:20:53,748 ERROR [EMAIL PROTECTED] 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:839): DataXceiver: 
java.io.IOException: du: 
/export/home/hudson/hudson/jobs/Hadoop-Patch/workspace/trunk/build/contrib/hbase/test/data/dfs/data:
 No such file or directory
    [junit]     at 
org.apache.hadoop.fs.ShellCommand.runCommand(ShellCommand.java:52)
    [junit]     at org.apache.hadoop.fs.ShellCommand.run(ShellCommand.java:42)
    [junit]     at org.apache.hadoop.fs.DU.getUsed(DU.java:52)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolume.getDfsUsed(FSDataset.java:299)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolume.getAvailable(FSDataset.java:307)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getNextVolume(FSDataset.java:386)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:580)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:1458)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:929)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:824)
    [junit]     at java.lang.Thread.run(Thread.java:595)

    [junit] 2007-11-09 08:20:53,749 ERROR [EMAIL PROTECTED] 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:839): DataXceiver: 
java.io.IOException: No such file or directory
    [junit]     at java.io.UnixFileSystem.createFileExclusively(Native Method)
    [junit]     at java.io.File.createNewFile(File.java:850)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolume.createTmpFile(FSDataset.java:329)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset.createTmpFile(FSDataset.java:606)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:582)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:1458)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:929)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:824)
    [junit]     at java.lang.Thread.run(Thread.java:595)

    [junit] Exception! java.io.IOException: No such file or directory
    [junit] 2007-11-09 08:20:53,800 ERROR [EMAIL PROTECTED] 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:839): DataXceiver: 
java.io.IOException: No such file or directory
    [junit]     at java.io.UnixFileSystem.createFileExclusively(Native Method)
    [junit]     at java.io.File.createNewFile(File.java:850)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset$FSVolume.createTmpFile(FSDataset.java:329)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset.createTmpFile(FSDataset.java:606)
    [junit]     at 
org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:582)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:1458)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:929)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:824)
    [junit]     at java.lang.Thread.run(Thread.java:595)

    [junit] 2007-11-09 08:20:53,801 WARN  [EMAIL PROTECTED] 
org.apache.hadoop.dfs.DataNode$DataTransfer.run(DataNode.java:1668): Failed to 
transfer blk_1352514030345870875 to 127.0.0.1:50011 got java.io.IOException: 
operation failed at /127.0.0.1
    [junit]     at 
org.apache.hadoop.dfs.DataNode.receiveResponse(DataNode.java:725)
    [junit]     at org.apache.hadoop.dfs.DataNode.access$200(DataNode.java:80)
    [junit]     at 
org.apache.hadoop.dfs.DataNode$DataTransfer.run(DataNode.java:1664)
    [junit]     at java.lang.Thread.run(Thread.java:595)
{code}

The test had not reported itself done.  Are these du/df runs and unix process 
invocations of interest? (Check.)
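
For context on that question: per the stack traces above, DF and DU fork an 
external unix command via ShellCommand and block reading its stdout, so a 
non-daemon thread parked in that read after the data directories have been 
deleted would keep the JVM alive even after junit prints its summary.  Here is 
a minimal sketch of that fork-and-read pattern -- my own illustration, not the 
actual org.apache.hadoop.fs.ShellCommand/DU code:

{code}
// Minimal sketch (not Hadoop's ShellCommand/DU) of the fork-and-read
// pattern that DF and DU use; assumes a unix du on the PATH.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class DuProbe {
  public static long getUsedKb(String dir) throws IOException {
    // Fork the external command, as DU.getUsed() does via ShellCommand.
    Process p = Runtime.getRuntime().exec(new String[] {"du", "-sk", dir});
    BufferedReader out =
        new BufferedReader(new InputStreamReader(p.getInputStream()));
    try {
      // This read blocks until the child writes or exits; a non-daemon
      // thread parked here keeps the JVM from exiting after the test.
      String line = out.readLine();
      if (line == null) {
        throw new IOException("no output from du: " + dir);
      }
      return Long.parseLong(line.split("\\s+")[0]);
    } finally {
      out.close();
      p.destroy();  // reap the child so it cannot outlive the test
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println(getUsedKb(args.length > 0 ? args[0] : "."));
  }
}
{code}

If one of these forked du/df children never returns -- say the data dirs were 
already removed at shutdown, as the "No such file or directory" errors above 
suggest -- the reading thread sits in readLine() indefinitely, which would 
match a build that hangs after the last log line.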

> [hbase] TestHStoreFile/TestBloomFilter hang occasionally on hudson AFTER test 
> has finished
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-2040
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2040
>             Project: Hadoop
>          Issue Type: Bug
>          Components: contrib/hbase
>            Reporter: stack
>            Priority: Minor
>         Attachments: endoftesttd.patch
>
>
> Weird.  Last night TestBloomFilter was hung after junit had printed that the 
> test had completed without error.  Just now, I noticed a hung TestHStore -- 
> again after junit had printed that the test had succeeded (Nigel Daley has 
> reported he's seen at least two hangs in TestHStoreFile, perhaps in the same 
> location).  Last night and just now I was unable to get a thread dump.
> Here is the log from around this evening's hang:
> {code}
> ...
>     [junit] 2007-10-12 04:19:28,477 INFO  [main] 
> org.apache.hadoop.hbase.TestHStoreFile.testOutOfRangeMidkeyHalfMapFile(TestHStoreFile.java:366):
>  Last bottom when key > top: zz/zz/1192162768317
>     [junit] 2007-10-12 04:19:28,493 WARN  [IPC Server handler 0 on 36620] 
> org.apache.hadoop.dfs.FSDirectory.unprotectedDelete(FSDirectory.java:400): 
> DIR* FSDirectory.unprotectedDelete: failed to remove 
> /testOutOfRangeMidkeyHalfMapFile because it does not exist
>     [junit] Shutting down the Mini HDFS Cluster
>     [junit] Shutting down DataNode 1
>     [junit] Shutting down DataNode 0
>     [junit] 2007-10-12 04:19:29,316 WARN  [EMAIL PROTECTED] 
> org.apache.hadoop.dfs.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:186):
>  PendingReplicationMonitor thread received exception. 
> java.lang.InterruptedException: sleep interrupted
>     [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 16.274 sec
>     [junit] Running org.apache.hadoop.hbase.TestHTable
>     [junit] Starting DataNode 0 with dfs.data.dir: 
> /export/home/hudson/hudson/jobs/Hadoop-Patch/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data1,/export/home/hudson/hudson/jobs/Hadoop-Patch/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data2
>     [junit] Starting DataNode 1 with dfs.data.dir: 
> /export/home/hudson/hudson/jobs/Hadoop-Patch/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data3,/export/home/hudson/hudson/jobs/Hadoop-Patch/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data4
>     [junit] 2007-10-12 05:21:48,332 INFO  [main] 
> org.apache.hadoop.hbase.HMaster.<init>(HMaster.java:862): Root region dir: 
> /hbase/hregion_-ROOT-,,0
> ...
> {code}
> Notice the hour of elapsed (hung) time in the log above.

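The quoted report mentions being unable to get a thread dump from outside the 
JVM.  One in-JVM fallback -- a sketch only; I have not compared it against the 
attached endoftesttd.patch -- is to print every live thread's stack once the 
test believes it is done:

{code}
// Hedged sketch: dump all live threads from inside the test JVM when an
// external jstack/SIGQUIT capture is not possible on the Hudson box.
import java.util.Map;

public class ThreadDumper {
  public static void dumpAllStacks() {
    Map<Thread, StackTraceElement[]> stacks = Thread.getAllStackTraces();
    for (Map.Entry<Thread, StackTraceElement[]> e : stacks.entrySet()) {
      Thread t = e.getKey();
      System.err.println("Thread " + t.getName() + " (daemon="
          + t.isDaemon() + ", state=" + t.getState() + ")");
      for (StackTraceElement frame : e.getValue()) {
        System.err.println("    at " + frame);
      }
    }
  }
}
{code}

Calling dumpAllStacks() from a test's tearDown (or from a watchdog timer) 
would show which non-daemon thread -- a DataNode DataXceiver, a DU fork 
reader -- is still alive after junit reports success.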
-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
