I ran into a test failure on my local machine that went away when I re-ran it. It wasn't an actual assertion failure, but a timeout of some sort.
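In case anyone wants to double-check a suite like this locally, what I do is re-run just the suspect test rather than the whole test target. A rough sketch only, using TestScanner from the quoted log as the example and assuming trunk's build.xml still takes the usual Hadoop-style "testcase" property (worth confirming against build.xml):

    # from the top of the trunk checkout, run only the named suite
    ant test -Dtestcase=TestScanner

If it passes on a quiet machine but times out under load, that points at the timeout rather than the test logic.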
On Sun, Jun 14, 2009 at 10:48 PM, Apache Hudson Server <[email protected]> wrote:

> See http://hudson.zones.apache.org/hudson/job/HBase-Patch/638/changes
>
> Changes:
>
> [rawson] HBASE-1500, 1513, 1516, 1520
>
> ------------------------------------------
> [...truncated 21462 lines...]
> [...]
> [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 42.945 sec
> [junit] Running org.apache.hadoop.hbase.regionserver.TestStore
> [...]
> [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.956 sec
> [junit] Running org.apache.hadoop.hbase.regionserver.TestStoreFile
> [...]
> [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 43.956 sec
> [junit] Running org.apache.hadoop.hbase.regionserver.TestStoreScanner
> [junit] Tests run: 14, Failures: 0, Errors: 0, Time elapsed: 0.073 sec
> [junit] Running org.apache.hadoop.hbase.regionserver.TestWildcardColumnTracker
> [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.32 sec
> [junit] Running org.apache.hadoop.hbase.util.TestBase64
> [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.133 sec
> [junit] Running org.apache.hadoop.hbase.util.TestBytes
> [...]
> [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 0.082 sec
> [junit] Running org.apache.hadoop.hbase.util.TestKeying
> [...]
> [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.079 sec
> [junit] Running org.apache.hadoop.hbase.util.TestRootPath
> [...]
> [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.077 sec
>
> BUILD FAILED
> http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/build.xml:460: Tests failed!
>
> Total time: 32 minutes 15 seconds
> [locks-and-latches] Releasing all the locks
> [locks-and-latches] All the locks released
> Recording test results
> Publishing Clover coverage report...
