See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/283/changes

Changes:

[stack] HADOOP-2088 Make hbase runnable in $HADOOP_HOME/build(/contrib/hbase)

[acmurthy] Moved the change-log for HADOOP-1857 to the 'NEW FEATURES' section 
from the 'IMPROVEMENTS' section.

[acmurthy] HADOOP-2103.  Fix minor javadoc bugs introduced by HADOOP-2046. 
Contributed by Nigel Daley.

[ddas] HADOOP-1857.  Ability to run a script when a task fails to capture stack 
traces. Contributed by Amareshwari Sri Ramadasu.
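
For illustration, a minimal sketch of how a per-task failure script might be wired up through the job configuration. The setMapDebugScript/setReduceDebugScript calls, the script path, and the class name are assumptions based on the JIRA description above, not code taken from this build:

    import org.apache.hadoop.mapred.JobConf;

    public class DebugScriptExample {
      public static void main(String[] args) {
        // Assumed API: point the framework at a script shipped with the job;
        // it would be run when a map or reduce task fails, e.g. to capture
        // stack traces or core dumps for later inspection.
        JobConf conf = new JobConf(DebugScriptExample.class);
        conf.setMapDebugScript("./debug-script.sh");     // hypothetical script
        conf.setReduceDebugScript("./debug-script.sh");  // hypothetical script
        // ... normal input/output setup and job submission would follow ...
      }
    }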

[ddas] HADOOP-1210.  Log counters in job history. Contributed by Owen O'Malley.

[cutting] HADOOP-1622.  Permit multiple jars to be added to a job.  Contributed 
by Dennis Kubes.
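
For illustration, a hedged sketch of getting more than one jar onto a job's classpath. Whether HADOOP-1622 itself works through DistributedCache, a command-line option, or another mechanism is not stated here; the jar paths are made up:

    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class MultiJarExample {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(MultiJarExample.class);
        // Hypothetical jars already copied into DFS; each is added to the
        // classpath of every task launched for this job.
        DistributedCache.addFileToClassPath(new Path("/libs/parser.jar"), conf);
        DistributedCache.addFileToClassPath(new Path("/libs/codecs.jar"), conf);
        // ... remaining job configuration and submission would follow ...
      }
    }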

[taton] HADOOP-1848 Major rewrite of the Eclipse plug-in. The new design lets 
the plug-in use the RPC interface to the Hadoop DFS and Map/Reduce instead of 
relying on shell command-line tools. This also includes support for SOCKS proxy 
access to a DFS and to a Map/Reduce tracker. (taton)
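
As a rough sketch of the client-side configuration that SOCKS access implies: the property names below come from Hadoop's socket-factory support and may not be exactly what the plug-in sets, and the proxy host/port is invented; treat all of them as assumptions:

    import org.apache.hadoop.conf.Configuration;

    public class SocksClientConfigExample {
      public static Configuration socksConfiguration() {
        Configuration conf = new Configuration();
        // Assumed properties: route Hadoop RPC sockets through a SOCKS proxy
        // instead of connecting to the DFS/Map-Reduce cluster directly.
        conf.set("hadoop.rpc.socket.factory.class.default",
                 "org.apache.hadoop.net.SocksSocketFactory");
        conf.set("hadoop.socks.server", "localhost:1080");  // hypothetical proxy
        return conf;
      }
    }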

[acmurthy] HADOOP-2098.  Log start & completion of empty jobs to JobHistory, 
which also ensures that we close the file-descriptor of the job's history log 
opened during job-submission. Contributed by Amar Kamat.

[acmurthy] HADOOP-2096.  Close open file-descriptors held by streams while 
localizing job.xml in the JobTracker and while displaying it on the webui in 
jobconf.jsp. Contributed by Amar Kamat.
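
A generic sketch of the pattern such a fix implies: close the streams behind a localized job.xml in a finally block so their file descriptors are released even on error. This is not the JobTracker's actual code; the method, paths, and parameters are made up:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class CloseStreamExample {
      /** Copy a job.xml-style file to another location, always releasing
       *  both descriptors, even if the copy throws. */
      public static void localize(FileSystem srcFs, Path jobXml,
                                  FileSystem dstFs, Path localCopy)
          throws IOException {
        InputStream in = srcFs.open(jobXml);
        OutputStream out = dstFs.create(localCopy);
        try {
          IOUtils.copyBytes(in, out, 4096, false);  // false: don't auto-close
        } finally {
          IOUtils.closeStream(in);   // the class of descriptor leak addressed
          IOUtils.closeStream(out);  // by changes like HADOOP-2096
        }
      }
    }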

[acmurthy] HADOOP-1642.  Ensure jobids generated by LocalJobRunner are unique 
to avoid collisions and hence job-failures. Contributed by Doug Cutting.

[acmurthy] HADOOP-2100.  Remove faulty check for existence of $HADOOP_PID_DIR 
and let 'mkdir -p' check & create it. Contributed by Michael Bieniosek.

------------------------------------------
[...truncated 42752 lines...]
    [junit] 2007-10-26 19:39:56,401 INFO  [HMaster] 
org.apache.hadoop.hbase.HMaster.run(HMaster.java:1151): HMaster main thread 
exiting
    [junit] 2007-10-26 19:39:56,402 INFO  [main] 
org.apache.hadoop.hbase.LocalHBaseCluster.shutdown(LocalHBaseCluster.java:225): 
Shutdown HMaster 1 region server(s)
    [junit] 2007-10-26 19:39:56,403 INFO  [main] 
org.apache.hadoop.hbase.MiniHBaseCluster.shutdown(MiniHBaseCluster.java:233): 
Shutting down Mini DFS 
    [junit] 2007-10-26 19:39:57,147 WARN  [EMAIL PROTECTED] 
org.apache.hadoop.dfs.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:186):
 PendingReplicationMonitor thread received exception. 
java.lang.InterruptedException: sleep interrupted
    [junit] 2007-10-26 19:39:57,508 INFO  [main] 
org.apache.hadoop.hbase.MiniHBaseCluster.shutdown(MiniHBaseCluster.java:237): 
Shutting down FileSystem
    [junit] 2007-10-26 19:40:02,558 INFO  [main] 
org.apache.hadoop.hbase.HMaster.<init>(HMaster.java:883): Root region dir: 
/hbase/hregion_-1211131136111139257870-26-18587217-11051-1887-41-2
    [junit] 2007-10-26 19:40:02,572 INFO  [main] 
org.apache.hadoop.hbase.HMaster.<init>(HMaster.java:892): bootstrap: creating 
ROOT and first META regions
    [junit] 2007-10-26 19:40:02,775 INFO  [main] 
org.apache.hadoop.hbase.HLog.rollWriter(HLog.java:298): new log writer created 
at 
/hbase/hregion_-1211131136111139257870-26-18587217-11051-1887-41-2/log/hlog.dat.000
    [junit] 2007-10-26 19:40:02,814 DEBUG [main] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:183): starting 
-1211131136111139257870-26-18587217-11051-1887-41-2/info (no reconstruction log)
    [junit] 2007-10-26 19:40:02,816 DEBUG [main] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:219): maximum sequence id for 
hstore -1211131136111139257870-26-18587217-11051-1887-41-2/info is -1
    [junit] 2007-10-26 19:40:02,818 DEBUG [main] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:287): Next sequence id for 
region -ROOT-,,0 is 0
    [junit] 2007-10-26 19:40:02,820 INFO  [main] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:313): region -ROOT-,,0 
available
    [junit] 2007-10-26 19:40:02,859 INFO  [main] 
org.apache.hadoop.hbase.HLog.rollWriter(HLog.java:298): new log writer created 
at 
/hbase/hregion_-4998637-20-92-37102-4963-105-387449-87-58-9095-7951/log/hlog.dat.000
    [junit] 2007-10-26 19:40:02,877 DEBUG [main] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:183): starting 
-4998637-20-92-37102-4963-105-387449-87-58-9095-7951/info (no reconstruction 
log)
    [junit] 2007-10-26 19:40:02,880 DEBUG [main] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:219): maximum sequence id for 
hstore -4998637-20-92-37102-4963-105-387449-87-58-9095-7951/info is -1
    [junit] 2007-10-26 19:40:02,888 DEBUG [main] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:287): Next sequence id for 
region .META.,,1 is 0
    [junit] 2007-10-26 19:40:02,891 INFO  [main] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:313): region .META.,,1 
available
    [junit] 2007-10-26 19:40:02,894 DEBUG [main] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:845): Started 
memcache flush for region -ROOT-,,0. Size 86.0
    [junit] 2007-10-26 19:40:02,894 DEBUG [main] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:874): 
Snapshotted memcache for region -ROOT-,,0 with sequence id 1 and entries 1
    [junit] 2007-10-26 19:40:02,991 DEBUG [main] 
org.apache.hadoop.hbase.HStore.flushCacheHelper(HStore.java:505): Added 
-1211131136111139257870-26-18587217-11051-1887-41-2/info/1047543257170273065 
with sequence id 1 and size 210.0
    [junit] 2007-10-26 19:40:02,992 DEBUG [main] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:917): Finished 
memcache flush for region -ROOT-,,0 in 99ms
    [junit] 2007-10-26 19:40:02,993 DEBUG [main] 
org.apache.hadoop.hbase.HStore.close(HStore.java:420): closed 
-1211131136111139257870-26-18587217-11051-1887-41-2/info
    [junit] 2007-10-26 19:40:02,994 INFO  [main] 
org.apache.hadoop.hbase.HRegion.close(HRegion.java:400): closed -ROOT-,,0
    [junit] 2007-10-26 19:40:02,995 DEBUG [main] 
org.apache.hadoop.hbase.HLog.close(HLog.java:382): closing log writer in 
/hbase/hregion_-1211131136111139257870-26-18587217-11051-1887-41-2/log
    [junit] 2007-10-26 19:40:03,024 DEBUG [main] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:845): Started 
memcache flush for region .META.,,1. Size 0.0
    [junit] 2007-10-26 19:40:03,026 DEBUG [main] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:863): Finished 
memcache flush; empty snapshot
    [junit] 2007-10-26 19:40:03,027 DEBUG [main] 
org.apache.hadoop.hbase.HStore.close(HStore.java:420): closed 
-4998637-20-92-37102-4963-105-387449-87-58-9095-7951/info
    [junit] 2007-10-26 19:40:03,028 INFO  [main] 
org.apache.hadoop.hbase.HRegion.close(HRegion.java:400): closed .META.,,1
    [junit] 2007-10-26 19:40:03,029 DEBUG [main] 
org.apache.hadoop.hbase.HLog.close(HLog.java:382): closing log writer in 
/hbase/hregion_-4998637-20-92-37102-4963-105-387449-87-58-9095-7951/log
    [junit] 2007-10-26 19:40:03,144 INFO  [main] 
org.apache.hadoop.hbase.HMaster.<init>(HMaster.java:972): HMaster initialized 
on 127.0.0.1:60000
    [junit] 2007-10-26 19:40:03,153 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.reportForDuty(HRegionServer.java:777): 
Telling master we are up
    [junit] 2007-10-26 19:40:03,160 INFO  [IPC Server handler 2 on 60000] 
org.apache.hadoop.hbase.HMaster.regionServerStartup(HMaster.java:1233): 
received start message from: 140.211.11.75:58444
    [junit] 2007-10-26 19:40:03,163 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.reportForDuty(HRegionServer.java:793): 
Done telling master we are up
    [junit] 2007-10-26 19:40:03,164 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.init(HRegionServer.java:628): Config from 
master: fs.default.name=localhost:58422
    [junit] 2007-10-26 19:40:03,165 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.init(HRegionServer.java:628): Config from 
master: hbase.rootdir=/hbase
    [junit] 2007-10-26 19:40:03,165 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.setupHLog(HRegionServer.java:648): Root 
dir: /hbase
    [junit] 2007-10-26 19:40:03,167 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.setupHLog(HRegionServer.java:653): Log 
dir /hbase/log_140.211.11.75_-5165221264789697286_58444
    [junit] 2007-10-26 19:40:03,164 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.runCommand(TestHBaseShell.java:161):
 Running command: create table testInsertSelectDelete (testInsertSelectDelete);
    [junit] 2007-10-26 19:40:03,170 DEBUG [main] 
org.apache.hadoop.hbase.shell.TableFormatterFactory.<init>(TableFormatterFactory.java:65):
 Table formatter class: 
org.apache.hadoop.hbase.shell.formatter.AsciiTableFormatter
    [junit] 2007-10-26 19:40:03,232 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HLog.rollWriter(HLog.java:298): new log writer created 
at /hbase/log_140.211.11.75_-5165221264789697286_58444/hlog.dat.000
    [junit] 2007-10-26 19:40:03,235 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.startServiceThreads(HRegionServer.java:710):
 HRegionServer started at: 140.211.11.75:58444
    [junit] 2007-10-26 19:40:03,237 INFO  [IPC Server handler 4 on 60000] 
org.apache.hadoop.hbase.HMaster.assignRegionsToOneServer(HMaster.java:1741): 
assigning region -ROOT-,,0 to the only server 140.211.11.75:58444
    [junit] 2007-10-26 19:40:03,239 INFO  [RegionServer:0.worker] 
org.apache.hadoop.hbase.HRegionServer$Worker.run(HRegionServer.java:875): 
MSG_REGION_OPEN : regionname: -ROOT-,,0, startKey: <>, tableDesc: {name: 
-ROOT-, families: {info:={name: info, max versions: 1, compression: NONE, in 
memory: false, max length: 2147483647, bloom filter: none}}}
    [junit] 2007-10-26 19:40:03,274 DEBUG [RegionServer:0.worker] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:183): starting 
-1211131136111139257870-26-18587217-11051-1887-41-2/info (no reconstruction log)
    [junit] 2007-10-26 19:40:03,339 DEBUG [RegionServer:0.worker] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:219): maximum sequence id for 
hstore -1211131136111139257870-26-18587217-11051-1887-41-2/info is 1
    [junit] 2007-10-26 19:40:03,465 DEBUG [RegionServer:0.worker] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:287): Next sequence id for 
region -ROOT-,,0 is 2
    [junit] 2007-10-26 19:40:03,471 INFO  [RegionServer:0.worker] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:313): region -ROOT-,,0 
available
    [junit] 2007-10-26 19:40:03,472 DEBUG [RegionServer:0.worker] 
org.apache.hadoop.hbase.HLog.setSequenceNumber(HLog.java:235): changing 
sequence number from 0 to 2
    [junit] 2007-10-26 19:40:04,285 INFO  [IPC Server handler 2 on 60000] 
org.apache.hadoop.hbase.HMaster.processMsgs(HMaster.java:1483): 
140.211.11.75:58444 serving -ROOT-,,0
    [junit] 2007-10-26 19:40:04,286 INFO  [HMaster.rootScanner] 
org.apache.hadoop.hbase.HMaster$BaseScanner.scanRegion(HMaster.java:209): 
HMaster.rootScanner scanning meta region regionname: -ROOT-,,0, startKey: <>, 
server: 140.211.11.75:58444}
    [junit] 2007-10-26 19:40:04,331 DEBUG [HMaster.rootScanner] 
org.apache.hadoop.hbase.HMaster$BaseScanner.scanRegion(HMaster.java:240): 
HMaster.rootScanner scanner: 103583847770539314 regioninfo: {regionname: 
.META.,,1, startKey: <>, tableDesc: {name: .META., families: {info:={name: 
info, max versions: 1, compression: NONE, in memory: false, max length: 
2147483647, bloom filter: none}}}}, server: , startCode: -1
    [junit] 2007-10-26 19:40:04,332 DEBUG [HMaster.rootScanner] 
org.apache.hadoop.hbase.HMaster$BaseScanner.checkAssigned(HMaster.java:444): 
Checking .META.,,1 is assigned
    [junit] 2007-10-26 19:40:04,332 DEBUG [HMaster.rootScanner] 
org.apache.hadoop.hbase.HMaster$BaseScanner.checkAssigned(HMaster.java:452): 
Current assignment of .META.,,1 is no good
    [junit] 2007-10-26 19:40:05,227 INFO  [HMaster.rootScanner] 
org.apache.hadoop.hbase.HMaster$BaseScanner.scanRegion(HMaster.java:286): 
HMaster.rootScanner scan of meta region regionname: -ROOT-,,0, startKey: <>, 
server: 140.211.11.75:58444} complete
    [junit] 2007-10-26 19:40:05,325 INFO  [IPC Server handler 1 on 60000] 
org.apache.hadoop.hbase.HMaster.assignRegionsToOneServer(HMaster.java:1741): 
assigning region .META.,,1 to the only server 140.211.11.75:58444
    [junit] 2007-10-26 19:40:05,536 INFO  [RegionServer:0.worker] 
org.apache.hadoop.hbase.HRegionServer$Worker.run(HRegionServer.java:875): 
MSG_REGION_OPEN : regionname: .META.,,1, startKey: <>, tableDesc: {name: 
.META., families: {info:={name: info, max versions: 1, compression: NONE, in 
memory: false, max length: 2147483647, bloom filter: none}}}
    [junit] 2007-10-26 19:40:05,573 DEBUG [RegionServer:0.worker] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:183): starting 
-4998637-20-92-37102-4963-105-387449-87-58-9095-7951/info (no reconstruction 
log)
    [junit] 2007-10-26 19:40:05,576 DEBUG [RegionServer:0.worker] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:219): maximum sequence id for 
hstore -4998637-20-92-37102-4963-105-387449-87-58-9095-7951/info is -1
    [junit] 2007-10-26 19:40:05,577 DEBUG [RegionServer:0.worker] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:287): Next sequence id for 
region .META.,,1 is 0
    [junit] 2007-10-26 19:40:05,579 INFO  [RegionServer:0.worker] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:313): region .META.,,1 
available
    [junit] 2007-10-26 19:40:06,546 INFO  [IPC Server handler 4 on 60000] 
org.apache.hadoop.hbase.HMaster.processMsgs(HMaster.java:1483): 
140.211.11.75:58444 serving .META.,,1
    [junit] 2007-10-26 19:40:06,553 DEBUG [HMaster] 
org.apache.hadoop.hbase.HMaster.run(HMaster.java:1071): Main processing loop: 
PendingOpenOperation from 140.211.11.75:58444
    [junit] 2007-10-26 19:40:06,554 INFO  [HMaster] 
org.apache.hadoop.hbase.HMaster$PendingOpenReport.process(HMaster.java:2298): 
regionname: .META.,,1, startKey: <>, tableDesc: {name: .META., families: 
{info:={name: info, max versions: 1, compression: NONE, in memory: false, max 
length: 2147483647, bloom filter: none}}} open on 140.211.11.75:58444
    [junit] 2007-10-26 19:40:06,554 INFO  [HMaster] 
org.apache.hadoop.hbase.HMaster$PendingOpenReport.process(HMaster.java:2341): 
updating row .META.,,1 in table -ROOT-,,0 with startcode -5165221264789697286 
and server 140.211.11.75:58444
    [junit] 2007-10-26 19:40:06,557 DEBUG [HMaster] 
org.apache.hadoop.hbase.HMaster$PendingOpenReport.process(HMaster.java:2359): 
Adding regionname: .META.,,1, startKey: <.META.,,1>, server: 
140.211.11.75:58444} to regions to scan
    [junit] 2007-10-26 19:40:06,557 INFO  [HMaster.metaScanner] 
org.apache.hadoop.hbase.HMaster$BaseScanner.scanRegion(HMaster.java:209): 
HMaster.metaScanner scanning meta region regionname: .META.,,1, startKey: 
<.META.,,1>, server: 140.211.11.75:58444}
    [junit] 2007-10-26 19:40:06,604 INFO  [HMaster.metaScanner] 
org.apache.hadoop.hbase.HMaster$BaseScanner.scanRegion(HMaster.java:286): 
HMaster.metaScanner scan of meta region regionname: .META.,,1, startKey: 
<.META.,,1>, server: 140.211.11.75:58444} complete
    [junit] 2007-10-26 19:40:07,345 INFO  [IPC Server handler 3 on 60000] 
org.apache.hadoop.hbase.HLog.rollWriter(HLog.java:298): new log writer created 
at 
/hbase/hregion_-1069215-243798-755-104-40-11-936895-3938-50-42-83-49/log/hlog.dat.000
    [junit] 2007-10-26 19:40:07,384 DEBUG [IPC Server handler 3 on 60000] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:183): starting 
-1069215-243798-755-104-40-11-936895-3938-50-42-83-49/testInsertSelectDelete 
(no reconstruction log)
    [junit] 2007-10-26 19:40:07,386 DEBUG [IPC Server handler 3 on 60000] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:219): maximum sequence id for 
hstore 
-1069215-243798-755-104-40-11-936895-3938-50-42-83-49/testInsertSelectDelete is 
-1
    [junit] 2007-10-26 19:40:07,387 DEBUG [IPC Server handler 3 on 60000] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:287): Next sequence id for 
region testInsertSelectDelete,,1193427603174 is 0
    [junit] 2007-10-26 19:40:07,389 INFO  [IPC Server handler 3 on 60000] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:313): region 
testInsertSelectDelete,,1193427603174 available
    [junit] 2007-10-26 19:40:07,391 DEBUG [IPC Server handler 3 on 60000] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:845): Started 
memcache flush for region testInsertSelectDelete,,1193427603174. Size 0.0
    [junit] 2007-10-26 19:40:07,391 DEBUG [IPC Server handler 3 on 60000] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:863): Finished 
memcache flush; empty snapshot
    [junit] 2007-10-26 19:40:07,392 DEBUG [IPC Server handler 3 on 60000] 
org.apache.hadoop.hbase.HStore.close(HStore.java:420): closed 
-1069215-243798-755-104-40-11-936895-3938-50-42-83-49/testInsertSelectDelete
    [junit] 2007-10-26 19:40:07,392 INFO  [IPC Server handler 3 on 60000] 
org.apache.hadoop.hbase.HRegion.close(HRegion.java:400): closed 
testInsertSelectDelete,,1193427603174
    [junit] 2007-10-26 19:40:07,392 DEBUG [IPC Server handler 3 on 60000] 
org.apache.hadoop.hbase.HLog.close(HLog.java:382): closing log writer in 
/hbase/hregion_-1069215-243798-755-104-40-11-936895-3938-50-42-83-49/log
    [junit] 2007-10-26 19:40:07,406 INFO  [IPC Server handler 3 on 60000] 
org.apache.hadoop.hbase.HMaster.createTable(HMaster.java:2425): created table 
testInsertSelectDelete
    [junit] 2007-10-26 19:40:07,408 DEBUG [main] 
org.apache.hadoop.hbase.HConnectionManager$TableServers.getTableServers(HConnectionManager.java:298):
 No servers for testInsertSelectDelete. Doing a find...
    [junit] 2007-10-26 19:40:07,436 DEBUG [main] 
org.apache.hadoop.hbase.HConnectionManager$TableServers.scanOneMetaRegion(HConnectionManager.java:682):
 Found 1 region(s) for .META. at address: 140.211.11.75:58444, regioninfo: 
regionname: -ROOT-,,0, startKey: <>, tableDesc: {name: -ROOT-, families: 
{info:={name: info, max versions: 1, compression: NONE, in memory: false, max 
length: 2147483647, bloom filter: none}}}
    [junit] 2007-10-26 19:40:07,440 DEBUG [main] 
org.apache.hadoop.hbase.HConnectionManager$TableServers.scanOneMetaRegion(HConnectionManager.java:732):
 no server address for regionname: testInsertSelectDelete,,1193427603174, 
startKey: <>, tableDesc: {name: testInsertSelectDelete, families: 
{testInsertSelectDelete:={name: testInsertSelectDelete, max versions: 3, 
compression: NONE, in memory: false, max length: 2147483647, bloom filter: 
none}}}
    [junit] 2007-10-26 19:40:07,442 DEBUG [main] 
org.apache.hadoop.hbase.HConnectionManager$TableServers.scanOneMetaRegion(HConnectionManager.java:768):
 Sleeping. Table testInsertSelectDelete not currently being served.
    [junit] 2007-10-26 19:40:07,564 INFO  [IPC Server handler 2 on 60000] 
org.apache.hadoop.hbase.HMaster.assignRegionsToOneServer(HMaster.java:1741): 
assigning region testInsertSelectDelete,,1193427603174 to the only server 
140.211.11.75:58444
    [junit] 2007-10-26 19:40:07,566 INFO  [RegionServer:0.worker] 
org.apache.hadoop.hbase.HRegionServer$Worker.run(HRegionServer.java:875): 
MSG_REGION_OPEN : regionname: testInsertSelectDelete,,1193427603174, startKey: 
<>, tableDesc: {name: testInsertSelectDelete, families: 
{testInsertSelectDelete:={name: testInsertSelectDelete, max versions: 3, 
compression: NONE, in memory: false, max length: 2147483647, bloom filter: 
none}}}
    [junit] 2007-10-26 19:40:07,570 DEBUG [RegionServer:0.worker] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:183): starting 
-1069215-243798-755-104-40-11-936895-3938-50-42-83-49/testInsertSelectDelete 
(no reconstruction log)
    [junit] 2007-10-26 19:40:07,572 DEBUG [RegionServer:0.worker] 
org.apache.hadoop.hbase.HStore.<init>(HStore.java:219): maximum sequence id for 
hstore 
-1069215-243798-755-104-40-11-936895-3938-50-42-83-49/testInsertSelectDelete is 
-1
    [junit] 2007-10-26 19:40:07,573 DEBUG [RegionServer:0.worker] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:287): Next sequence id for 
region testInsertSelectDelete,,1193427603174 is 0
    [junit] 2007-10-26 19:40:07,575 INFO  [RegionServer:0.worker] 
org.apache.hadoop.hbase.HRegion.<init>(HRegion.java:313): region 
testInsertSelectDelete,,1193427603174 available
    [junit] 2007-10-26 19:40:08,574 INFO  [IPC Server handler 1 on 60000] 
org.apache.hadoop.hbase.HMaster.processMsgs(HMaster.java:1483): 
140.211.11.75:58444 serving testInsertSelectDelete,,1193427603174
    [junit] 2007-10-26 19:40:08,575 DEBUG [HMaster] 
org.apache.hadoop.hbase.HMaster.run(HMaster.java:1071): Main processing loop: 
PendingOpenOperation from 140.211.11.75:58444
    [junit] 2007-10-26 19:40:08,576 INFO  [HMaster] 
org.apache.hadoop.hbase.HMaster$PendingOpenReport.process(HMaster.java:2298): 
regionname: testInsertSelectDelete,,1193427603174, startKey: <>, tableDesc: 
{name: testInsertSelectDelete, families: {testInsertSelectDelete:={name: 
testInsertSelectDelete, max versions: 3, compression: NONE, in memory: false, 
max length: 2147483647, bloom filter: none}}} open on 140.211.11.75:58444
    [junit] 2007-10-26 19:40:08,576 INFO  [HMaster] 
org.apache.hadoop.hbase.HMaster$PendingOpenReport.process(HMaster.java:2341): 
updating row testInsertSelectDelete,,1193427603174 in table .META.,,1 with 
startcode -5165221264789697286 and server 140.211.11.75:58444
    [junit] 2007-10-26 19:40:12,442 DEBUG [main] 
org.apache.hadoop.hbase.HConnectionManager$TableServers.scanOneMetaRegion(HConnectionManager.java:777):
 Wake. Retry finding table testInsertSelectDelete
    [junit] 2007-10-26 19:40:12,448 DEBUG [main] 
org.apache.hadoop.hbase.HConnectionManager$TableServers.scanOneMetaRegion(HConnectionManager.java:682):
 Found 1 region(s) for testInsertSelectDelete at address: 140.211.11.75:58444, 
regioninfo: regionname: .META.,,1, startKey: <>, tableDesc: {name: .META., 
families: {info:={name: info, max versions: 1, compression: NONE, in memory: 
false, max length: 2147483647, bloom filter: none}}}
    [junit] 2007-10-26 19:40:12,449 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.dumpStdout(TestHBaseShell.java:172):
 STDOUT: Creating table... Please wait.

    [junit] 2007-10-26 19:40:12,458 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.runCommand(TestHBaseShell.java:161):
 Running command: insert into testInsertSelectDelete (testInsertSelectDelete) 
values ('testInsertSelectDelete') where row='testInsertSelectDelete';
    [junit] 2007-10-26 19:40:12,459 DEBUG [main] 
org.apache.hadoop.hbase.shell.TableFormatterFactory.<init>(TableFormatterFactory.java:65):
 Table formatter class: 
org.apache.hadoop.hbase.shell.formatter.AsciiTableFormatter
    [junit] 2007-10-26 19:40:12,470 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.dumpStdout(TestHBaseShell.java:172):
 STDOUT: 
    [junit] 2007-10-26 19:40:12,470 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.runCommand(TestHBaseShell.java:161):
 Running command: insert into testInsertSelectDelete (testInsertSelectDelete) 
values ('testInsertSelectDelete') where row="testInsertSelectDelete";
    [junit] 2007-10-26 19:40:12,470 DEBUG [main] 
org.apache.hadoop.hbase.shell.TableFormatterFactory.<init>(TableFormatterFactory.java:65):
 Table formatter class: 
org.apache.hadoop.hbase.shell.formatter.AsciiTableFormatter
    [junit] 2007-10-26 19:40:12,473 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.dumpStdout(TestHBaseShell.java:172):
 STDOUT: 
    [junit] 2007-10-26 19:40:12,473 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.runCommand(TestHBaseShell.java:161):
 Running command: insert into testInsertSelectDelete (testInsertSelectDelete) 
values ("testInsertSelectDelete") where row="testInsertSelectDelete";
    [junit] 2007-10-26 19:40:12,474 DEBUG [main] 
org.apache.hadoop.hbase.shell.TableFormatterFactory.<init>(TableFormatterFactory.java:65):
 Table formatter class: 
org.apache.hadoop.hbase.shell.formatter.AsciiTableFormatter
    [junit] 2007-10-26 19:40:12,477 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.dumpStdout(TestHBaseShell.java:172):
 STDOUT: 
    [junit] 2007-10-26 19:40:12,477 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.runCommand(TestHBaseShell.java:161):
 Running command: select "testInsertSelectDelete" from "testInsertSelectDelete" 
where row="testInsertSelectDelete";
    [junit] 2007-10-26 19:40:12,478 DEBUG [main] 
org.apache.hadoop.hbase.shell.TableFormatterFactory.<init>(TableFormatterFactory.java:65):
 Table formatter class: 
org.apache.hadoop.hbase.shell.formatter.AsciiTableFormatter
    [junit] 2007-10-26 19:40:12,707 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.dumpStdout(TestHBaseShell.java:172):
 STDOUT: 

    [junit] 2007-10-26 19:40:12,708 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.runCommand(TestHBaseShell.java:161):
 Running command: delete "testInsertSelectDelete:" from 
"testInsertSelectDelete" where row="testInsertSelectDelete";
    [junit] 2007-10-26 19:40:12,709 DEBUG [main] 
org.apache.hadoop.hbase.shell.TableFormatterFactory.<init>(TableFormatterFactory.java:65):
 Table formatter class: 
org.apache.hadoop.hbase.shell.formatter.AsciiTableFormatter
    [junit] 2007-10-26 19:40:12,712 INFO  [main] 
org.apache.hadoop.hbase.shell.TestHBaseShell.dumpStdout(TestHBaseShell.java:172):
 STDOUT: 
    [junit] 2007-10-26 19:40:12,714 DEBUG [main] 
org.apache.hadoop.hbase.LocalHBaseCluster.shutdown(LocalHBaseCluster.java:201): 
Shutting down HBase Cluster
    [junit] 2007-10-26 19:40:13,622 INFO  [HMaster] 
org.apache.hadoop.hbase.HMaster.letRegionServersShutdown(HMaster.java:1211): 
Waiting on following regionserver(s) to go down (or region server lease 
expiration, whichever happens first): [address: 140.211.11.75:58444, startcode: 
-5165221264789697286, load: (requests: 15 regions: 3)]
    [junit] 2007-10-26 19:40:13,623 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.run(HRegionServer.java:502): Got 
regionserver stop message
    [junit] 2007-10-26 19:40:13,624 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.Leases.close(Leases.java:109): RegionServer:0 closing 
leases
    [junit] 2007-10-26 19:40:13,625 INFO  [RegionServer:0.leaseChecker] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): RegionServer:0.leaseChecker 
exiting
    [junit] 2007-10-26 19:40:13,625 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.Leases.close(Leases.java:123): RegionServer:0 closed 
leases
    [junit] 2007-10-26 19:40:13,626 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.closeAllRegions(HRegionServer.java:971): 
closing region -ROOT-,,0
    [junit] 2007-10-26 19:40:13,626 INFO  [RegionServer:0.logRoller] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): RegionServer:0.logRoller 
exiting
    [junit] 2007-10-26 19:40:13,627 INFO  [RegionServer:0.cacheFlusher] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): RegionServer:0.cacheFlusher 
exiting
    [junit] 2007-10-26 19:40:13,627 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:845): Started 
memcache flush for region -ROOT-,,0. Size 92.0
    [junit] 2007-10-26 19:40:13,628 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:874): 
Snapshotted memcache for region -ROOT-,,0 with sequence id 11 and entries 2
    [junit] 2007-10-26 19:40:13,631 INFO  
[RegionServer:0.splitOrCompactChecker] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): 
RegionServer:0.splitOrCompactChecker exiting
    [junit] 2007-10-26 19:40:13,632 INFO  [RegionServer:0.worker] 
org.apache.hadoop.hbase.HRegionServer$Worker.run(HRegionServer.java:920): 
worker thread exiting
    [junit] 2007-10-26 19:40:13,728 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HStore.flushCacheHelper(HStore.java:505): Added 
-1211131136111139257870-26-18587217-11051-1887-41-2/info/8154541274702122107 
with sequence id 11 and size 230.0
    [junit] 2007-10-26 19:40:13,729 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:917): Finished 
memcache flush for region -ROOT-,,0 in 102ms
    [junit] 2007-10-26 19:40:13,730 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HStore.close(HStore.java:420): closed 
-1211131136111139257870-26-18587217-11051-1887-41-2/info
    [junit] 2007-10-26 19:40:13,731 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.close(HRegion.java:400): closed -ROOT-,,0
    [junit] 2007-10-26 19:40:13,731 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.closeAllRegions(HRegionServer.java:971): 
closing region .META.,,1
    [junit] 2007-10-26 19:40:13,732 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:845): Started 
memcache flush for region .META.,,1. Size 324.0
    [junit] 2007-10-26 19:40:13,732 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:874): 
Snapshotted memcache for region .META.,,1 with sequence id 12 and entries 3
    [junit] 2007-10-26 19:40:13,828 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HStore.flushCacheHelper(HStore.java:505): Added 
-4998637-20-92-37102-4963-105-387449-87-58-9095-7951/info/1820563321398423624 
with sequence id 12 and size 476.0
    [junit] 2007-10-26 19:40:13,828 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:917): Finished 
memcache flush for region .META.,,1 in 96ms
    [junit] 2007-10-26 19:40:13,829 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HStore.close(HStore.java:420): closed 
-4998637-20-92-37102-4963-105-387449-87-58-9095-7951/info
    [junit] 2007-10-26 19:40:13,829 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.close(HRegion.java:400): closed .META.,,1
    [junit] 2007-10-26 19:40:13,829 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.closeAllRegions(HRegionServer.java:971): 
closing region testInsertSelectDelete,,1193427603174
    [junit] 2007-10-26 19:40:13,829 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:845): Started 
memcache flush for region testInsertSelectDelete,,1193427603174. Size 294.0
    [junit] 2007-10-26 19:40:13,830 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:874): 
Snapshotted memcache for region testInsertSelectDelete,,1193427603174 with 
sequence id 13 and entries 3
    [junit] 2007-10-26 19:40:13,950 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HStore.flushCacheHelper(HStore.java:505): Added 
-1069215-243798-755-104-40-11-936895-3938-50-42-83-49/testInsertSelectDelete/116270370282227639
 with sequence id 13 and size 371.0
    [junit] 2007-10-26 19:40:13,951 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.internalFlushcache(HRegion.java:917): Finished 
memcache flush for region testInsertSelectDelete,,1193427603174 in 122ms
    [junit] 2007-10-26 19:40:13,951 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HStore.close(HStore.java:420): closed 
-1069215-243798-755-104-40-11-936895-3938-50-42-83-49/testInsertSelectDelete
    [junit] 2007-10-26 19:40:13,952 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegion.close(HRegion.java:400): closed 
testInsertSelectDelete,,1193427603174
    [junit] 2007-10-26 19:40:13,952 DEBUG [RegionServer:0] 
org.apache.hadoop.hbase.HLog.close(HLog.java:382): closing log writer in 
/hbase/log_140.211.11.75_-5165221264789697286_58444
    [junit] 2007-10-26 19:40:13,971 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.run(HRegionServer.java:603): telling 
master that region server is shutting down at: 140.211.11.75:58444
    [junit] 2007-10-26 19:40:13,973 DEBUG [IPC Server handler 4 on 60000] 
org.apache.hadoop.hbase.HMaster.regionServerReport(HMaster.java:1304): Region 
server 140.211.11.75:58444: MSG_REPORT_EXITING -- cancelling lease
    [junit] 2007-10-26 19:40:13,973 INFO  [IPC Server handler 4 on 60000] 
org.apache.hadoop.hbase.HMaster.cancelLease(HMaster.java:1426): Cancelling 
lease for 140.211.11.75:58444
    [junit] 2007-10-26 19:40:13,975 INFO  [HMaster.metaScanner] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): HMaster.metaScanner exiting
    [junit] 2007-10-26 19:40:13,975 INFO  [HMaster] 
org.apache.hadoop.hbase.Leases.close(Leases.java:109): HMaster closing leases
    [junit] 2007-10-26 19:40:13,975 INFO  [HMaster.rootScanner] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): HMaster.rootScanner exiting
    [junit] 2007-10-26 19:40:13,980 INFO  [HMaster.leaseChecker] 
org.apache.hadoop.hbase.Chore.run(Chore.java:62): HMaster.leaseChecker exiting
    [junit] 2007-10-26 19:40:13,981 INFO  [HMaster] 
org.apache.hadoop.hbase.Leases.close(Leases.java:123): HMaster closed leases
    [junit] 2007-10-26 19:40:13,982 INFO  [HMaster] 
org.apache.hadoop.hbase.HMaster.run(HMaster.java:1151): HMaster main thread 
exiting
    [junit] 2007-10-26 19:41:14,208 WARN  [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.run(HRegionServer.java:607): Failed to 
send exiting message to master: 
    [junit] java.net.SocketTimeoutException: timed out waiting for rpc response
    [junit]     at org.apache.hadoop.ipc.Client.call(Client.java:484)
    [junit]     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:184)
    [junit]     at $Proxy6.regionServerReport(Unknown Source)
    [junit]     at 
org.apache.hadoop.hbase.HRegionServer.run(HRegionServer.java:605)
    [junit]     at java.lang.Thread.run(Thread.java:595)
    [junit] 2007-10-26 19:41:14,211 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.run(HRegionServer.java:610): stopping 
server at: 140.211.11.75:58444
    [junit] 2007-10-26 19:41:14,211 INFO  [RegionServer:0] 
org.apache.hadoop.hbase.HRegionServer.run(HRegionServer.java:615): 
RegionServer:0 exiting
    [junit] 2007-10-26 19:41:14,241 INFO  [main] 
org.apache.hadoop.hbase.LocalHBaseCluster.shutdown(LocalHBaseCluster.java:225): 
Shutdown HMaster 1 region server(s)
    [junit] 2007-10-26 19:41:14,243 INFO  [main] 
org.apache.hadoop.hbase.MiniHBaseCluster.shutdown(MiniHBaseCluster.java:233): 
Shutting down Mini DFS 
    [junit] 2007-10-26 19:41:14,904 WARN  [DataNode: 
[/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data3,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data4]]
  org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:596): 
java.io.InterruptedIOException
    [junit]     at java.net.SocketOutputStream.socketWrite0(Native Method)
    [junit]     at 
java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
    [junit]     at 
java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    [junit]     at 
org.apache.hadoop.ipc.Client$Connection$2.write(Client.java:192)
    [junit]     at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    [junit]     at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    [junit]     at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    [junit]     at 
org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:327)
    [junit]     at org.apache.hadoop.ipc.Client.call(Client.java:474)
    [junit]     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:184)
    [junit]     at org.apache.hadoop.dfs.$Proxy1.sendHeartbeat(Unknown Source)
    [junit]     at 
org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:520)
    [junit]     at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1494)
    [junit]     at java.lang.Thread.run(Thread.java:595)

    [junit] 2007-10-26 19:41:14,904 WARN  [IPC Server handler 2 on 58422] 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:617): IPC Server handler 2 
on 58422, call sendHeartbeat(127.0.0.1:50011, 1474099400704, 12649, 
138402438420, 0, 0) from 127.0.0.1:58533: output error
    [junit] java.nio.channels.ClosedChannelException
    [junit]     at 
sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:125)
    [junit]     at 
sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:294)
    [junit]     at 
org.apache.hadoop.ipc.SocketChannelOutputStream.flushBuffer(SocketChannelOutputStream.java:108)
    [junit]     at 
org.apache.hadoop.ipc.SocketChannelOutputStream.write(SocketChannelOutputStream.java:89)
    [junit]     at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    [junit]     at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    [junit]     at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    [junit]     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:615)
    [junit] 2007-10-26 19:41:15,223 WARN  [IPC Server handler 4 on 58422] 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:617): IPC Server handler 4 
on 58422, call sendHeartbeat(127.0.0.1:50010, 1474099400704, 12649, 
138408140420, 0, 0) from 127.0.0.1:58533: output error
    [junit] java.nio.channels.ClosedChannelException
    [junit]     at 
sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:125)
    [junit]     at 
sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:294)
    [junit]     at 
org.apache.hadoop.ipc.SocketChannelOutputStream.flushBuffer(SocketChannelOutputStream.java:108)
    [junit]     at 
org.apache.hadoop.ipc.SocketChannelOutputStream.write(SocketChannelOutputStream.java:89)
    [junit]     at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    [junit]     at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    [junit]     at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    [junit]     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:615)
    [junit] 2007-10-26 19:41:15,538 ERROR [DataNode: 
[/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data1,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data2]]
  org.apache.hadoop.dfs.DataNode.run(DataNode.java:1496): Exception: 
java.lang.reflect.UndeclaredThrowableException
    [junit]     at org.apache.hadoop.dfs.$Proxy1.sendHeartbeat(Unknown Source)
    [junit]     at 
org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:520)
    [junit]     at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1494)
    [junit]     at java.lang.Thread.run(Thread.java:595)
    [junit] Caused by: java.lang.InterruptedException
    [junit]     at java.lang.Object.wait(Native Method)
    [junit]     at org.apache.hadoop.ipc.Client.call(Client.java:477)
    [junit]     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:184)
    [junit]     ... 4 more

    [junit] 2007-10-26 19:41:15,577 WARN  [EMAIL PROTECTED] 
org.apache.hadoop.dfs.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:186):
 PendingReplicationMonitor thread received exception. 
java.lang.InterruptedException: sleep interrupted
    [junit] 2007-10-26 19:41:15,933 INFO  [main] 
org.apache.hadoop.hbase.MiniHBaseCluster.shutdown(MiniHBaseCluster.java:237): 
Shutting down FileSystem
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 130.783 sec
    [junit] Running org.apache.hadoop.hbase.util.TestBase64

    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.559 sec
    [junit] Running org.apache.hadoop.hbase.util.TestKeying
    [junit] Original url http://abc:[EMAIL 
PROTECTED]/index.html?query=something#middle, Transformed url 
r:http://abc:[EMAIL PROTECTED]/index.html?query=something#middle
    [junit] Original url file:///usr/bin/java, Transformed url 
file:///usr/bin/java
    [junit] Original url dns:www.powerset.com, Transformed url 
dns:www.powerset.com
    [junit] Original url dns://dns.powerset.com/www.powerset.com, Transformed 
url r:dns://com.powerset.dns/www.powerset.com
    [junit] Original url http://one.two.three/index.html, Transformed url 
r:http://three.two.one/index.html
    [junit] Original url https://one.two.three:9443/index.html, Transformed url 
r:https://three.two.one:9443/index.html
    [junit] Original url ftp://one.two.three/index.html, Transformed url 
r:ftp://three.two.one/index.html
    [junit] Original url filename, Transformed url filename
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.069 sec
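
The TestKeying lines above show URL keys built by reversing the dot-separated parts of the host name and prefixing "r:", while inputs with no host (file: URLs, opaque dns: URIs, bare file names) pass through unchanged. Below is a small illustrative reimplementation of that transform, not the actual class under test; the class and method names are made up:

    import java.net.URI;
    import java.util.regex.Pattern;

    public class ReversedHostKeyExample {
      /** e.g. http://one.two.three/index.html -> r:http://three.two.one/index.html;
       *  inputs without a parsable host are returned as-is. */
      public static String toKey(String url) {
        URI uri = URI.create(url);
        String host = uri.getHost();
        if (host == null) {
          return url;                      // file:, dns:..., plain file names
        }
        String[] parts = host.split("\\.");
        StringBuilder reversed = new StringBuilder();
        for (int i = parts.length - 1; i >= 0; i--) {
          reversed.append(parts[i]);
          if (i > 0) reversed.append('.');
        }
        // Swap the original host for its reversed form and mark the key.
        return "r:" + url.replaceFirst(Pattern.quote(host), reversed.toString());
      }

      public static void main(String[] args) {
        System.out.println(toKey("http://one.two.three/index.html"));  // reversed
        System.out.println(toKey("file:///usr/bin/java"));             // unchanged
      }
    }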
    [junit] Running org.onelab.test.TestFilter
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.463 sec

BUILD FAILED
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build.xml:523: The following error occurred while executing this line:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/src/contrib/build.xml:23: The following error occurred while executing this line:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/src/contrib/build-contrib.xml:205: Tests failed!

Total time: 64 minutes 38 seconds
Recording fingerprints
Publishing Javadoc
Recording test results
Updating HADOOP-2100
Updating HADOOP-1210
Updating HADOOP-2103
Updating HADOOP-2088
Updating HADOOP-2096
Updating HADOOP-2098
Updating HADOOP-2046
Updating HADOOP-1848
Updating HADOOP-1642
Updating HADOOP-1622
Updating HADOOP-1857
