I am working on a project that uses HBase, and I want my unit tests to use
HBaseTestingUtility so they can run without a separate cluster, but I am
running into a lot of problems with it. Everything compiles fine, but any
unit test that tries to spin up a mini-cluster blows up.
This is the basic test I am trying to run:
import java.io.IOException;

import static org.junit.Assert.assertTrue;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class PersiaAdminTest {

    private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

    @BeforeClass
    public static void setUpBeforeClass() throws Exception {
        // Spin up an in-process HBase cluster (1 region server) for all tests.
        TEST_UTIL.startMiniCluster(1);
    }

    @AfterClass
    public static void tearDownAfterClass() throws Exception {
        TEST_UTIL.shutdownMiniCluster();
    }

    @Test
    public void testPersiaAdmin() throws IOException {
        // Placeholder body: the failure happens in setUpBeforeClass,
        // before any test method runs.
        assertTrue(true);
    }
}
My pom.xml has the following dependencies:
hadoop-core-1.0.0
hadoop-test-1.0.0
hbase-0.92.1
hbase-0.92.1 (test-jar)
junit-4.10
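In pom.xml terms, the dependency block looks roughly like this (a sketch reconstructed from the versions above and the artifact paths visible in the log; the test scopes shown are the usual ones for test-only artifacts):

```xml
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.0.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-test</artifactId>
    <version>1.0.0</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>0.92.1</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>0.92.1</version>
    <type>test-jar</type>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.10</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```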
And this is the error message I am getting. (Note that when I download the
0.92.1 release tarball, the unit tests in it that use HBaseTestingUtility
don't have this issue.)
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.154 sec <<< FAILURE!
edu.ucsc.srl.persia.PersiaAdminTest  Time elapsed: 6.155 sec  <<< ERROR!
java.io.IOException: Shutting down
    at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:203)
    at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:76)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:523)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:503)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:451)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:438)
    at edu.ucsc.srl.persia.PersiaAdminTest.setUpBeforeClass(PersiaAdminTest.java:22)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
    at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
    at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:107)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:68)
Caused by: java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
    at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:157)
    at org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:198)
    at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:148)
    at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:185)
    ... 28 more
Caused by: java.lang.NoSuchMethodError: org.codehaus.jackson.type.JavaType.isFullyTyped()Z
    at org.codehaus.jackson.map.type.ArrayType.<init>(ArrayType.java:36)
    at org.codehaus.jackson.map.type.ArrayType.construct(ArrayType.java:48)
    at org.codehaus.jackson.map.type.ArrayType.addCommonTypes(ArrayType.java:78)
    at org.codehaus.jackson.map.type.TypeFactory.<init>(TypeFactory.java:43)
    at org.codehaus.jackson.map.type.TypeFactory.<clinit>(TypeFactory.java:15)
    at org.codehaus.jackson.map.ObjectMapper.<clinit>(ObjectMapper.java:42)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.<clinit>(WritableRpcEngine.java:258)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:245)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:55)
    at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:401)
    at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:390)
    at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:244)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
    at org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:154)
    ... 31 more
=====
Below is the full log. It looks like the problem is that the cluster isn't
starting up, but I still can't tell what's going wrong.
=====
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running edu.ucsc.srl.persia.PersiaAdminTest
12/04/04 12:22:09 INFO hbase.HBaseTestingUtility: Starting up minicluster with
1 master(s) and 1 regionserver(s) and 1 datanode(s)
12/04/04 12:22:09 INFO util.GSet: VM type = 64-bit
12/04/04 12:22:09 INFO util.GSet: 2% max memory = 53.51125 MB
12/04/04 12:22:09 INFO util.GSet: capacity = 2^23 = 8388608 entries
12/04/04 12:22:09 INFO util.GSet: recommended=8388608, actual=8388608
12/04/04 12:22:09 INFO namenode.FSNamesystem: fsOwner=nwatkins
12/04/04 12:22:09 INFO namenode.FSNamesystem: supergroup=supergroup
12/04/04 12:22:09 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/04/04 12:22:09 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/04/04 12:22:09 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/04/04 12:22:09 INFO namenode.NameNode: Caching file names occuring more than
10 times
12/04/04 12:22:09 INFO common.Storage: Image file of size 114 saved in 0
seconds.
12/04/04 12:22:10 INFO common.Storage: Storage directory
/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/name1
has been successfully formatted.
12/04/04 12:22:10 INFO common.Storage: Image file of size 114 saved in 0
seconds.
12/04/04 12:22:10 INFO common.Storage: Storage directory
/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/name2
has been successfully formatted.
12/04/04 12:22:10 WARN impl.MetricsSystemImpl: Metrics system not started:
Cannot locate configuration: tried hadoop-metrics2-namenode.properties,
hadoop-metrics2.properties
12/04/04 12:22:10 INFO util.GSet: VM type = 64-bit
12/04/04 12:22:10 INFO util.GSet: 2% max memory = 53.51125 MB
12/04/04 12:22:10 INFO util.GSet: capacity = 2^23 = 8388608 entries
12/04/04 12:22:10 INFO util.GSet: recommended=8388608, actual=8388608
12/04/04 12:22:10 INFO namenode.FSNamesystem: fsOwner=nwatkins
12/04/04 12:22:10 INFO namenode.FSNamesystem: supergroup=supergroup
12/04/04 12:22:10 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/04/04 12:22:10 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/04/04 12:22:10 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/04/04 12:22:10 INFO namenode.FSNamesystem: Registered FSNamesystemStateMBean
and NameNodeMXBean
12/04/04 12:22:10 INFO namenode.NameNode: Caching file names occuring more than
10 times
12/04/04 12:22:10 INFO common.Storage: Number of files = 1
12/04/04 12:22:10 INFO common.Storage: Number of files under construction = 0
12/04/04 12:22:10 INFO common.Storage: Image file of size 114 loaded in 0
seconds.
12/04/04 12:22:10 INFO common.Storage: Edits file
/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/name1/current/edits
of size 4 edits # 0 loaded in 0 seconds.
12/04/04 12:22:10 INFO common.Storage: Image file of size 114 saved in 0
seconds.
12/04/04 12:22:11 INFO common.Storage: Image file of size 114 saved in 0
seconds.
12/04/04 12:22:11 INFO common.Storage: Image file of size 114 saved in 0
seconds.
12/04/04 12:22:11 INFO common.Storage: Image file of size 114 saved in 0
seconds.
12/04/04 12:22:12 INFO namenode.NameCache: initialized with 0 entries 0 lookups
12/04/04 12:22:12 INFO namenode.FSNamesystem: Finished loading FSImage in 1564
msecs
12/04/04 12:22:12 INFO namenode.FSNamesystem: Total number of blocks = 0
12/04/04 12:22:12 INFO namenode.FSNamesystem: Number of invalid blocks = 0
12/04/04 12:22:12 INFO namenode.FSNamesystem: Number of under-replicated blocks
= 0
12/04/04 12:22:12 INFO namenode.FSNamesystem: Number of over-replicated blocks
= 0
12/04/04 12:22:12 INFO hdfs.StateChange: STATE* Safe mode termination scan for
invalid, over- and under-replicated blocks completed in 16 msec
12/04/04 12:22:12 INFO hdfs.StateChange: STATE* Leaving safe mode after 1 secs.
12/04/04 12:22:12 INFO hdfs.StateChange: STATE* Network topology has 0 racks
and 0 datanodes
12/04/04 12:22:12 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0
blocks
12/04/04 12:22:12 INFO util.HostsFileReader: Refreshing hosts (include/exclude)
list
12/04/04 12:22:12 INFO namenode.FSNamesystem: ReplicateQueue
QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
12/04/04 12:22:12 INFO namenode.FSNamesystem: ReplicateQueue
QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing
time, 0 msec clock time, 1 cycles
12/04/04 12:22:12 INFO namenode.FSNamesystem: InvalidateQueue
QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
12/04/04 12:22:12 INFO namenode.FSNamesystem: InvalidateQueue
QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing
time, 0 msec clock time, 1 cycles
12/04/04 12:22:12 INFO ipc.Server: Starting SocketReader
12/04/04 12:22:12 INFO namenode.NameNode: Namenode up at:
localhost/127.0.0.1:40677
12/04/04 12:22:12 INFO mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
12/04/04 12:22:12 INFO http.HttpServer: Added global filtersafety
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/04/04 12:22:12 INFO http.HttpServer: dfs.webhdfs.enabled = false
12/04/04 12:22:12 INFO http.HttpServer: Port returned by
webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the
listener on 0
12/04/04 12:22:12 INFO http.HttpServer: listener.getLocalPort() returned 38803
webServer.getConnectors()[0].getLocalPort() returned 38803
12/04/04 12:22:12 INFO http.HttpServer: Jetty bound to port 38803
12/04/04 12:22:12 INFO mortbay.log: jetty-6.1.26
12/04/04 12:22:12 INFO mortbay.log: Extract
jar:file:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar!/webapps/hdfs
to /tmp/Jetty_localhost_38803_hdfs____n6pj6o/webapp
12/04/04 12:22:12 INFO mortbay.log: Started
SelectChannelConnector@localhost:38803
12/04/04 12:22:12 INFO namenode.NameNode: Web-server up at: localhost:38803
12/04/04 12:22:12 INFO ipc.Server: IPC Server Responder: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server listener on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 0 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 1 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 2 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 3 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 4 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 5 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 6 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 8 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 9 on 40677: starting
12/04/04 12:22:12 INFO ipc.Server: IPC Server handler 7 on 40677: starting
Starting DataNode 0 with dfs.data.dir:
/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data1,/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data2
12/04/04 12:22:12 WARN impl.MetricsSystemImpl: Metrics system not started:
Cannot locate configuration: tried hadoop-metrics2-datanode.properties,
hadoop-metrics2.properties
12/04/04 12:22:12 INFO common.Storage: Storage directory
/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data1
is not formatted.
12/04/04 12:22:12 INFO common.Storage: Formatting ...
12/04/04 12:22:13 INFO common.Storage: Storage directory
/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data2
is not formatted.
12/04/04 12:22:13 INFO common.Storage: Formatting ...
12/04/04 12:22:13 INFO datanode.DataNode: Registered FSDatasetStatusMBean
12/04/04 12:22:13 INFO datanode.DataNode: Opened info server at 46482
12/04/04 12:22:13 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s
12/04/04 12:22:13 INFO http.HttpServer: Added global filtersafety
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/04/04 12:22:13 INFO datanode.DataNode: dfs.webhdfs.enabled = false
12/04/04 12:22:13 INFO http.HttpServer: Port returned by
webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the
listener on 0
12/04/04 12:22:13 INFO http.HttpServer: listener.getLocalPort() returned 56944
webServer.getConnectors()[0].getLocalPort() returned 56944
12/04/04 12:22:13 INFO http.HttpServer: Jetty bound to port 56944
12/04/04 12:22:13 INFO mortbay.log: jetty-6.1.26
12/04/04 12:22:13 INFO mortbay.log: Extract
jar:file:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar!/webapps/datanode
to /tmp/Jetty_localhost_56944_datanode____ec2djz/webapp
12/04/04 12:22:13 INFO mortbay.log: Started
SelectChannelConnector@localhost:56944
12/04/04 12:22:13 INFO ipc.Server: Starting SocketReader
12/04/04 12:22:13 INFO datanode.DataNode: dnRegistration =
DatanodeRegistration(127.0.0.1:46482, storageID=, infoPort=56944, ipcPort=57074)
12/04/04 12:22:13 INFO hdfs.StateChange: BLOCK* NameSystem.registerDatanode:
node registration from 127.0.0.1:46482 storage
DS-878374159-127.0.0.1-46482-1333567333542
12/04/04 12:22:13 INFO net.NetworkTopology: Adding a new node:
/default-rack/127.0.0.1:46482
12/04/04 12:22:13 INFO datanode.DataNode: New storage id
DS-878374159-127.0.0.1-46482-1333567333542 is assigned to data-node
127.0.0.1:46482
12/04/04 12:22:13 INFO datanode.DataNode: Finished generating blocks being
written report for 2 volumes in 0 seconds
12/04/04 12:22:13 INFO hdfs.StateChange: *BLOCK*
NameNode.blocksBeingWrittenReport: from 127.0.0.1:46482 0 blocks
12/04/04 12:22:13 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:46482,
storageID=DS-878374159-127.0.0.1-46482-1333567333542, infoPort=56944,
ipcPort=57074)In DataNode.run, data =
FSDataset{dirpath='/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data1/current,/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data2/current'}
12/04/04 12:22:13 INFO ipc.Server: IPC Server Responder: starting
12/04/04 12:22:13 INFO ipc.Server: IPC Server listener on 57074: starting
12/04/04 12:22:13 INFO datanode.DataNode: using BLOCKREPORT_INTERVAL of
3600000msec Initial delay: 0msec
12/04/04 12:22:13 INFO ipc.Server: IPC Server handler 2 on 57074: starting
12/04/04 12:22:13 INFO ipc.Server: IPC Server handler 0 on 57074: starting
12/04/04 12:22:13 INFO ipc.Server: IPC Server handler 1 on 57074: starting
12/04/04 12:22:13 INFO hdfs.StateChange: *BLOCK* NameSystem.processReport: from
127.0.0.1:46482, blocks: 0, processing time: 2 msecs
12/04/04 12:22:13 INFO datanode.DataNode: BlockReport of 0 blocks took 4 msec
to generate and 14 msecs for RPC and NN processing
12/04/04 12:22:13 INFO datanode.DataNode: Starting Periodic block scanner.
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:host.name=localhost
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:java.version=1.6.0_23
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:java.vendor=Sun Microsystems Inc.
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:java.home=/usr/lib/jvm/java-6-openjdk/jre
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:java.class.path=/home/nwatkins/projects/ssdb-hadoop/persia/target/test-classes:/home/nwatkins/projects/ssdb-hadoop/persia/target/classes:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar:/home/nwatkins/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/nwatkins/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/nwatkins/.m2/repository/commons-httpclient/commons-httpclient/3.0.1/commons-httpclient-3.0.1.jar:/home/nwatkins/.m2/repository/commons-codec/commons-codec/1.4/commons-codec-1.4.jar:/home/nwatkins/.m2/repository/org/apache/commons/commons-math/2.1/commons-math-2.1.jar:/home/nwatkins/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/nwatkins/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/nwatkins/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/nwatkins/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/nwatkins/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/nwatkins/.m2/repository/commons-net/commons-net/1.4.1/commons-net-1.4.1.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/servlet-api/2.5-20081211/servlet-api-2.5-20081211.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/nwatkins/.m2/repository/tomcat/jasper-runtime/5.5.12/jasper-runtime-5.5.12.jar:/home/nwatkins/.m2/repository/tomcat/jasper-compiler/5.5.12/jasper-compiler-5.5.12.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/nwatkins/.m2/repository/ant/ant/1.6.5/ant-1.6.5.jar:/home/nwatkins/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/nwatkins/.m2/repository/net/java/dev/jets3t/jets3t/0.7.1/jets3t-0.7.1.jar:/home/nwatkins/.m2/repository/net/sf/kosmosfs/kfs/0.3/kfs-0.3.jar:/home/nwatkins/.m2/repository/hsqldb/hsqldb/1.8.0.10/hsqldb-1.8.0.10.jar:/home/nwatkins/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar:/home/nwatkins/.m2/repository/org/eclipse/jdt/core/3.1.1/core-3.1.1.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.0.1/jackson-mapper-asl-1.0.1.jar:/home/nwatkins/.m2/repository/org/apache/hbase/hbase/0.92.1/hbase-0.92.1.jar:/home/nwatkins/.m2/repository/com/google/guava/guava/r09/guava-r09.jar:/home/nwatkins/.m2/repository/com/github/stephenc/high-scale-lib/high-scale-lib/1.1.1/high-scale-lib-1.1.1.jar:/home/nwatkins/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/nwatkins/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/nwatkins/.m2/repository/log4j/log4j/1.2.16/log4j-1.2.16.jar:/home/nwatkins/.m2/repository/org/apache/avro/avro/1.5.3/avro-1.5.3.jar:/home/nwatkins/.m2/repository/org/xerial/snappy/snappy-java/1.0.3.2/snappy-java-1.0.3.2.jar:/home/nwatkins/.m2/repository/org/apache/avro/avro-ipc/1.5.3/avro-ipc-1.5.3.jar:/home/nwatkins/.m2/repository/org/jboss/netty/netty/3.2.4.Final/netty-3.2.4.Final.jar:/home/nwatkins/.m2/repository/org/apache/velocity/velocity/1.7/velocity-1.7.jar:/home/nwatkins/.m2/repository/org/apache/zookeeper/zookeeper/3.4.3/zookeeper-3.4.3.jar:/home/nwatkins/.m2/repository/org/apache/thrift/libthrift/0.7.0/libthrift-0.7.0.jar:/home/nwatkins/.m2/repository/javax/servlet/servlet-api/2.5/servlet-api-2.5.jar:/home/nwatkins/.m2/repository/org/apache/httpcomponents/httpclient/4.0.1/httpclient-4.0.1.jar:/home/nwatkins/.m2/repository/org/apache/httpcomponents/httpcore/4.0.1/httpcore-4.0.1.jar:/home/nwatkins/.m2/repository/org/jruby/jruby-complete/1.6.5/jruby-complete-1.6.5.jar:/home/nwatkins/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.5.5/jackson-core-asl-1.5.5.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.5.5/jackson-jaxrs-1.5.5.jar:/home/nwatkins/.m2/repository/org/codehaus/jackson/jackson-xc/1.5.5/jackson-xc-1.5.5.jar:/home/nwatkins/.m2/repository/org/slf4j/slf4j-api/1.5.8/slf4j-api-1.5.8.jar:/home/nwatkins/.m2/repository/org/slf4j/slf4j-log4j12/1.5.8/slf4j-log4j12-1.5.8.jar:/home/nwatkins/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/nwatkins/.m2/repository/com/google/protobuf/protobuf-java/2.4.0a/protobuf-java-2.4.0a.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-core/1.4/jersey-core-1.4.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-json/1.4/jersey-json-1.4.jar:/home/nwatkins/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/nwatkins/.m2/repository/com/sun/xml/bind/jaxb-impl/2.1.12/jaxb-impl-2.1.12.jar:/home/nwatkins/.m2/repository/com/sun/jersey/jersey-server/1.4/jersey-server-1.4.jar:/home/nwatkins/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/nwatkins/.m2/repository/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar:/home/nwatkins/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar:/home/nwatkins/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/nwatkins/.m2/repository/junit/junit/4.10/junit-4.10.jar:/home/nwatkins/.m2/repository/org/hamcrest/hamcrest-core/1.1/hamcrest-core-1.1.jar:/home/nwatkins/.m2/repository/org/apache/hbase/hbase/0.92.1/hbase-0.92.1-tests.jar:/home/nwatkins/.m2/repository/org/apache/hadoop/hadoop-test/1.0.0/hadoop-test-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftplet-api/1.0.0/ftplet-api-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/mina/mina-core/2.0.0-M5/mina-core-2.0.0-M5.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftpserver-core/1.0.0/ftpserver-core-1.0.0.jar:/home/nwatkins/.m2/repository/org/apache/ftpserver/ftpserver-deprecated/1.0.0-M2/ftpserver-deprecated-1.0.0-M2.jar:
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:java.library.path=/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64:/home/nwatkins/projects/openmpi_java/install/lib::/usr/java/packages/lib/amd64:/usr/lib/jni:/lib:/usr/lib
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:java.io.tmpdir=/tmp
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:java.compiler=<NA>
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:os.name=Linux
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server environment:os.arch=amd64
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:os.version=3.0.0-12-server
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:user.name=nwatkins
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:user.home=/home/nwatkins
12/04/04 12:22:13 INFO server.ZooKeeperServer: Server
environment:user.dir=/home/nwatkins/projects/ssdb-hadoop/persia
12/04/04 12:22:13 INFO server.ZooKeeperServer: Created server with tickTime
2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir
/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/zookeeper_0/version-2
snapdir
/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/zookeeper_0/version-2
12/04/04 12:22:13 INFO server.NIOServerCnxnFactory: binding to port
0.0.0.0/0.0.0.0:21818
12/04/04 12:22:13 INFO zookeeper.MiniZooKeeperCluster: Failed binding ZK Server
to client port: 21818
12/04/04 12:22:13 INFO server.NIOServerCnxnFactory: binding to port
0.0.0.0/0.0.0.0:21819
12/04/04 12:22:13 INFO persistence.FileTxnSnapLog: Snapshotting: 0x0 to
/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/zookeeper_0/version-2/snapshot.0
12/04/04 12:22:13 INFO server.NIOServerCnxnFactory: Accepted socket connection
from /127.0.0.1:40465
12/04/04 12:22:13 INFO server.NIOServerCnxn: Processing stat command from
/127.0.0.1:40465
12/04/04 12:22:13 INFO server.NIOServerCnxn: Stat command output
12/04/04 12:22:13 INFO server.NIOServerCnxn: Closed socket connection for
client /127.0.0.1:40465 (no session established for client)
12/04/04 12:22:13 INFO zookeeper.MiniZooKeeperCluster: Started MiniZK Cluster
and connect 1 ZK server on client port: 21819
12/04/04 12:22:13 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock:
/user/nwatkins/hbase/hbase.version. blk_-6717307937846793406_1001
12/04/04 12:22:13 INFO datanode.DataNode: Receiving block
blk_-6717307937846793406_1001 src: /127.0.0.1:54917 dest: /127.0.0.1:46482
12/04/04 12:22:13 INFO DataNode.clienttrace: src: /127.0.0.1:54917, dest:
/127.0.0.1:46482, bytes: 3, op: HDFS_WRITE, cliID: DFSClient_-1337989537,
offset: 0, srvID: DS-878374159-127.0.0.1-46482-1333567333542, blockid:
blk_-6717307937846793406_1001, duration: 1127639
12/04/04 12:22:13 INFO datanode.DataNode: PacketResponder 0 for block
blk_-6717307937846793406_1001 terminating
12/04/04 12:22:13 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: 127.0.0.1:46482 is added to blk_-6717307937846793406_1001
size 3
12/04/04 12:22:14 INFO hdfs.StateChange: Removing lease on file
/user/nwatkins/hbase/hbase.version from client DFSClient_-1337989537
12/04/04 12:22:14 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file
/user/nwatkins/hbase/hbase.version is closed by DFSClient_-1337989537
12/04/04 12:22:14 ERROR hbase.MiniHBaseCluster: Error starting cluster
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
Caused by: java.lang.NoSuchMethodError: org.codehaus.jackson.type.JavaType.isFullyTyped()Z
    [stack trace trimmed; it is identical to the surefire error output shown earlier]
12/04/04 12:22:14 INFO hbase.HBaseTestingUtility: Shutting down minicluster
12/04/04 12:22:14 INFO server.NIOServerCnxnFactory: NIOServerCnxn factory exited run method
12/04/04 12:22:14 INFO server.ZooKeeperServer: shutting down
12/04/04 12:22:14 INFO server.SessionTrackerImpl: Shutting down
12/04/04 12:22:14 INFO server.PrepRequestProcessor: Shutting down
12/04/04 12:22:14 INFO server.SyncRequestProcessor: Shutting down
12/04/04 12:22:14 INFO server.PrepRequestProcessor: PrepRequestProcessor exited loop!
12/04/04 12:22:14 INFO server.SyncRequestProcessor: SyncRequestProcessor exited!
12/04/04 12:22:14 INFO server.FinalRequestProcessor: shutdown of request processor complete
12/04/04 12:22:14 INFO zookeeper.MiniZooKeeperCluster: Shutdown MiniZK cluster with all ZK servers
Shutting down the Mini HDFS Cluster
Shutting down DataNode 0
12/04/04 12:22:14 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
12/04/04 12:22:14 INFO ipc.Server: Stopping server on 57074
12/04/04 12:22:14 INFO ipc.Server: IPC Server handler 0 on 57074: exiting
12/04/04 12:22:14 INFO ipc.Server: IPC Server handler 2 on 57074: exiting
12/04/04 12:22:14 INFO ipc.Server: Stopping IPC Server listener on 57074
12/04/04 12:22:14 INFO ipc.Server: IPC Server handler 1 on 57074: exiting
12/04/04 12:22:14 INFO ipc.Server: Stopping IPC Server Responder
12/04/04 12:22:14 INFO metrics.RpcInstrumentation: shut down
12/04/04 12:22:14 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
12/04/04 12:22:14 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:46482, storageID=DS-878374159-127.0.0.1-46482-1333567333542, infoPort=56944, ipcPort=57074):DataXceiveServer:java.nio.channels.AsynchronousCloseException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:170)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:102)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
    at java.lang.Thread.run(Thread.java:679)
12/04/04 12:22:14 INFO datanode.DataNode: Exiting DataXceiveServer
12/04/04 12:22:14 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
12/04/04 12:22:15 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
12/04/04 12:22:15 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:46482, storageID=DS-878374159-127.0.0.1-46482-1333567333542, infoPort=56944, ipcPort=57074):Finishing DataNode in:
FSDataset{dirpath='/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data1/current,/home/nwatkins/projects/ssdb-hadoop/persia/target/test-data/b2e46a19-cdef-4e2c-ae76-c23669e495b9/dfscluster_52609245-10c2-4269-83ad-ce1258a5abc6/dfs/data/data2/current'}
12/04/04 12:22:15 WARN util.MBeans: Hadoop:service=DataNode,name=DataNodeInfo
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=DataNodeInfo
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1118)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:540)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.unRegisterMXBean(DataNode.java:513)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:726)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1442)
    at java.lang.Thread.run(Thread.java:679)
12/04/04 12:22:15 INFO ipc.Server: Stopping server on 57074
12/04/04 12:22:15 INFO metrics.RpcInstrumentation: shut down
12/04/04 12:22:15 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
12/04/04 12:22:15 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
12/04/04 12:22:15 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
12/04/04 12:22:15 WARN util.MBeans: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId1046794048
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId1046794048
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1118)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:433)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:421)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:540)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:1934)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:788)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:566)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:550)
    at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:350)
    at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:576)
    at edu.ucsc.srl.persia.PersiaAdminTest.tearDownAfterClass(PersiaAdminTest.java:27)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:36)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
    at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
    at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:107)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:68)
12/04/04 12:22:15 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
12/04/04 12:22:15 INFO mortbay.log: Stopped SelectChannelConnector@localhost:0
12/04/04 12:22:15 INFO namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:679)
12/04/04 12:22:15 INFO namenode.FSNamesystem: Number of transactions: 6 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 4 SyncTimes(ms): 96 44
12/04/04 12:22:15 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException. java.lang.InterruptedException: sleep interrupted
12/04/04 12:22:15 INFO ipc.Server: Stopping server on 40677
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 0 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 3 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 1 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 4 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 2 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 8 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 9 on 40677: exiting
12/04/04 12:22:15 INFO metrics.RpcInstrumentation: shut down
12/04/04 12:22:15 INFO ipc.Server: Stopping IPC Server listener on 40677
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 6 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 5 on 40677: exiting
12/04/04 12:22:15 INFO ipc.Server: Stopping IPC Server Responder
12/04/04 12:22:15 INFO ipc.Server: IPC Server handler 7 on 40677: exiting
12/04/04 12:22:15 INFO hbase.HBaseTestingUtility: Minicluster is down