[
https://issues.apache.org/jira/browse/HBASE-13294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372477#comment-14372477
]
Srikanth Srungarapu commented on HBASE-13294:
---------------------------------------------
Looking into the cause of the test failures on jenkins, there appears to be a
problem spinning up the mini cluster; the test runs fine on my local box.
Re-attaching the patch for another try.
{code}
Failed
org.apache.hadoop.hbase.security.access.TestAccessController.org.apache.hadoop.hbase.security.access.TestAccessController
Failing for the past 1 build (Since Failed#13344 )
Took 6 ms.
Error Message
Shutting down
Stacktrace
java.io.IOException: Shutting down
at
org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:225)
at
org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:437)
at
org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:224)
at
org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:93)
at
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1008)
at
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:968)
at
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:842)
at
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:836)
at
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:780)
at
org.apache.hadoop.hbase.security.access.TestAccessController.setupBeforeClass(TestAccessController.java:191)
Standard Output
Formatting using clusterid: testClusterID
Process Thread Dump: Thread dump because: Master not initialized after 200000ms
seconds
70 active threads
Thread 287 (IPC Parameter Sending Thread #1):
State: TIMED_WAITING
Blocked count: 0
Waited count: 66
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:744)
Thread 263 (HBase-Metrics2-1):
State: TIMED_WAITING
Blocked count: 0
Waited count: 79
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:744)
Thread 189 (AsyncRpcChannel-timer-pool1-t1):
State: TIMED_WAITING
Blocked count: 0
Waited count: 1986
Stack:
java.lang.Thread.sleep(Native Method)
io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:461)
io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:360)
java.lang.Thread.run(Thread.java:744)
Thread 190 (AsyncRpcChannel-pool2-t1):
State: RUNNABLE
Blocked count: 0
Waited count: 0
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
java.lang.Thread.run(Thread.java:744)
Thread 251 (snapshot-hfile-cleaner-cache-refresher):
State: TIMED_WAITING
Blocked count: 0
Waited count: 1
Stack:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
Thread 250 (snapshot-log-cleaner-cache-refresher):
State: TIMED_WAITING
Blocked count: 0
Waited count: 1
Stack:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
Thread 248 (proserpina:38530.activeMasterManager-EventThread):
State: WAITING
Blocked count: 0
Waited count: 3
Waiting on
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@707a9713
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
Thread 247 (proserpina:38530.activeMasterManager-SendThread(localhost:55239)):
State: RUNNABLE
Blocked count: 2
Waited count: 0
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
Thread 155 (org.apache.hadoop.hdfs.PeerCache@37f28e66):
State: TIMED_WAITING
Blocked count: 0
Waited count: 67
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:244)
org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:41)
org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:119)
java.lang.Thread.run(Thread.java:744)
Thread 140 (Thread-102):
State: TIMED_WAITING
Blocked count: 0
Waited count: 1
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.hbase.master.HMaster$InitializationMonitor.run(HMaster.java:197)
java.lang.Thread.run(Thread.java:744)
Thread 93 (ProcessThread(sid:0 cport:-1):):
State: WAITING
Blocked count: 0
Waited count: 158
Waiting on
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3c0bc4b5
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:120)
Thread 92 (SyncThread:0):
State: WAITING
Blocked count: 0
Waited count: 128
Waiting on
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@289f65df
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:127)
Thread 91 (SessionTracker):
State: TIMED_WAITING
Blocked count: 0
Waited count: 103
Stack:
java.lang.Object.wait(Native Method)
org.apache.zookeeper.server.SessionTrackerImpl.run(SessionTrackerImpl.java:146)
Thread 90 (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:55239):
State: RUNNABLE
Blocked count: 3
Waited count: 0
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:178)
java.lang.Thread.run(Thread.java:744)
Thread 87
(refreshUsed-/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/dfscluster_d80b1d01-c629-432f-832e-eb812b7810fa/dfs/data/data2/current/BP-27234177-67.195.81.189-1426903856367):
State: TIMED_WAITING
Blocked count: 0
Waited count: 1
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
java.lang.Thread.run(Thread.java:744)
Thread 86
(refreshUsed-/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/dfscluster_d80b1d01-c629-432f-832e-eb812b7810fa/dfs/data/data1/current/BP-27234177-67.195.81.189-1426903856367):
State: TIMED_WAITING
Blocked count: 0
Waited count: 1
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
java.lang.Thread.run(Thread.java:744)
Thread 80 (java.util.concurrent.ThreadPoolExecutor$Worker@4b35baa7[State = -1,
empty queue]):
State: TIMED_WAITING
Blocked count: 0
Waited count: 1
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:744)
Thread 77 (Thread-62):
State: TIMED_WAITING
Blocked count: 0
Waited count: 41
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:78)
java.lang.Thread.run(Thread.java:744)
Thread 71 (IPC Client (1743776012) connection to localhost/127.0.0.1:42912 from
jenkins):
State: TIMED_WAITING
Blocked count: 83
Waited count: 83
Stack:
java.lang.Object.wait(Native Method)
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:904)
org.apache.hadoop.ipc.Client$Connection.run(Client.java:949)
Thread 70 (IPC Server handler 9 on 34438):
State: TIMED_WAITING
Blocked count: 0
Waited count: 252
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 69 (IPC Server handler 8 on 34438):
State: TIMED_WAITING
Blocked count: 0
Waited count: 247
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 68 (IPC Server handler 7 on 34438):
State: TIMED_WAITING
Blocked count: 0
Waited count: 214
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 67 (IPC Server handler 6 on 34438):
State: TIMED_WAITING
Blocked count: 0
Waited count: 246
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 66 (IPC Server handler 5 on 34438):
State: TIMED_WAITING
Blocked count: 0
Waited count: 256
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 65 (IPC Server handler 4 on 34438):
State: TIMED_WAITING
Blocked count: 0
Waited count: 229
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 64 (IPC Server handler 3 on 34438):
State: TIMED_WAITING
Blocked count: 0
Waited count: 255
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 63 (IPC Server handler 2 on 34438):
State: TIMED_WAITING
Blocked count: 0
Waited count: 242
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 62 (IPC Server handler 1 on 34438):
State: TIMED_WAITING
Blocked count: 0
Waited count: 236
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 61 (IPC Server handler 0 on 34438):
State: TIMED_WAITING
Blocked count: 0
Waited count: 238
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 56 (IPC Server listener on 34438):
State: RUNNABLE
Blocked count: 0
Waited count: 0
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
org.apache.hadoop.ipc.Server$Listener.run(Server.java:668)
Thread 59 (IPC Server Responder):
State: RUNNABLE
Blocked count: 0
Waited count: 0
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:835)
org.apache.hadoop.ipc.Server$Responder.run(Server.java:818)
Thread 51 (org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@599556aa):
State: RUNNABLE
Blocked count: 0
Waited count: 0
Stack:
sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100)
org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:134)
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:137)
java.lang.Thread.run(Thread.java:744)
Thread 60 (DataNode:
[[[DISK]file:/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/dfscluster_d80b1d01-c629-432f-832e-eb812b7810fa/dfs/data/data1/,
[DISK]file:/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/dfscluster_d80b1d01-c629-432f-832e-eb812b7810fa/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:42912):
State: TIMED_WAITING
Blocked count: 128
Waited count: 245
Stack:
java.lang.Object.wait(Native Method)
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:720)
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:834)
java.lang.Thread.run(Thread.java:744)
Thread 58 (IPC Server idle connection scanner for port 34438):
State: TIMED_WAITING
Blocked count: 1
Waited count: 22
Stack:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
Thread 57 (Socket Reader #1 for port 34438):
State: RUNNABLE
Blocked count: 0
Waited count: 0
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:616)
org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:595)
Thread 55 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@3c65666c):
State: TIMED_WAITING
Blocked count: 0
Waited count: 411
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
java.lang.Thread.run(Thread.java:744)
Thread 54 (Timer-1):
State: TIMED_WAITING
Blocked count: 0
Waited count: 7
Stack:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
Thread 53 (743692891@qtp-424890708-1 - Acceptor0
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37974):
State: RUNNABLE
Blocked count: 1
Waited count: 1
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Thread 52 (1114171041@qtp-424890708-0):
State: TIMED_WAITING
Blocked count: 0
Waited count: 4
Stack:
java.lang.Object.wait(Native Method)
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
Thread 46 (CacheReplicationMonitor(1637185326)):
State: TIMED_WAITING
Blocked count: 0
Waited count: 7
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2176)
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181)
Thread 45
(org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@7cc8b288):
State: TIMED_WAITING
Blocked count: 0
Waited count: 1
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:4726)
java.lang.Thread.run(Thread.java:744)
Thread 44
(org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1bbbfb71):
State: TIMED_WAITING
Blocked count: 0
Waited count: 42
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:4687)
java.lang.Thread.run(Thread.java:744)
Thread 43
(org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@57b3fe3c):
State: TIMED_WAITING
Blocked count: 0
Waited count: 104
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:392)
java.lang.Thread.run(Thread.java:744)
Thread 42 (IPC Server handler 9 on 42912):
State: TIMED_WAITING
Blocked count: 0
Waited count: 239
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 41 (IPC Server handler 8 on 42912):
State: TIMED_WAITING
Blocked count: 0
Waited count: 213
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 40 (IPC Server handler 7 on 42912):
State: TIMED_WAITING
Blocked count: 0
Waited count: 260
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 39 (IPC Server handler 6 on 42912):
State: TIMED_WAITING
Blocked count: 0
Waited count: 213
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 38 (IPC Server handler 5 on 42912):
State: TIMED_WAITING
Blocked count: 1
Waited count: 214
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 37 (IPC Server handler 4 on 42912):
State: TIMED_WAITING
Blocked count: 0
Waited count: 213
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 36 (IPC Server handler 3 on 42912):
State: TIMED_WAITING
Blocked count: 0
Waited count: 213
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 35 (IPC Server handler 2 on 42912):
State: TIMED_WAITING
Blocked count: 0
Waited count: 214
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 34 (IPC Server handler 1 on 42912):
State: TIMED_WAITING
Blocked count: 0
Waited count: 214
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 33 (IPC Server handler 0 on 42912):
State: TIMED_WAITING
Blocked count: 12
Waited count: 243
Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
org.apache.hadoop.ipc.Server$Handler.run(Server.java:1985)
Thread 23 (IPC Server listener on 42912):
State: RUNNABLE
Blocked count: 0
Waited count: 0
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
org.apache.hadoop.ipc.Server$Listener.run(Server.java:668)
Thread 26 (IPC Server Responder):
State: RUNNABLE
Blocked count: 0
Waited count: 0
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
org.apache.hadoop.ipc.Server$Responder.doRunLoop(Server.java:835)
org.apache.hadoop.ipc.Server$Responder.run(Server.java:818)
Thread 21
(org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@6b7dcfcd):
State: TIMED_WAITING
Blocked count: 0
Waited count: 70
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3397)
java.lang.Thread.run(Thread.java:744)
Thread 22
(org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@10d5d2dd):
State: TIMED_WAITING
Blocked count: 0
Waited count: 42
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:319)
java.lang.Thread.run(Thread.java:744)
Thread 32
(org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor@10c8414a):
State: TIMED_WAITING
Blocked count: 0
Waited count: 70
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor.run(DecommissionManager.java:76)
java.lang.Thread.run(Thread.java:744)
Thread 31
(org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@4ba6ee31):
State: TIMED_WAITING
Blocked count: 0
Waited count: 1
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221)
java.lang.Thread.run(Thread.java:744)
Thread 27 (org.apache.hadoop.util.JvmPauseMonitor$Monitor@3301378d):
State: TIMED_WAITING
Blocked count: 1
Waited count: 412
Stack:
java.lang.Thread.sleep(Native Method)
org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:180)
java.lang.Thread.run(Thread.java:744)
Thread 25 (IPC Server idle connection scanner for port 42912):
State: TIMED_WAITING
Blocked count: 1
Waited count: 22
Stack:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
Thread 24 (Socket Reader #1 for port 42912):
State: RUNNABLE
Blocked count: 1
Waited count: 0
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:616)
org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:595)
Thread 20 (Timer-0):
State: TIMED_WAITING
Blocked count: 1
Waited count: 8
Stack:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
Thread 19 (108697051@qtp-1424309400-1 - Acceptor0
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46463):
State: RUNNABLE
Blocked count: 1
Waited count: 1
Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498)
org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Thread 18 (2111490614@qtp-1424309400-0):
State: TIMED_WAITING
Blocked count: 0
Waited count: 4
Stack:
java.lang.Object.wait(Native Method)
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
Thread 16 (Timer for 'NameNode' metrics system):
State: TIMED_WAITING
Blocked count: 0
Waited count: 21
Stack:
java.lang.Object.wait(Native Method)
java.util.TimerThread.mainLoop(Timer.java:552)
java.util.TimerThread.run(Timer.java:505)
Thread 4 (Signal Dispatcher):
State: RUNNABLE
Blocked count: 0
Waited count: 0
Stack:
Thread 3 (Finalizer):
State: WAITING
Blocked count: 34
Waited count: 3
Waiting on java.lang.ref.ReferenceQueue$Lock@7ce28a19
Stack:
java.lang.Object.wait(Native Method)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:189)
Thread 2 (Reference Handler):
State: WAITING
Blocked count: 5
Waited count: 4
Waiting on java.lang.ref.Reference$Lock@7cd8af5c
Stack:
java.lang.Object.wait(Native Method)
java.lang.Object.wait(Object.java:503)
java.lang.ref.Reference$ReferenceHandler.run(Reference.java:133)
Thread 1 (main):
State: RUNNABLE
Blocked count: 93
Waited count: 506
Stack:
sun.management.ThreadImpl.getThreadInfo1(Native Method)
sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:174)
sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:139)
org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:165)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.hadoop.hbase.util.Threads.printThreadInfo(Threads.java:302)
org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:223)
org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:437)
org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:224)
org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:93)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1008)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:968)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:842)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:836)
org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:780)
org.apache.hadoop.hbase.security.access.TestAccessController.setupBeforeClass(TestAccessController.java:191)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Standard Error
2015-03-21 02:10:53,725 WARN [main] util.NativeCodeLoader(62): Unable to load
native-hadoop library for your platform... using builtin-java classes where
applicable
2015-03-21 02:10:54,071 INFO [main] hbase.HBaseTestingUtility(946): Starting
up minicluster with 1 master(s) and 1 regionserver(s) and 1 datanode(s)
2015-03-21 02:10:54,156 INFO [main] hbase.HBaseTestingUtility(436): Created
new mini-cluster data directory:
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/dfscluster_d80b1d01-c629-432f-832e-eb812b7810fa,
deleteOnExit=true
2015-03-21 02:10:54,158 INFO [main] hbase.HBaseTestingUtility(676): Setting
test.cache.data to
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/cache_data
in system properties and HBase conf
2015-03-21 02:10:54,159 INFO [main] hbase.HBaseTestingUtility(676): Setting
hadoop.tmp.dir to
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/hadoop_tmp
in system properties and HBase conf
2015-03-21 02:10:54,160 INFO [main] hbase.HBaseTestingUtility(676): Setting
hadoop.log.dir to
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/hadoop_logs
in system properties and HBase conf
2015-03-21 02:10:54,161 INFO [main] hbase.HBaseTestingUtility(676): Setting
mapreduce.cluster.local.dir to
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/mapred_local
in system properties and HBase conf
2015-03-21 02:10:54,162 INFO [main] hbase.HBaseTestingUtility(676): Setting
mapreduce.cluster.temp.dir to
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/mapred_temp
in system properties and HBase conf
2015-03-21 02:10:54,167 INFO [main] hbase.HBaseTestingUtility(667): read short
circuit is OFF
2015-03-21 02:10:54,519 DEBUG [main] fs.HFileSystem(221): The file system is
not a DistributedFileSystem. Skipping on block location reordering
2015-03-21 02:10:57,128 WARN [main] impl.MetricsConfig(124): Cannot locate
configuration: tried
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2015-03-21 02:10:57,371 INFO [main] log.Slf4jLog(67): Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-03-21 02:10:57,457 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2015-03-21 02:10:57,501 INFO [main] log.Slf4jLog(67): Extract
jar:file:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.5.1/hadoop-hdfs-2.5.1-tests.jar!/webapps/hdfs
to /tmp/Jetty_localhost_46463_hdfs____.smgs9z/webapp
2015-03-21 02:10:57,923 WARN [main] server.AuthenticationFilter(166):
'signature.secret' configuration not set, using a random value as secret
2015-03-21 02:10:57,997 INFO [main] log.Slf4jLog(67): Started
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46463
2015-03-21 02:10:59,540 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2015-03-21 02:10:59,562 INFO [main] log.Slf4jLog(67): Extract
jar:file:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.5.1/hadoop-hdfs-2.5.1-tests.jar!/webapps/datanode
to /tmp/Jetty_localhost_37974_datanode____72s1cp/webapp
2015-03-21 02:10:59,764 INFO [main] log.Slf4jLog(67): Started
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37974
2015-03-21 02:11:01,200 INFO [IPC Server handler 7 on 42912]
blockmanagement.BlockManager(1753): BLOCK* processReport: from storage
DS-20704df4-ae22-40c7-8dcd-b411672f6980 node DatanodeRegistration(127.0.0.1,
datanodeUuid=bf0194c5-f830-4a3c-a9e4-a3e5516d3196, infoPort=37974,
ipcPort=34438, storageInfo=lv=-55;cid=testClusterID;nsid=1057254946;c=0),
blocks: 0, hasStaleStorages: true, processing time: 3 msecs
2015-03-21 02:11:01,200 INFO [IPC Server handler 7 on 42912]
blockmanagement.BlockManager(1753): BLOCK* processReport: from storage
DS-a2d75141-06d2-4e32-a0f7-999efc9c94fc node DatanodeRegistration(127.0.0.1,
datanodeUuid=bf0194c5-f830-4a3c-a9e4-a3e5516d3196, infoPort=37974,
ipcPort=34438, storageInfo=lv=-55;cid=testClusterID;nsid=1057254946;c=0),
blocks: 0, hasStaleStorages: false, processing time: 0 msecs
2015-03-21 02:11:01,374 INFO [main] zookeeper.MiniZooKeeperCluster(273):
Started MiniZooKeeperCluster and ran successful 'stat' on client port=55239
2015-03-21 02:11:01,409 INFO [main] fs.HFileSystem(252): Added intercepting
call to namenode#getBlockLocations so can do block reordering using class class
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2015-03-21 02:11:01,418 INFO [main] fs.HFileSystem(252): Added intercepting
call to namenode#getBlockLocations so can do block reordering using class class
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2015-03-21 02:11:02,029 INFO [IPC Server handler 8 on 42912]
blockmanagement.BlockManager(2383): BLOCK* addStoredBlock: blockMap updated:
127.0.0.1:54374 is added to
blk_1073741825_1001{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-20704df4-ae22-40c7-8dcd-b411672f6980:NORMAL|RBW]]}
size 0
2015-03-21 02:11:02,074 INFO [main] util.FSUtils(712): Created version file at
hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6
with version=8
2015-03-21 02:11:02,354 INFO [main] client.ConnectionUtils(105):
master/proserpina.apache.org/67.195.81.189:0 server-side HConnection retries=350
2015-03-21 02:11:02,728 INFO [main] ipc.SimpleRpcScheduler(128): Using
deadline as user call queue, count=1
2015-03-21 02:11:02,829 INFO [main] ipc.RpcServer$Listener(567):
master/proserpina.apache.org/67.195.81.189:0: started 10 reader(s).
2015-03-21 02:11:02,870 INFO [main] fs.HFileSystem(252): Added intercepting
call to namenode#getBlockLocations so can do block reordering using class class
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2015-03-21 02:11:03,102 INFO [main] zookeeper.RecoverableZooKeeper(121):
Process identifier=master:38530 connecting to ZooKeeper ensemble=localhost:55239
2015-03-21 02:11:03,938 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): master:385300x0, quorum=localhost:55239,
baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected,
path=null
2015-03-21 02:11:03,940 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(472): master:38530-0x14c3a1908520000 connected
2015-03-21 02:11:04,142 DEBUG [main] zookeeper.ZKUtil(488):
master:38530-0x14c3a1908520000, quorum=localhost:55239, baseZNode=/hbase Set
watcher on znode that does not yet exist, /hbase/master
2015-03-21 02:11:04,144 DEBUG [main] zookeeper.ZKUtil(488):
master:38530-0x14c3a1908520000, quorum=localhost:55239, baseZNode=/hbase Set
watcher on znode that does not yet exist, /hbase/running
2015-03-21 02:11:04,162 INFO [RpcServer.responder]
ipc.RpcServer$Responder(886): RpcServer.responder: starting
2015-03-21 02:11:04,186 DEBUG [main] ipc.RpcExecutor(114): B.default Start
Handler index=0 queue=0
2015-03-21 02:11:04,187 INFO [RpcServer.listener,port=38530]
ipc.RpcServer$Listener(698): RpcServer.listener,port=38530: starting
2015-03-21 02:11:04,187 DEBUG [main] ipc.RpcExecutor(114): B.default Start
Handler index=1 queue=0
2015-03-21 02:11:04,206 DEBUG [main] ipc.RpcExecutor(114): B.default Start
Handler index=2 queue=0
2015-03-21 02:11:04,209 DEBUG [main] ipc.RpcExecutor(114): B.default Start
Handler index=3 queue=0
2015-03-21 02:11:04,209 DEBUG [main] ipc.RpcExecutor(114): B.default Start
Handler index=4 queue=0
2015-03-21 02:11:04,212 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=0 queue=0
2015-03-21 02:11:04,214 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=1 queue=1
2015-03-21 02:11:04,215 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=2 queue=0
2015-03-21 02:11:04,215 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=3 queue=1
2015-03-21 02:11:04,216 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=4 queue=0
2015-03-21 02:11:04,233 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=5 queue=1
2015-03-21 02:11:04,239 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=6 queue=0
2015-03-21 02:11:04,244 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=7 queue=1
2015-03-21 02:11:04,250 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=8 queue=0
2015-03-21 02:11:04,255 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=9 queue=1
2015-03-21 02:11:04,256 DEBUG [main] ipc.RpcExecutor(114): Replication Start
Handler index=0 queue=0
2015-03-21 02:11:04,257 DEBUG [main] ipc.RpcExecutor(114): Replication Start
Handler index=1 queue=0
2015-03-21 02:11:04,257 DEBUG [main] ipc.RpcExecutor(114): Replication Start
Handler index=2 queue=0
2015-03-21 02:11:04,268 INFO [main] http.HttpRequestLog(69): Http request log
for http.requests.master is not defined
2015-03-21 02:11:04,273 INFO [main] http.HttpServer(819): Added global filter
'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2015-03-21 02:11:04,276 INFO [main] http.HttpServer(797): Added filter
static_user_filter
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
to context master
2015-03-21 02:11:04,277 INFO [main] http.HttpServer(804): Added filter
static_user_filter
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
to context logs
2015-03-21 02:11:04,277 INFO [main] http.HttpServer(804): Added filter
static_user_filter
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
to context static
2015-03-21 02:11:04,292 INFO [main] http.HttpServer(1011): Jetty bound to port
56508
2015-03-21 02:11:04,292 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2015-03-21 02:11:04,364 INFO [main] log.Slf4jLog(67): Started
SelectChannelConnector@0.0.0.0:56508
2015-03-21 02:11:04,383 INFO [main] master.HMaster(338):
hbase.rootdir=hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6,
hbase.cluster.distributed=false
2015-03-21 02:11:04,400 INFO [main] master.HMaster(1423): Adding backup master
ZNode /hbase/backup-masters/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:04,432 DEBUG [main] zookeeper.ZKUtil(486):
master:38530-0x14c3a1908520000, quorum=localhost:55239, baseZNode=/hbase Set
watcher on existing
znode=/hbase/backup-masters/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:04,499 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): master:38530-0x14c3a1908520000,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeCreated, state=SyncConnected, path=/hbase/master
2015-03-21 02:11:04,512 DEBUG [proserpina:38530.activeMasterManager]
zookeeper.ZKUtil(486): master:38530-0x14c3a1908520000, quorum=localhost:55239,
baseZNode=/hbase Set watcher on existing znode=/hbase/master
2015-03-21 02:11:04,515 INFO [proserpina:38530.activeMasterManager]
master.ActiveMasterManager(170): Deleting ZNode for
/hbase/backup-masters/proserpina.apache.org,38530,1426903862856 from backup
master directory
2015-03-21 02:11:04,533 DEBUG [main-EventThread] zookeeper.ZKUtil(486):
master:38530-0x14c3a1908520000, quorum=localhost:55239, baseZNode=/hbase Set
watcher on existing znode=/hbase/master
2015-03-21 02:11:04,533 DEBUG [main-EventThread]
master.ActiveMasterManager(126): A master is now available
2015-03-21 02:11:04,541 WARN [proserpina:38530.activeMasterManager]
hbase.ZNodeClearer(58): Environment variable HBASE_ZNODE_FILE not set; znodes
will not be cleared on crash by start scripts (Longer MTTR!)
2015-03-21 02:11:04,542 INFO [proserpina:38530.activeMasterManager]
master.ActiveMasterManager(179): Registered Active
Master=proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:04,542 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): master:38530-0x14c3a1908520000,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeDeleted, state=SyncConnected,
path=/hbase/backup-masters/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:04,657 INFO [main] client.ConnectionUtils(105):
regionserver/proserpina.apache.org/67.195.81.189:0 server-side HConnection
retries=350
2015-03-21 02:11:04,658 INFO [main] ipc.SimpleRpcScheduler(128): Using
deadline as user call queue, count=1
2015-03-21 02:11:04,681 INFO [main] ipc.RpcServer$Listener(567):
regionserver/proserpina.apache.org/67.195.81.189:0: started 10 reader(s).
2015-03-21 02:11:04,692 INFO [main] fs.HFileSystem(252): Added intercepting
call to namenode#getBlockLocations so can do block reordering using class class
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2015-03-21 02:11:04,696 INFO [main] zookeeper.RecoverableZooKeeper(121):
Process identifier=regionserver:51019 connecting to ZooKeeper
ensemble=localhost:55239
2015-03-21 02:11:04,712 DEBUG [main] zookeeper.ZKUtil(486):
regionserver:510190x0, quorum=localhost:55239, baseZNode=/hbase Set watcher on
existing znode=/hbase/master
2015-03-21 02:11:04,716 DEBUG [main] zookeeper.ZKUtil(488):
regionserver:510190x0, quorum=localhost:55239, baseZNode=/hbase Set watcher on
znode that does not yet exist, /hbase/running
2015-03-21 02:11:04,721 INFO [RpcServer.responder]
ipc.RpcServer$Responder(886): RpcServer.responder: starting
2015-03-21 02:11:04,716 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): regionserver:510190x0, quorum=localhost:55239,
baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected,
path=null
2015-03-21 02:11:04,724 INFO [RpcServer.listener,port=51019]
ipc.RpcServer$Listener(698): RpcServer.listener,port=51019: starting
2015-03-21 02:11:04,723 DEBUG [main] ipc.RpcExecutor(114): B.default Start
Handler index=0 queue=0
2015-03-21 02:11:04,732 DEBUG [main] ipc.RpcExecutor(114): B.default Start
Handler index=1 queue=0
2015-03-21 02:11:04,734 DEBUG [main] ipc.RpcExecutor(114): B.default Start
Handler index=2 queue=0
2015-03-21 02:11:04,739 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(472): regionserver:51019-0x14c3a1908520001 connected
2015-03-21 02:11:04,739 DEBUG [main] ipc.RpcExecutor(114): B.default Start
Handler index=3 queue=0
2015-03-21 02:11:04,740 DEBUG [main] ipc.RpcExecutor(114): B.default Start
Handler index=4 queue=0
2015-03-21 02:11:04,741 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=0 queue=0
2015-03-21 02:11:04,741 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=1 queue=1
2015-03-21 02:11:04,743 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=2 queue=0
2015-03-21 02:11:04,744 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=3 queue=1
2015-03-21 02:11:04,744 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=4 queue=0
2015-03-21 02:11:04,745 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=5 queue=1
2015-03-21 02:11:04,745 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=6 queue=0
2015-03-21 02:11:04,746 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=7 queue=1
2015-03-21 02:11:04,754 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=8 queue=0
2015-03-21 02:11:04,755 DEBUG [main] ipc.RpcExecutor(114): Priority Start
Handler index=9 queue=1
2015-03-21 02:11:04,755 DEBUG [main] ipc.RpcExecutor(114): Replication Start
Handler index=0 queue=0
2015-03-21 02:11:04,756 DEBUG [main] ipc.RpcExecutor(114): Replication Start
Handler index=1 queue=0
2015-03-21 02:11:04,756 DEBUG [main] ipc.RpcExecutor(114): Replication Start
Handler index=2 queue=0
2015-03-21 02:11:04,757 INFO [main] http.HttpRequestLog(69): Http request log
for http.requests.regionserver is not defined
2015-03-21 02:11:04,759 INFO [main] http.HttpServer(819): Added global filter
'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2015-03-21 02:11:04,760 INFO [main] http.HttpServer(797): Added filter
static_user_filter
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
to context regionserver
2015-03-21 02:11:04,760 INFO [main] http.HttpServer(804): Added filter
static_user_filter
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
to context logs
2015-03-21 02:11:04,761 INFO [main] http.HttpServer(804): Added filter
static_user_filter
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter)
to context static
2015-03-21 02:11:04,761 INFO [IPC Server handler 5 on 42912]
blockmanagement.BlockManager(2383): BLOCK* addStoredBlock: blockMap updated:
127.0.0.1:54374 is added to blk_1073741826_1002{blockUCState=COMMITTED,
primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a2d75141-06d2-4e32-a0f7-999efc9c94fc:NORMAL|RBW]]}
size 42
2015-03-21 02:11:04,764 INFO [main] http.HttpServer(1011): Jetty bound to port
57648
2015-03-21 02:11:04,764 INFO [main] log.Slf4jLog(67): jetty-6.1.26
2015-03-21 02:11:04,801 INFO [main] log.Slf4jLog(67): Started
SelectChannelConnector@0.0.0.0:57648
2015-03-21 02:11:04,889 INFO [RS:0;proserpina:51019]
zookeeper.RecoverableZooKeeper(121): Process identifier=hconnection-0x65ff3336
connecting to ZooKeeper ensemble=localhost:55239
2015-03-21 02:11:04,890 INFO [M:0;proserpina:38530]
zookeeper.RecoverableZooKeeper(121): Process identifier=hconnection-0x557b528a
connecting to ZooKeeper ensemble=localhost:55239
2015-03-21 02:11:04,911 DEBUG [RS:0;proserpina:51019-EventThread]
zookeeper.ZooKeeperWatcher(388): hconnection-0x65ff33360x0,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event, type=None,
state=SyncConnected, path=null
2015-03-21 02:11:04,913 DEBUG [RS:0;proserpina:51019-EventThread]
zookeeper.ZooKeeperWatcher(472): hconnection-0x65ff3336-0x14c3a1908520002
connected
2015-03-21 02:11:04,944 DEBUG [M:0;proserpina:38530-EventThread]
zookeeper.ZooKeeperWatcher(388): hconnection-0x557b528a0x0,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event, type=None,
state=SyncConnected, path=null
2015-03-21 02:11:04,945 INFO [RS:0;proserpina:51019]
client.ZooKeeperRegistry(104): ClusterId read in ZooKeeper is null
2015-03-21 02:11:04,945 DEBUG [RS:0;proserpina:51019]
client.ConnectionManager$HConnectionImplementation(847): clusterid came back
null, using default default-cluster
2015-03-21 02:11:04,946 DEBUG [M:0;proserpina:38530-EventThread]
zookeeper.ZooKeeperWatcher(472): hconnection-0x557b528a-0x14c3a1908520003
connected
2015-03-21 02:11:04,946 INFO [M:0;proserpina:38530]
client.ZooKeeperRegistry(104): ClusterId read in ZooKeeper is null
2015-03-21 02:11:04,948 DEBUG [M:0;proserpina:38530]
client.ConnectionManager$HConnectionImplementation(847): clusterid came back
null, using default default-cluster
2015-03-21 02:11:05,020 DEBUG [RS:0;proserpina:51019]
ipc.AbstractRpcClient(105):
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f8429a7, compressor=null,
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000,
writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0,
fallbackAllowed=true, bind address=null
2015-03-21 02:11:05,020 DEBUG [M:0;proserpina:38530]
ipc.AbstractRpcClient(105):
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c0ba133, compressor=null,
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000,
writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0,
fallbackAllowed=true, bind address=null
2015-03-21 02:11:05,021 DEBUG [RS:0;proserpina:51019] ipc.AsyncRpcClient(152):
Starting async Hbase RPC client
2015-03-21 02:11:05,021 DEBUG [M:0;proserpina:38530] ipc.AsyncRpcClient(152):
Starting async Hbase RPC client
2015-03-21 02:11:05,021 DEBUG [RS:0;proserpina:51019] ipc.AsyncRpcClient(131):
Create NioEventLoopGroup with maxThreads = 0
2015-03-21 02:11:05,106 DEBUG [RS:0;proserpina:51019] ipc.AsyncRpcClient(106):
Create global event loop group NioEventLoopGroup
2015-03-21 02:11:05,107 DEBUG [M:0;proserpina:38530] ipc.AsyncRpcClient(163):
Use global event loop group NioEventLoopGroup
2015-03-21 02:11:05,107 DEBUG [RS:0;proserpina:51019] ipc.AsyncRpcClient(163):
Use global event loop group NioEventLoopGroup
2015-03-21 02:11:05,173 DEBUG [proserpina:38530.activeMasterManager]
util.FSUtils(864): Created cluster ID file at
hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/hbase.id
with ID: 9500695b-0220-487d-9d12-037e9eed7bc7
2015-03-21 02:11:05,236 INFO [proserpina:38530.activeMasterManager]
master.MasterFileSystem(507): BOOTSTRAP: creating hbase:meta region
2015-03-21 02:11:05,241 INFO [proserpina:38530.activeMasterManager]
regionserver.HRegion(6001): creating HRegion hbase:meta HTD == 'hbase:meta',
{TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 =>
'|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'},
{NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE',
REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL =>
'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS
=> 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'},
{NAME => 'table', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE',
REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL =>
'FOREVER', MIN_VERSIONS => '0', CACHE_DATA_IN_L1 => 'true', KEEP_DELETED_CELLS
=> 'FALSE', BLOCKSIZE => '8192', IN_MEMORY => 'true', BLOCKCACHE => 'true'}
RootDir =
hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6
Table name == hbase:meta
2015-03-21 02:11:05,303 INFO [IPC Server handler 8 on 42912]
blockmanagement.BlockManager(2383): BLOCK* addStoredBlock: blockMap updated:
127.0.0.1:54374 is added to
blk_1073741827_1003{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-20704df4-ae22-40c7-8dcd-b411672f6980:NORMAL|RBW]]}
size 0
2015-03-21 02:11:05,312 DEBUG [proserpina:38530.activeMasterManager]
regionserver.HRegion(770): Instantiated hbase:meta,,1.1588230740
2015-03-21 02:11:05,469 INFO [StoreOpener-1588230740-1]
hfile.CacheConfig(470): Allocating LruBlockCache size=995.60 MB, blockSize=64 KB
2015-03-21 02:11:05,498 INFO [StoreOpener-1588230740-1]
hfile.CacheConfig(260): blockCache=LruBlockCache{blockCount=0,
currentSize=1071760, freeSize=1042890544, maxSize=1043962304, heapSize=1071760,
minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5,
singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=false,
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false,
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2015-03-21 02:11:05,510 INFO [StoreOpener-1588230740-1]
compactions.CompactionConfiguration(97): size [134217728, 9223372036854775807);
files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point
2684354560; major period 604800000, major jitter 0.500000, min locality to
compact 0.000000
2015-03-21 02:11:05,519 DEBUG [StoreOpener-1588230740-1]
regionserver.HRegionFileSystem(201): No StoreFiles for:
hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/data/hbase/meta/1588230740/info
2015-03-21 02:11:05,531 DEBUG [StoreOpener-1588230740-1]
util.ChecksumType$2(70): org.apache.hadoop.util.PureJavaCrc32 available
2015-03-21 02:11:05,531 DEBUG [StoreOpener-1588230740-1]
util.ChecksumType$3(113): org.apache.hadoop.util.PureJavaCrc32C available
2015-03-21 02:11:05,539 INFO [StoreOpener-1588230740-1]
hfile.CacheConfig(260): blockCache=LruBlockCache{blockCount=0,
currentSize=1071760, freeSize=1042890544, maxSize=1043962304, heapSize=1071760,
minSize=991764160, minFactor=0.95, multiSize=495882080, multiFactor=0.5,
singleSize=247941040, singleFactor=0.25}, cacheDataOnRead=true,
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false,
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2015-03-21 02:11:05,542 INFO [StoreOpener-1588230740-1]
compactions.CompactionConfiguration(97): size [134217728, 9223372036854775807);
files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point
2684354560; major period 604800000, major jitter 0.500000, min locality to
compact 0.000000
2015-03-21 02:11:05,547 DEBUG [StoreOpener-1588230740-1]
regionserver.HRegionFileSystem(201): No StoreFiles for:
hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/data/hbase/meta/1588230740/table
2015-03-21 02:11:05,558 DEBUG [proserpina:38530.activeMasterManager]
regionserver.HRegion(3910): Found 0 recovered edits file(s) under
hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/data/hbase/meta/1588230740
2015-03-21 02:11:05,581 DEBUG [proserpina:38530.activeMasterManager]
regionserver.FlushLargeStoresPolicy(56):
hbase.hregion.percolumnfamilyflush.size.lower.bound is not specified, use
global config(16777216) instead
2015-03-21 02:11:05,593 DEBUG [proserpina:38530.activeMasterManager]
wal.WALSplitter(712): Wrote region
seqId=hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/data/hbase/meta/1588230740/recovered.edits/2.seqid
to file, newSeqId=2, maxSeqId=0
2015-03-21 02:11:05,593 INFO [proserpina:38530.activeMasterManager]
regionserver.HRegion(898): Onlined 1588230740; next sequenceid=2
2015-03-21 02:11:05,594 DEBUG [proserpina:38530.activeMasterManager]
regionserver.HRegion(1357): Closing hbase:meta,,1.1588230740: disabling
compactions & flushes
2015-03-21 02:11:05,594 DEBUG [proserpina:38530.activeMasterManager]
regionserver.HRegion(1384): Updates disabled for region hbase:meta,,1.1588230740
2015-03-21 02:11:05,609 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1]
regionserver.HStore(879): Closed info
2015-03-21 02:11:05,610 INFO [StoreCloserThread-hbase:meta,,1.1588230740-1]
regionserver.HStore(879): Closed table
2015-03-21 02:11:05,610 INFO [proserpina:38530.activeMasterManager]
regionserver.HRegion(1476): Closed hbase:meta,,1.1588230740
2015-03-21 02:11:05,677 INFO [IPC Server handler 7 on 42912]
blockmanagement.BlockManager(2383): BLOCK* addStoredBlock: blockMap updated:
127.0.0.1:54374 is added to
blk_1073741828_1004{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a2d75141-06d2-4e32-a0f7-999efc9c94fc:NORMAL|RBW]]}
size 0
2015-03-21 02:11:05,693 DEBUG [proserpina:38530.activeMasterManager]
util.FSTableDescriptors(718): Wrote descriptor into:
hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/data/hbase/meta/.tabledesc/.tableinfo.0000000001
2015-03-21 02:11:05,720 INFO [proserpina:38530.activeMasterManager]
fs.HFileSystem(252): Added intercepting call to namenode#getBlockLocations so
can do block reordering using class class
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2015-03-21 02:11:05,728 DEBUG [proserpina:38530.activeMasterManager]
coordination.ZKSplitLogManagerCoordination(886): Distributed log replay=true
2015-03-21 02:11:05,736 INFO [proserpina:38530.activeMasterManager]
coordination.ZKSplitLogManagerCoordination(599): Found 0 orphan tasks and 0
rescan nodes
2015-03-21 02:11:05,738 DEBUG [proserpina:38530.activeMasterManager]
util.FSTableDescriptors(222): Fetching table descriptors from the filesystem.
2015-03-21 02:11:05,772 INFO [proserpina:38530.activeMasterManager]
zookeeper.RecoverableZooKeeper(121): Process identifier=hconnection-0x28f86af6
connecting to ZooKeeper ensemble=localhost:55239
2015-03-21 02:11:05,795 DEBUG
[proserpina:38530.activeMasterManager-EventThread]
zookeeper.ZooKeeperWatcher(388): hconnection-0x28f86af60x0,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event, type=None,
state=SyncConnected, path=null
2015-03-21 02:11:05,796 DEBUG
[proserpina:38530.activeMasterManager-EventThread]
zookeeper.ZooKeeperWatcher(472): hconnection-0x28f86af6-0x14c3a1908520004
connected
2015-03-21 02:11:05,802 DEBUG [proserpina:38530.activeMasterManager]
ipc.AbstractRpcClient(105):
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@38c94d89, compressor=null,
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000,
writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0,
fallbackAllowed=true, bind address=null
2015-03-21 02:11:05,803 DEBUG [proserpina:38530.activeMasterManager]
ipc.AsyncRpcClient(152): Starting async Hbase RPC client
2015-03-21 02:11:05,804 DEBUG [proserpina:38530.activeMasterManager]
ipc.AsyncRpcClient(163): Use global event loop group NioEventLoopGroup
2015-03-21 02:11:05,826 INFO [proserpina:38530.activeMasterManager]
balancer.StochasticLoadBalancer(135): loading config
2015-03-21 02:11:05,846 DEBUG [proserpina:38530.activeMasterManager]
zookeeper.ZKUtil(488): master:38530-0x14c3a1908520000, quorum=localhost:55239,
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer
2015-03-21 02:11:05,894 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): master:38530-0x14c3a1908520000,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeCreated, state=SyncConnected, path=/hbase/running
2015-03-21 02:11:05,895 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): regionserver:51019-0x14c3a1908520001,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeCreated, state=SyncConnected, path=/hbase/running
2015-03-21 02:11:05,896 INFO [proserpina:38530.activeMasterManager]
master.HMaster(548): Server active/primary
master=proserpina.apache.org,38530,1426903862856, sessionid=0x14c3a1908520000,
setting cluster-up flag (Was=false)
2015-03-21 02:11:05,898 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(745): ClusterId :
9500695b-0220-487d-9d12-037e9eed7bc7
2015-03-21 02:11:05,912 DEBUG [M:0;proserpina:38530]
procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is
initializing
2015-03-21 02:11:05,899 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(745): ClusterId :
9500695b-0220-487d-9d12-037e9eed7bc7
2015-03-21 02:11:05,913 DEBUG [RS:0;proserpina:51019]
procedure.RegionServerProcedureManagerHost(44): Procedure online-snapshot is
initializing
2015-03-21 02:11:05,961 DEBUG [M:0;proserpina:38530]
zookeeper.RecoverableZooKeeper(531): Node /hbase/online-snapshot already exists
2015-03-21 02:11:05,977 DEBUG [RS:0;proserpina:51019]
zookeeper.RecoverableZooKeeper(531): Node /hbase/online-snapshot/acquired
already exists
2015-03-21 02:11:05,994 DEBUG [M:0;proserpina:38530]
zookeeper.RecoverableZooKeeper(531): Node /hbase/online-snapshot/acquired
already exists
2015-03-21 02:11:06,044 DEBUG [proserpina:38530.activeMasterManager]
zookeeper.RecoverableZooKeeper(531): Node /hbase/online-snapshot/reached
already exists
2015-03-21 02:11:06,070 INFO [proserpina:38530.activeMasterManager]
procedure.ZKProcedureUtil(271): Clearing all procedure znodes:
/hbase/online-snapshot/acquired /hbase/online-snapshot/reached
/hbase/online-snapshot/abort
2015-03-21 02:11:06,070 DEBUG [M:0;proserpina:38530]
procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is
initialized
2015-03-21 02:11:06,070 DEBUG [RS:0;proserpina:51019]
procedure.RegionServerProcedureManagerHost(46): Procedure online-snapshot is
initialized
2015-03-21 02:11:06,071 DEBUG [M:0;proserpina:38530]
procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is
initializing
2015-03-21 02:11:06,071 DEBUG [RS:0;proserpina:51019]
procedure.RegionServerProcedureManagerHost(44): Procedure flush-table-proc is
initializing
2015-03-21 02:11:06,136 DEBUG [RS:0;proserpina:51019]
zookeeper.RecoverableZooKeeper(531): Node /hbase/flush-table-proc already exists
2015-03-21 02:11:06,136 DEBUG [proserpina:38530.activeMasterManager]
procedure.ZKProcedureCoordinatorRpcs(238): Starting the controller for
procedure member:proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:06,169 DEBUG [RS:0;proserpina:51019]
zookeeper.RecoverableZooKeeper(531): Node /hbase/flush-table-proc/acquired
already exists
2015-03-21 02:11:06,169 DEBUG [proserpina:38530.activeMasterManager]
zookeeper.RecoverableZooKeeper(531): Node /hbase/flush-table-proc/acquired
already exists
2015-03-21 02:11:06,181 DEBUG [RS:0;proserpina:51019]
zookeeper.RecoverableZooKeeper(531): Node /hbase/flush-table-proc/reached
already exists
2015-03-21 02:11:06,189 DEBUG [proserpina:38530.activeMasterManager]
zookeeper.RecoverableZooKeeper(531): Node /hbase/flush-table-proc/reached
already exists
2015-03-21 02:11:06,198 INFO [proserpina:38530.activeMasterManager]
procedure.ZKProcedureUtil(271): Clearing all procedure znodes:
/hbase/flush-table-proc/acquired /hbase/flush-table-proc/reached
/hbase/flush-table-proc/abort
2015-03-21 02:11:06,198 DEBUG [M:0;proserpina:38530]
zookeeper.RecoverableZooKeeper(531): Node /hbase/flush-table-proc/abort already
exists
2015-03-21 02:11:06,200 DEBUG [RS:0;proserpina:51019]
procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is
initialized
2015-03-21 02:11:06,200 DEBUG [M:0;proserpina:38530]
procedure.RegionServerProcedureManagerHost(46): Procedure flush-table-proc is
initialized
2015-03-21 02:11:06,204 DEBUG [proserpina:38530.activeMasterManager]
procedure.ZKProcedureCoordinatorRpcs(238): Starting the controller for
procedure member:proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:06,215 INFO [M:0;proserpina:38530]
regionserver.MemStoreFlusher(124): globalMemStoreLimit=995.6 M,
globalMemStoreLimitLowMark=945.8 M, maxHeap=2.4 G
2015-03-21 02:11:06,219 INFO [RS:0;proserpina:51019]
regionserver.MemStoreFlusher(124): globalMemStoreLimit=995.6 M,
globalMemStoreLimitLowMark=945.8 M, maxHeap=2.4 G
2015-03-21 02:11:06,229 INFO [M:0;proserpina:38530]
compactions.PressureAwareCompactionThroughputController(175): Compaction
throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00
MB/sec, off peak: unlimited, tuning period: 60000 ms
2015-03-21 02:11:06,229 INFO [RS:0;proserpina:51019]
compactions.PressureAwareCompactionThroughputController(175): Compaction
throughput configurations, higher bound: 20.00 MB/sec, lower bound 10.00
MB/sec, off peak: unlimited, tuning period: 60000 ms
2015-03-21 02:11:06,232 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer$CompactionChecker(1473): CompactionChecker runs
every 1sec
2015-03-21 02:11:06,236 DEBUG [RS:0;proserpina:51019]
ipc.AbstractRpcClient(105):
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@190f51a, compressor=null,
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000,
writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0,
fallbackAllowed=true, bind address=proserpina.apache.org/67.195.81.189:0
2015-03-21 02:11:06,236 DEBUG [RS:0;proserpina:51019] ipc.AsyncRpcClient(152):
Starting async Hbase RPC client
2015-03-21 02:11:06,237 DEBUG [RS:0;proserpina:51019] ipc.AsyncRpcClient(163):
Use global event loop group NioEventLoopGroup
2015-03-21 02:11:06,241 INFO [M:0;proserpina:38530]
regionserver.HRegionServer$CompactionChecker(1473): CompactionChecker runs
every 1sec
2015-03-21 02:11:06,242 DEBUG [M:0;proserpina:38530]
ipc.AbstractRpcClient(105):
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@181dbfc, compressor=null,
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000,
writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0,
fallbackAllowed=true, bind address=proserpina.apache.org/67.195.81.189:0
2015-03-21 02:11:06,242 DEBUG [M:0;proserpina:38530] ipc.AsyncRpcClient(152):
Starting async Hbase RPC client
2015-03-21 02:11:06,242 DEBUG [M:0;proserpina:38530] ipc.AsyncRpcClient(163):
Use global event loop group NioEventLoopGroup
2015-03-21 02:11:06,243 DEBUG [RS:0;proserpina:51019]
regionserver.ShutdownHook(87): Installed shutdown hook thread:
Shutdownhook:RS:0;proserpina:51019
2015-03-21 02:11:06,247 DEBUG [M:0;proserpina:38530]
regionserver.ShutdownHook(87): Installed shutdown hook thread:
Shutdownhook:M:0;proserpina:38530
2015-03-21 02:11:06,256 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): master:38530-0x14c3a1908520000,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2015-03-21 02:11:06,259 DEBUG [RS:0;proserpina:51019] zookeeper.ZKUtil(486):
regionserver:51019-0x14c3a1908520001, quorum=localhost:55239, baseZNode=/hbase
Set watcher on existing
znode=/hbase/rs/proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:06,262 DEBUG [M:0;proserpina:38530] zookeeper.ZKUtil(486):
master:38530-0x14c3a1908520000, quorum=localhost:55239, baseZNode=/hbase Set
watcher on existing znode=/hbase/rs/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:06,261 DEBUG [main-EventThread] zookeeper.ZKUtil(486):
master:38530-0x14c3a1908520000, quorum=localhost:55239, baseZNode=/hbase Set
watcher on existing znode=/hbase/rs/proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:06,265 DEBUG [main-EventThread] zookeeper.ZKUtil(486):
master:38530-0x14c3a1908520000, quorum=localhost:55239, baseZNode=/hbase Set
watcher on existing znode=/hbase/rs/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:06,267 INFO [proserpina:38530.activeMasterManager]
master.MasterCoprocessorHost(86): System coprocessor loading is enabled
2015-03-21 02:11:06,270 DEBUG [main-EventThread]
zookeeper.RegionServerTracker(93): Added tracking of RS
/hbase/rs/proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:06,271 DEBUG [main-EventThread]
zookeeper.RegionServerTracker(93): Added tracking of RS
/hbase/rs/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:06,273 INFO [RS:0;proserpina:51019]
regionserver.RegionServerCoprocessorHost(65): System coprocessor loading is
enabled
2015-03-21 02:11:06,276 INFO [RS:0;proserpina:51019]
regionserver.RegionServerCoprocessorHost(66): Table coprocessor loading is
enabled
2015-03-21 02:11:06,273 INFO [M:0;proserpina:38530]
regionserver.RegionServerCoprocessorHost(65): System coprocessor loading is
enabled
2015-03-21 02:11:06,277 INFO [M:0;proserpina:38530]
regionserver.RegionServerCoprocessorHost(66): Table coprocessor loading is
enabled
2015-03-21 02:11:06,314 DEBUG [proserpina:38530.activeMasterManager]
master.HMaster(1994): Registered master coprocessor service:
service=AccessControlService
2015-03-21 02:11:06,317 DEBUG [M:0;proserpina:38530] zookeeper.ZKUtil(488):
master:38530-0x14c3a1908520000, quorum=localhost:55239, baseZNode=/hbase Set
watcher on znode that does not yet exist, /hbase/acl
2015-03-21 02:11:06,318 INFO [M:0;proserpina:38530]
coprocessor.CoprocessorHost(156): System coprocessor
org.apache.hadoop.hbase.security.access.AccessController was loaded
successfully with priority (536870911).
2015-03-21 02:11:06,318 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(2209): reportForDuty to
master=proserpina.apache.org,38530,1426903862856 with port=38530,
startcode=1426903862856
2015-03-21 02:11:06,336 DEBUG [RS:0;proserpina:51019] zookeeper.ZKUtil(488):
regionserver:51019-0x14c3a1908520001, quorum=localhost:55239, baseZNode=/hbase
Set watcher on znode that does not yet exist, /hbase/acl
2015-03-21 02:11:06,337 INFO [RS:0;proserpina:51019]
coprocessor.CoprocessorHost(156): System coprocessor
org.apache.hadoop.hbase.security.access.AccessController was loaded
successfully with priority (536870911).
2015-03-21 02:11:06,335 INFO [proserpina:38530.activeMasterManager]
coprocessor.CoprocessorHost(156): System coprocessor
org.apache.hadoop.hbase.security.access.AccessController was loaded
successfully with priority (536870911).
2015-03-21 02:11:06,338 INFO [proserpina:38530.activeMasterManager]
coprocessor.CoprocessorHost(156): System coprocessor
org.apache.hadoop.hbase.security.access.SecureTestUtil$MasterSyncObserver was
loaded successfully with priority (536870912).
2015-03-21 02:11:06,340 DEBUG [M:0;proserpina:38530]
regionserver.HRegionServer(2225): Master is not running yet
2015-03-21 02:11:06,340 WARN [M:0;proserpina:38530]
regionserver.HRegionServer(871): reportForDuty failed; sleeping and then
retrying.
2015-03-21 02:11:06,340 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(2209): reportForDuty to
master=proserpina.apache.org,38530,1426903862856 with port=51019,
startcode=1426903864687
2015-03-21 02:11:06,342 DEBUG [proserpina:38530.activeMasterManager]
executor.ExecutorService(91): Starting executor service
name=MASTER_OPEN_REGION-proserpina:38530, corePoolSize=5, maxPoolSize=5
2015-03-21 02:11:06,342 DEBUG [proserpina:38530.activeMasterManager]
executor.ExecutorService(91): Starting executor service
name=MASTER_CLOSE_REGION-proserpina:38530, corePoolSize=5, maxPoolSize=5
2015-03-21 02:11:06,343 DEBUG [proserpina:38530.activeMasterManager]
executor.ExecutorService(91): Starting executor service
name=MASTER_SERVER_OPERATIONS-proserpina:38530, corePoolSize=5, maxPoolSize=5
2015-03-21 02:11:06,343 DEBUG [proserpina:38530.activeMasterManager]
executor.ExecutorService(91): Starting executor service
name=MASTER_META_SERVER_OPERATIONS-proserpina:38530, corePoolSize=5,
maxPoolSize=5
2015-03-21 02:11:06,343 DEBUG [proserpina:38530.activeMasterManager]
executor.ExecutorService(91): Starting executor service
name=M_LOG_REPLAY_OPS-proserpina:38530, corePoolSize=10, maxPoolSize=10
2015-03-21 02:11:06,344 DEBUG [proserpina:38530.activeMasterManager]
executor.ExecutorService(91): Starting executor service
name=MASTER_TABLE_OPERATIONS-proserpina:38530, corePoolSize=1, maxPoolSize=1
2015-03-21 02:11:06,346 DEBUG [proserpina:38530.activeMasterManager]
cleaner.CleanerChore(91): initialize
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner
2015-03-21 02:11:06,347 INFO [proserpina:38530.activeMasterManager]
zookeeper.RecoverableZooKeeper(121): Process identifier=replicationLogCleaner
connecting to ZooKeeper ensemble=localhost:55239
2015-03-21 02:11:06,361 DEBUG
[proserpina:38530.activeMasterManager-EventThread]
zookeeper.ZooKeeperWatcher(388): replicationLogCleaner0x0,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event, type=None,
state=SyncConnected, path=null
2015-03-21 02:11:06,367 DEBUG
[proserpina:38530.activeMasterManager-EventThread]
zookeeper.ZooKeeperWatcher(472): replicationLogCleaner-0x14c3a1908520005
connected
2015-03-21 02:11:06,436 DEBUG [proserpina:38530.activeMasterManager]
cleaner.CleanerChore(91): initialize
cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2015-03-21 02:11:06,449 DEBUG [proserpina:38530.activeMasterManager]
cleaner.CleanerChore(91): initialize
cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotLogCleaner
2015-03-21 02:11:06,452 DEBUG [proserpina:38530.activeMasterManager]
cleaner.CleanerChore(91): initialize
cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2015-03-21 02:11:06,457 DEBUG [proserpina:38530.activeMasterManager]
cleaner.CleanerChore(91): initialize
cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2015-03-21 02:11:06,459 DEBUG [proserpina:38530.activeMasterManager]
cleaner.CleanerChore(91): initialize
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2015-03-21 02:11:06,460 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(2209): reportForDuty to
master=proserpina.apache.org,38530,1426903862856 with port=38530,
startcode=1426903862856
2015-03-21 02:11:06,468 INFO [proserpina:38530.activeMasterManager]
master.ServerManager(979): Waiting for region servers count to settle;
currently checked in 0, slept for 0 ms, expecting minimum of 1, maximum of 1,
timeout of 4500 ms, interval of 1500 ms.
2015-03-21 02:11:06,485 INFO [M:0;proserpina:38530] master.ServerManager(444):
Registering server=proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:06,519 INFO [proserpina:38530.activeMasterManager]
master.ServerManager(996): Finished waiting for region servers count to settle;
checked in 1, slept for 60 ms, expecting minimum of 1, maximum of 1, master is
running
2015-03-21 02:11:06,523 INFO [proserpina:38530.activeMasterManager]
master.ServerManager(444): Registering
server=proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:06,524 INFO [proserpina:38530.activeMasterManager]
master.HMaster(645): Registered server found up in zk but who has not yet
reported in: proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:06,526 DEBUG [RpcServer.listener,port=38530]
ipc.RpcServer$Listener(809): RpcServer.listener,port=38530: connection from
67.195.81.189:42362; # active connections: 1
2015-03-21 02:11:06,527 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(1303): Config from master:
hbase.rootdir=hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6
2015-03-21 02:11:06,527 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(1303): Config from master:
fs.defaultFS=hdfs://localhost:42912
2015-03-21 02:11:06,527 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(1303): Config from master:
hbase.master.info.port=56508
2015-03-21 02:11:06,527 WARN [M:0;proserpina:38530] hbase.ZNodeClearer(58):
Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on
crash by start scripts (Longer MTTR!)
2015-03-21 02:11:06,528 INFO [M:0;proserpina:38530] hfile.CacheConfig(260):
blockCache=LruBlockCache{blockCount=0, currentSize=1071760,
freeSize=1042890544, maxSize=1043962304, heapSize=1071760, minSize=991764160,
minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040,
singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false,
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false,
cacheDataCompressed=false, prefetchOnOpen=false
2015-03-21 02:11:06,531 DEBUG [M:0;proserpina:38530]
regionserver.HRegionServer(1570):
logdir=hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/WALs/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:06,536 DEBUG [proserpina:38530.activeMasterManager]
zookeeper.ZKUtil(745): master:38530-0x14c3a1908520000, quorum=localhost:55239,
baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server because
node does not exist (not an error)
2015-03-21 02:11:06,588 DEBUG [M:0;proserpina:38530]
regionserver.Replication(142): ReplicationStatisticsThread 300
2015-03-21 02:11:06,604 INFO [M:0;proserpina:38530] wal.WALFactory(143):
Instantiating WALProvider of type class
org.apache.hadoop.hbase.wal.DefaultWALProvider
2015-03-21 02:11:06,605 DEBUG [AsyncRpcChannel-pool2-t1]
ipc.AsyncRpcChannel(487): Use SIMPLE authentication for service
RegionServerStatusService, sasl=false
2015-03-21 02:11:06,642 INFO [M:0;proserpina:38530] wal.FSHLog(549): WAL
configuration: blocksize=128 MB, rollsize=121.60 MB,
prefix=proserpina.apache.org%2C38530%2C1426903862856.default, suffix=,
logDir=hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/WALs/proserpina.apache.org,38530,1426903862856,
archiveDir=hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/oldWALs
2015-03-21 02:11:06,679 INFO
[RpcServer.reader=1,bindAddress=proserpina.apache.org,port=38530]
ipc.RpcServer$Connection(1640): Connection from 67.195.81.189 port: 42362 with
version info: version: "2.0.0-SNAPSHOT" url:
"git://proserpina.apache.org/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build"
revision: "09151911679a608db31cca63c25cfd52818a311c" user: "jenkins" date:
"Sat Mar 21 00:59:39 UTC 2015" src_checksum: "418b9105ad07f6f87c0e478355cabaa1"
2015-03-21 02:11:06,713 INFO [M:0;proserpina:38530] wal.FSHLog(1494): Slow
sync cost: 30 ms, current pipeline: []
2015-03-21 02:11:06,715 INFO [M:0;proserpina:38530] wal.FSHLog(971): New WAL
/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/WALs/proserpina.apache.org,38530,1426903862856/proserpina.apache.org%2C38530%2C1426903862856.default.1426903866643
2015-03-21 02:11:06,715 INFO [B.defaultRpcServer.handler=2,queue=0,port=38530]
master.ServerManager(444): Registering
server=proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:06,740 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(1303): Config from master:
hbase.rootdir=hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6
2015-03-21 02:11:06,745 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(1303): Config from master:
fs.defaultFS=hdfs://localhost:42912
2015-03-21 02:11:06,745 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(1303): Config from master:
hbase.master.info.port=56508
2015-03-21 02:11:06,746 WARN [RS:0;proserpina:51019] hbase.ZNodeClearer(58):
Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on
crash by start scripts (Longer MTTR!)
2015-03-21 02:11:06,746 INFO [RS:0;proserpina:51019] hfile.CacheConfig(260):
blockCache=LruBlockCache{blockCount=0, currentSize=1071760,
freeSize=1042890544, maxSize=1043962304, heapSize=1071760, minSize=991764160,
minFactor=0.95, multiSize=495882080, multiFactor=0.5, singleSize=247941040,
singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false,
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false,
cacheDataCompressed=false, prefetchOnOpen=false
2015-03-21 02:11:06,747 DEBUG [RS:0;proserpina:51019]
regionserver.HRegionServer(1570):
logdir=hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/WALs/proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:06,793 DEBUG [RS:0;proserpina:51019]
regionserver.Replication(142): ReplicationStatisticsThread 300
2015-03-21 02:11:06,796 INFO [RS:0;proserpina:51019] wal.WALFactory(143):
Instantiating WALProvider of type class
org.apache.hadoop.hbase.wal.DefaultWALProvider
2015-03-21 02:11:06,797 INFO [M:0;proserpina:38530]
regionserver.MetricsRegionServerWrapperImpl(110): Computing regionserver
metrics every 5000 milliseconds
2015-03-21 02:11:06,810 INFO [RS:0;proserpina:51019] wal.FSHLog(549): WAL
configuration: blocksize=128 MB, rollsize=121.60 MB,
prefix=proserpina.apache.org%2C51019%2C1426903864687.default, suffix=,
logDir=hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/WALs/proserpina.apache.org,51019,1426903864687,
archiveDir=hdfs://localhost:42912/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/oldWALs
2015-03-21 02:11:06,816 DEBUG [M:0;proserpina:38530]
executor.ExecutorService(91): Starting executor service
name=RS_OPEN_REGION-proserpina:38530, corePoolSize=3, maxPoolSize=3
2015-03-21 02:11:06,816 DEBUG [M:0;proserpina:38530]
executor.ExecutorService(91): Starting executor service
name=RS_OPEN_META-proserpina:38530, corePoolSize=1, maxPoolSize=1
2015-03-21 02:11:06,827 DEBUG [M:0;proserpina:38530]
executor.ExecutorService(91): Starting executor service
name=RS_CLOSE_REGION-proserpina:38530, corePoolSize=3, maxPoolSize=3
2015-03-21 02:11:06,830 DEBUG [M:0;proserpina:38530]
executor.ExecutorService(91): Starting executor service
name=RS_CLOSE_META-proserpina:38530, corePoolSize=1, maxPoolSize=1
2015-03-21 02:11:06,830 DEBUG [M:0;proserpina:38530]
executor.ExecutorService(91): Starting executor service
name=RS_LOG_REPLAY_OPS-proserpina:38530, corePoolSize=2, maxPoolSize=2
2015-03-21 02:11:06,831 DEBUG [M:0;proserpina:38530]
executor.ExecutorService(91): Starting executor service
name=RS_REGION_REPLICA_FLUSH_OPS-proserpina:38530, corePoolSize=3, maxPoolSize=3
2015-03-21 02:11:06,849 DEBUG [M:0;proserpina:38530] zookeeper.ZKUtil(486):
master:38530-0x14c3a1908520000, quorum=localhost:55239, baseZNode=/hbase Set
watcher on existing znode=/hbase/rs/proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:06,850 DEBUG [M:0;proserpina:38530] zookeeper.ZKUtil(486):
master:38530-0x14c3a1908520000, quorum=localhost:55239, baseZNode=/hbase Set
watcher on existing znode=/hbase/rs/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:06,850 INFO [M:0;proserpina:38530]
regionserver.ReplicationSourceManager(227): Current list of replicators:
[proserpina.apache.org,51019,1426903864687,
proserpina.apache.org,38530,1426903862856] other RSs:
[proserpina.apache.org,51019,1426903864687,
proserpina.apache.org,38530,1426903862856]
2015-03-21 02:11:06,860 INFO [RS:0;proserpina:51019] wal.FSHLog(1494): Slow
sync cost: 40 ms, current pipeline: []
2015-03-21 02:11:06,860 INFO [RS:0;proserpina:51019] wal.FSHLog(971): New WAL
/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/WALs/proserpina.apache.org,51019,1426903864687/proserpina.apache.org%2C51019%2C1426903864687.default.1426903866811
2015-03-21 02:11:06,869 INFO [RS:0;proserpina:51019]
regionserver.MetricsRegionServerWrapperImpl(110): Computing regionserver
metrics every 5000 milliseconds
2015-03-21 02:11:06,882 DEBUG [RS:0;proserpina:51019]
executor.ExecutorService(91): Starting executor service
name=RS_OPEN_REGION-proserpina:51019, corePoolSize=3, maxPoolSize=3
2015-03-21 02:11:06,884 DEBUG [RS:0;proserpina:51019]
executor.ExecutorService(91): Starting executor service
name=RS_OPEN_META-proserpina:51019, corePoolSize=1, maxPoolSize=1
2015-03-21 02:11:06,885 DEBUG [RS:0;proserpina:51019]
executor.ExecutorService(91): Starting executor service
name=RS_CLOSE_REGION-proserpina:51019, corePoolSize=3, maxPoolSize=3
2015-03-21 02:11:06,886 DEBUG [RS:0;proserpina:51019]
executor.ExecutorService(91): Starting executor service
name=RS_CLOSE_META-proserpina:51019, corePoolSize=1, maxPoolSize=1
2015-03-21 02:11:06,886 DEBUG [RS:0;proserpina:51019]
executor.ExecutorService(91): Starting executor service
name=RS_LOG_REPLAY_OPS-proserpina:51019, corePoolSize=2, maxPoolSize=2
2015-03-21 02:11:06,887 DEBUG [RS:0;proserpina:51019]
executor.ExecutorService(91): Starting executor service
name=RS_REGION_REPLICA_FLUSH_OPS-proserpina:51019, corePoolSize=3, maxPoolSize=3
2015-03-21 02:11:06,894 DEBUG [RS:0;proserpina:51019] zookeeper.ZKUtil(486):
regionserver:51019-0x14c3a1908520001, quorum=localhost:55239, baseZNode=/hbase
Set watcher on existing
znode=/hbase/rs/proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:06,895 DEBUG [RS:0;proserpina:51019] zookeeper.ZKUtil(486):
regionserver:51019-0x14c3a1908520001, quorum=localhost:55239, baseZNode=/hbase
Set watcher on existing
znode=/hbase/rs/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:06,895 INFO [RS:0;proserpina:51019]
regionserver.ReplicationSourceManager(227): Current list of replicators:
[proserpina.apache.org,51019,1426903864687,
proserpina.apache.org,38530,1426903862856] other RSs:
[proserpina.apache.org,51019,1426903864687,
proserpina.apache.org,38530,1426903862856]
2015-03-21 02:11:06,963 INFO [M:0;proserpina:38530]
zookeeper.RecoverableZooKeeper(121): Process identifier=hconnection-0x51d528bc
connecting to ZooKeeper ensemble=localhost:55239
2015-03-21 02:11:06,964 INFO [RS:0;proserpina:51019]
zookeeper.RecoverableZooKeeper(121): Process identifier=hconnection-0x2cd18c22
connecting to ZooKeeper ensemble=localhost:55239
2015-03-21 02:11:06,967 DEBUG [RS:0;proserpina:51019]
coprocessor.CoprocessorHost(269): Stop coprocessor
org.apache.hadoop.hbase.security.access.AccessController
2015-03-21 02:11:06,967 DEBUG [M:0;proserpina:38530]
coprocessor.CoprocessorHost(269): Stop coprocessor
org.apache.hadoop.hbase.security.access.AccessController
2015-03-21 02:11:06,968 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(1819): STOPPED: Failed initialization
2015-03-21 02:11:06,968 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(1819): STOPPED: Failed initialization
2015-03-21 02:11:06,969 ERROR [M:0;proserpina:38530]
regionserver.HRegionServer(2863): Failed init
java.io.IOException: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:239)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:217)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:118)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.<init>(ReplicationSink.java:90)
    at org.apache.hadoop.hbase.replication.regionserver.Replication.startReplicationService(Replication.java:213)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.startServiceThreads(HRegionServer.java:1684)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1323)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:874)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:237)
    ... 8 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:713)
    at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:405)
    at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450)
    at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:142)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:129)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:155)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:163)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:131)
    at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1617)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:101)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:844)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:604)
    ... 13 more
2015-03-21 02:11:06,971 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(946): Stopping infoServer
2015-03-21 02:11:06,969 ERROR [RS:0;proserpina:51019]
regionserver.HRegionServer(2863): Failed init
java.io.IOException: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:239)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:217)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:118)
    at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.<init>(ReplicationSink.java:90)
    at org.apache.hadoop.hbase.replication.regionserver.Replication.startReplicationService(Replication.java:213)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.startServiceThreads(HRegionServer.java:1684)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1323)
    at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:129)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:874)
    at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:155)
    at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:107)
    at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:139)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:356)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
    at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:304)
    at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:137)
    at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:237)
    ... 17 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:713)
    at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:405)
    at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450)
    at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:142)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:129)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:155)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:163)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:131)
    at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1617)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:101)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:844)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:604)
    ... 22 more
2015-03-21 02:11:06,980 FATAL [RS:0;proserpina:51019]
regionserver.HRegionServer(1994): ABORTING region server
proserpina.apache.org,51019,1426903864687: Unhandled:
java.lang.reflect.InvocationTargetException
java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:239)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:217)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:118)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.<init>(ReplicationSink.java:90)
at org.apache.hadoop.hbase.replication.regionserver.Replication.startReplicationService(Replication.java:213)
at org.apache.hadoop.hbase.regionserver.HRegionServer.startServiceThreads(HRegionServer.java:1684)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1323)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:129)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:874)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:155)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:107)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:139)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:304)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:137)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:237)
... 17 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:405)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:142)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:129)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:155)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:163)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:131)
at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1617)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:101)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:844)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:604)
... 22 more
2015-03-21 02:11:06,993 FATAL [RS:0;proserpina:51019]
regionserver.HRegionServer(2002): RegionServer abort: loaded coprocessors are:
[org.apache.hadoop.hbase.security.access.SecureTestUtil$MasterSyncObserver,
org.apache.hadoop.hbase.security.access.AccessController]
2015-03-21 02:11:06,996 INFO [M:0;proserpina:38530] log.Slf4jLog(67): Stopped
[email protected]:0
2015-03-21 02:11:07,000 INFO [MemStoreFlusher.0]
regionserver.MemStoreFlusher$FlushHandler(281): MemStoreFlusher.0 exiting
2015-03-21 02:11:07,001 INFO [MemStoreFlusher.1]
regionserver.MemStoreFlusher$FlushHandler(281): MemStoreFlusher.1 exiting
2015-03-21 02:11:07,000 INFO [M:0;proserpina:38530]
snapshot.RegionServerSnapshotManager(134): Stopping RegionServerSnapshotManager
gracefully.
2015-03-21 02:11:07,001 INFO [M:0;proserpina:38530]
flush.RegionServerFlushTableProcedureManager(113): Stopping region server flush
procedure manager gracefully.
2015-03-21 02:11:07,002 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(991): stopping server
proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:07,002 DEBUG [M:0;proserpina:38530]
zookeeper.MetaTableLocator(617): Stopping MetaTableLocator
2015-03-21 02:11:07,002 INFO [M:0;proserpina:38530]
client.ConnectionManager$HConnectionImplementation(1637): Closing zookeeper
sessionid=0x14c3a1908520003
2015-03-21 02:11:07,028 DEBUG [M:0;proserpina:38530] ipc.AsyncRpcClient(288):
Stopping async HBase RPC client
2015-03-21 02:11:07,030 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(1019): stopping server
proserpina.apache.org,38530,1426903862856; all regions closed.
2015-03-21 02:11:07,030 DEBUG [M:0;proserpina:38530] wal.FSHLog(1147): Closing
WAL writer in
/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/WALs/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:07,041 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(2006): Dump of metrics as JSON on abort: {
"beans" : [ {
"name" : "java.lang:type=Memory",
"modelerType" : "sun.management.MemoryImpl",
"NonHeapMemoryUsage" : {
"committed" : 54984704,
"init" : 24576000,
"max" : 318767104,
"used" : 54064320
},
"ObjectPendingFinalizationCount" : 0,
"Verbose" : false,
"HeapMemoryUsage" : {
"committed" : 957349888,
"init" : 791251328,
"max" : 2609905664,
"used" : 318592544
},
"ObjectName" : "java.lang:type=Memory"
} ],
"beans" : [ {
"name" : "Hadoop:service=HBase,name=RegionServer,sub=IPC",
"modelerType" : "RegionServer,sub=IPC",
"tag.Context" : "regionserver",
"tag.Hostname" : "proserpina.apache.org",
"queueSize" : 0,
"numCallsInGeneralQueue" : 0,
"numCallsInReplicationQueue" : 0,
"numCallsInPriorityQueue" : 0,
"numOpenConnections" : 0,
"numActiveHandler" : 0,
"QueueCallTime_num_ops" : 0,
"QueueCallTime_min" : 0,
"QueueCallTime_max" : 0,
"QueueCallTime_mean" : 0.0,
"QueueCallTime_median" : 0.0,
"QueueCallTime_75th_percentile" : 0.0,
"QueueCallTime_95th_percentile" : 0.0,
"QueueCallTime_99th_percentile" : 0.0,
"authenticationFailures" : 0,
"authorizationFailures" : 0,
"authenticationSuccesses" : 0,
"authorizationSuccesses" : 0,
"ProcessCallTime_num_ops" : 0,
"ProcessCallTime_min" : 0,
"ProcessCallTime_max" : 0,
"ProcessCallTime_mean" : 0.0,
"ProcessCallTime_median" : 0.0,
"ProcessCallTime_75th_percentile" : 0.0,
"ProcessCallTime_95th_percentile" : 0.0,
"ProcessCallTime_99th_percentile" : 0.0,
"sentBytes" : 0,
"receivedBytes" : 0
} ],
"beans" : [ {
"name" : "Hadoop:service=HBase,name=RegionServer,sub=Replication",
"modelerType" : "RegionServer,sub=Replication",
"tag.Context" : "regionserver",
"tag.Hostname" : "proserpina.apache.org",
"sink.appliedOps" : 0,
"sink.appliedBatches" : 0,
"sink.ageOfLastAppliedOp" : 0
} ],
"beans" : [ {
"name" : "Hadoop:service=HBase,name=RegionServer,sub=Server",
"modelerType" : "RegionServer,sub=Server",
"tag.zookeeperQuorum" : "localhost:55239",
"tag.serverName" : "proserpina.apache.org,38530,1426903862856",
"tag.clusterId" : "9500695b-0220-487d-9d12-037e9eed7bc7",
"tag.Context" : "regionserver",
"tag.Hostname" : "proserpina.apache.org",
"regionCount" : 0,
"storeCount" : 0,
"hlogFileCount" : 0,
"hlogFileSize" : 0,
"storeFileCount" : 0,
"memStoreSize" : 0,
"storeFileSize" : 0,
"regionServerStartTime" : 1426903862856,
"totalRequestCount" : 0,
"readRequestCount" : 0,
"writeRequestCount" : 0,
"checkMutateFailedCount" : 0,
"checkMutatePassedCount" : 0,
"storeFileIndexSize" : 0,
"staticIndexSize" : 0,
"staticBloomSize" : 0,
"mutationsWithoutWALCount" : 0,
"mutationsWithoutWALSize" : 0,
"percentFilesLocal" : 0,
"splitQueueLength" : 0,
"compactionQueueLength" : 0,
"flushQueueLength" : 0,
"blockCacheFreeSize" : 1042890544,
"blockCacheCount" : 0,
"blockCacheSize" : 1071760,
"blockCacheHitCount" : 0,
"blockCacheMissCount" : 0,
"blockCacheEvictionCount" : 0,
"blockCacheCountHitPercent" : 0.0,
"blockCacheExpressHitPercent" : 0,
"updatesBlockedTime" : 0,
"flushedCellsCount" : 0,
"compactedCellsCount" : 0,
"majorCompactedCellsCount" : 0,
"flushedCellsSize" : 0,
"compactedCellsSize" : 0,
"majorCompactedCellsSize" : 0,
"hedgedReads" : 0,
"hedgedReadWins" : 0,
"blockedRequestCount" : 0,
"Append_num_ops" : 0,
"Append_min" : 0,
"Append_max" : 0,
"Append_mean" : 0.0,
"Append_median" : 0.0,
"Append_75th_percentile" : 0.0,
"Append_95th_percentile" : 0.0,
"Append_99th_percentile" : 0.0,
"splitSuccessCounnt" : 0,
"Delete_num_ops" : 0,
"Delete_min" : 0,
"Delete_max" : 0,
"Delete_mean" : 0.0,
"Delete_median" : 0.0,
"Delete_75th_percentile" : 0.0,
"Delete_95th_percentile" : 0.0,
"Delete_99th_percentile" : 0.0,
"Mutate_num_ops" : 0,
"Mutate_min" : 0,
"Mutate_max" : 0,
"Mutate_mean" : 0.0,
"Mutate_median" : 0.0,
"Mutate_75th_percentile" : 0.0,
"Mutate_95th_percentile" : 0.0,
"Mutate_99th_percentile" : 0.0,
"slowDeleteCount" : 0,
"splitRequestCount" : 0,
"slowIncrementCount" : 0,
"FlushTime_num_ops" : 0,
"FlushTime_min" : 0,
"FlushTime_max" : 0,
"FlushTime_mean" : 0.0,
"FlushTime_median" : 0.0,
"FlushTime_75th_percentile" : 0.0,
"FlushTime_95th_percentile" : 0.0,
"FlushTime_99th_percentile" : 0.0,
"Get_num_ops" : 0,
"Get_min" : 0,
"Get_max" : 0,
"Get_mean" : 0.0,
"Get_median" : 0.0,
"Get_75th_percentile" : 0.0,
"Get_95th_percentile" : 0.0,
"Get_99th_percentile" : 0.0,
"Replay_num_ops" : 0,
"Replay_min" : 0,
"Replay_max" : 0,
"Replay_mean" : 0.0,
"Replay_median" : 0.0,
"Replay_75th_percentile" : 0.0,
"Replay_95th_percentile" : 0.0,
"Replay_99th_percentile" : 0.0,
"slowGetCount" : 0,
"slowAppendCount" : 0,
"slowPutCount" : 0,
"SplitTime_num_ops" : 0,
"SplitTime_min" : 0,
"SplitTime_max" : 0,
"SplitTime_mean" : 0.0,
"SplitTime_median" : 0.0,
"SplitTime_75th_percentile" : 0.0,
"SplitTime_95th_percentile" : 0.0,
"SplitTime_99th_percentile" : 0.0,
"Increment_num_ops" : 0,
"Increment_min" : 0,
"Increment_max" : 0,
"Increment_mean" : 0.0,
"Increment_median" : 0.0,
"Increment_75th_percentile" : 0.0,
"Increment_95th_percentile" : 0.0,
"Increment_99th_percentile" : 0.0
} ]
}
2015-03-21 02:11:07,046 INFO [IPC Server handler 9 on 42912]
blockmanagement.BlockManager(2383): BLOCK* addStoredBlock: blockMap updated:
127.0.0.1:54374 is added to
blk_1073741829_1005{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-20704df4-ae22-40c7-8dcd-b411672f6980:NORMAL|RBW]]}
size 83
2015-03-21 02:11:07,068 ERROR [B.defaultRpcServer.handler=3,queue=0,port=38530]
master.MasterRpcServices(328): Region server
proserpina.apache.org,51019,1426903864687 reported a fatal error:
ABORTING region server proserpina.apache.org,51019,1426903864687: Unhandled:
java.lang.reflect.InvocationTargetException
Cause:
java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:239)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:217)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:118)
at org.apache.hadoop.hbase.replication.regionserver.ReplicationSink.<init>(ReplicationSink.java:90)
at org.apache.hadoop.hbase.replication.regionserver.Replication.startReplicationService(Replication.java:213)
at org.apache.hadoop.hbase.regionserver.HRegionServer.startServiceThreads(HRegionServer.java:1684)
at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1323)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.handleReportForDutyResponse(MiniHBaseCluster.java:129)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:874)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:155)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:107)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:139)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:304)
at org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:137)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:237)
... 17 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at org.apache.zookeeper.ClientCnxn.start(ClientCnxn.java:405)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:450)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:142)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:129)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:155)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:163)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:131)
at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1617)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:101)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:844)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:604)
... 22 more
2015-03-21 02:11:07,072 DEBUG [M:0;proserpina:38530] wal.FSHLog(1105): Moved 1
WAL file(s) to
/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/oldWALs
2015-03-21 02:11:07,072 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(946): Stopping infoServer
2015-03-21 02:11:07,072 INFO [M:0;proserpina:38530] wal.FSHLog(1108): Closed
WAL: FSHLog proserpina.apache.org%2C38530%2C1426903862856.default:(num
1426903866643)
2015-03-21 02:11:07,073 DEBUG [M:0;proserpina:38530] ipc.AsyncRpcClient(288):
Stopping async HBase RPC client
2015-03-21 02:11:07,078 INFO [M:0;proserpina:38530] regionserver.Leases(146):
M:0;proserpina:38530 closing leases
2015-03-21 02:11:07,079 INFO [M:0;proserpina:38530] regionserver.Leases(149):
M:0;proserpina:38530 closed leases
2015-03-21 02:11:07,083 INFO [M:0;proserpina:38530] hbase.ChoreService(303):
Chore service for: proserpina.apache.org,38530,1426903862856 had
[java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@16af619d,
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@5431960d,
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@3725aaa,
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@489e50aa,
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@11c1ee25]
on shutdown
2015-03-21 02:11:07,089 INFO [RS:0;proserpina:51019] log.Slf4jLog(67): Stopped
[email protected]:0
2015-03-21 02:11:07,195 INFO [MemStoreFlusher.1]
regionserver.MemStoreFlusher$FlushHandler(281): MemStoreFlusher.1 exiting
2015-03-21 02:11:07,195 INFO [MemStoreFlusher.0]
regionserver.MemStoreFlusher$FlushHandler(281): MemStoreFlusher.0 exiting
2015-03-21 02:11:07,195 INFO [RS:0;proserpina:51019]
snapshot.RegionServerSnapshotManager(134): Stopping RegionServerSnapshotManager
abruptly.
2015-03-21 02:11:07,196 INFO [RS:0;proserpina:51019]
flush.RegionServerFlushTableProcedureManager(113): Stopping region server flush
procedure manager abruptly.
2015-03-21 02:11:07,196 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(988): aborting server
proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:07,196 DEBUG [RS:0;proserpina:51019]
zookeeper.MetaTableLocator(617): Stopping MetaTableLocator
2015-03-21 02:11:07,197 INFO [RS:0;proserpina:51019]
client.ConnectionManager$HConnectionImplementation(1637): Closing zookeeper
sessionid=0x14c3a1908520002
2015-03-21 02:11:07,247 DEBUG [RS:0;proserpina:51019] ipc.AsyncRpcClient(288):
Stopping async HBase RPC client
2015-03-21 02:11:07,248 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(1019): stopping server
proserpina.apache.org,51019,1426903864687; all regions closed.
2015-03-21 02:11:07,248 DEBUG [RS:0;proserpina:51019] wal.FSHLog(1147): Closing
WAL writer in
/user/jenkins/test-data/94a25112-a109-49f6-9edd-039180512bc6/WALs/proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:07,273 INFO [IPC Server handler 4 on 42912]
blockmanagement.BlockManager(2383): BLOCK* addStoredBlock: blockMap updated:
127.0.0.1:54374 is added to
blk_1073741830_1006{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1,
replicas=[ReplicaUnderConstruction[[DISK]DS-a2d75141-06d2-4e32-a0f7-999efc9c94fc:NORMAL|RBW]]}
size 83
2015-03-21 02:11:07,279 DEBUG [RS:0;proserpina:51019] ipc.AsyncRpcClient(288):
Stopping async HBase RPC client
2015-03-21 02:11:07,281 INFO [RS:0;proserpina:51019] regionserver.Leases(146):
RS:0;proserpina:51019 closing leases
2015-03-21 02:11:07,281 INFO [RS:0;proserpina:51019] regionserver.Leases(149):
RS:0;proserpina:51019 closed leases
2015-03-21 02:11:07,282 INFO [RS:0;proserpina:51019] hbase.ChoreService(303):
Chore service for: proserpina.apache.org,51019,1426903864687 had
[java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@47ce7f8f,
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@20ac9761,
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@23067209]
on shutdown
2015-03-21 02:11:07,285 DEBUG
[RpcServer.reader=1,bindAddress=proserpina.apache.org,port=38530]
ipc.RpcServer$Listener(841): RpcServer.listener,port=38530: DISCONNECTING
client 67.195.81.189:42362 because read count=-1. Number of active connections:
1
2015-03-21 02:11:07,286 DEBUG [AsyncRpcChannel-pool2-t1]
ipc.AsyncRpcChannel$8(574): IPC Client (-1955567686) connection to
proserpina.apache.org/67.195.81.189:38530 from jenkins.hfs.0: closed
2015-03-21 02:11:07,737 INFO
[proserpina.apache.org,38530,1426903862856_splitLogManager__ChoreService_1]
hbase.ScheduledChore(179): Chore: SplitLogManager Timeout Monitor was stopped
2015-03-21 02:11:07,840 INFO
[master/proserpina.apache.org/67.195.81.189:0.logRoller]
regionserver.LogRoller(159): LogRoller exiting.
2015-03-21 02:11:07,841 INFO [M:0;proserpina:38530]
regionserver.CompactSplitThread(393): Waiting for Split Thread to finish...
2015-03-21 02:11:07,841 INFO [M:0;proserpina:38530]
regionserver.CompactSplitThread(393): Waiting for Merge Thread to finish...
2015-03-21 02:11:07,841 INFO [M:0;proserpina:38530]
regionserver.CompactSplitThread(393): Waiting for Large Compaction Thread to
finish...
2015-03-21 02:11:07,841 INFO [M:0;proserpina:38530]
regionserver.CompactSplitThread(393): Waiting for Small Compaction Thread to
finish...
2015-03-21 02:11:07,847 INFO
[master/proserpina.apache.org/67.195.81.189:0.leaseChecker]
regionserver.Leases(146):
master/proserpina.apache.org/67.195.81.189:0.leaseChecker closing leases
2015-03-21 02:11:07,847 INFO
[master/proserpina.apache.org/67.195.81.189:0.leaseChecker]
regionserver.Leases(149):
master/proserpina.apache.org/67.195.81.189:0.leaseChecker closed leases
2015-03-21 02:11:07,853 DEBUG [M:0;proserpina:38530] master.HMaster(1045):
Stopping service threads
2015-03-21 02:11:07,853 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): master:38530-0x14c3a1908520000,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeDeleted, state=SyncConnected,
path=/hbase/replication/rs/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:07,864 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): regionserver:51019-0x14c3a1908520001,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeDeleted, state=SyncConnected, path=/hbase/master
2015-03-21 02:11:07,866 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): master:38530-0x14c3a1908520000,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeDeleted, state=SyncConnected, path=/hbase/master
2015-03-21 02:11:07,866 INFO [M:0;proserpina:38530]
client.ConnectionManager$HConnectionImplementation(1637): Closing zookeeper
sessionid=0x14c3a1908520004
2015-03-21 02:11:07,867 DEBUG [main-EventThread] zookeeper.ZKUtil(488):
regionserver:51019-0x14c3a1908520001, quorum=localhost:55239, baseZNode=/hbase
Set watcher on znode that does not yet exist, /hbase/master
2015-03-21 02:11:07,867 DEBUG [main-EventThread] zookeeper.ZKUtil(488):
master:38530-0x14c3a1908520000, quorum=localhost:55239, baseZNode=/hbase Set
watcher on znode that does not yet exist, /hbase/master
2015-03-21 02:11:07,872 DEBUG [M:0;proserpina:38530] ipc.AsyncRpcClient(288):
Stopping async HBase RPC client
2015-03-21 02:11:07,873 INFO [M:0;proserpina:38530] hbase.ChoreService(303):
Chore service for: proserpina.apache.org,38530,1426903862856_splitLogManager_
had [] on shutdown
2015-03-21 02:11:07,874 INFO [M:0;proserpina:38530]
flush.MasterFlushTableProcedureManager(78): stop: server shutting down.
2015-03-21 02:11:07,875 INFO [M:0;proserpina:38530] ipc.RpcServer(2169):
Stopping server on 38530
2015-03-21 02:11:07,876 INFO [RpcServer.listener,port=38530]
ipc.RpcServer$Listener(745): RpcServer.listener,port=38530: stopping
2015-03-21 02:11:07,876 INFO [RpcServer.responder]
ipc.RpcServer$Responder(987): RpcServer.responder: stopped
2015-03-21 02:11:07,876 INFO [RpcServer.responder]
ipc.RpcServer$Responder(890): RpcServer.responder: stopping
2015-03-21 02:11:07,891 INFO
[regionserver/proserpina.apache.org/67.195.81.189:0.leaseChecker]
regionserver.Leases(146):
regionserver/proserpina.apache.org/67.195.81.189:0.leaseChecker closing leases
2015-03-21 02:11:07,892 INFO
[regionserver/proserpina.apache.org/67.195.81.189:0.leaseChecker]
regionserver.Leases(149):
regionserver/proserpina.apache.org/67.195.81.189:0.leaseChecker closed leases
2015-03-21 02:11:07,891 INFO
[regionserver/proserpina.apache.org/67.195.81.189:0.logRoller]
regionserver.LogRoller(159): LogRoller exiting.
2015-03-21 02:11:07,893 INFO [RS:0;proserpina:51019]
regionserver.CompactSplitThread(393): Waiting for Split Thread to finish...
2015-03-21 02:11:07,893 INFO [RS:0;proserpina:51019]
regionserver.CompactSplitThread(393): Waiting for Merge Thread to finish...
2015-03-21 02:11:07,893 INFO [RS:0;proserpina:51019]
regionserver.CompactSplitThread(393): Waiting for Large Compaction Thread to
finish...
2015-03-21 02:11:07,893 INFO [RS:0;proserpina:51019]
regionserver.CompactSplitThread(393): Waiting for Small Compaction Thread to
finish...
2015-03-21 02:11:07,897 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): regionserver:51019-0x14c3a1908520001,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeDeleted, state=SyncConnected,
path=/hbase/rs/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:07,898 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): master:38530-0x14c3a1908520000,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeDeleted, state=SyncConnected,
path=/hbase/rs/proserpina.apache.org,38530,1426903862856
2015-03-21 02:11:07,898 INFO [main-EventThread]
zookeeper.RegionServerTracker(118): RegionServer ephemeral node deleted,
processing expiration [proserpina.apache.org,38530,1426903862856]
2015-03-21 02:11:07,898 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): regionserver:51019-0x14c3a1908520001,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2015-03-21 02:11:07,898 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): master:38530-0x14c3a1908520000,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2015-03-21 02:11:07,906 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(1063): stopping server
proserpina.apache.org,38530,1426903862856; zookeeper connection closed.
2015-03-21 02:11:07,906 INFO [M:0;proserpina:38530]
regionserver.HRegionServer(1066): M:0;proserpina:38530 exiting
2015-03-21 02:11:07,923 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): regionserver:51019-0x14c3a1908520001,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeDeleted, state=SyncConnected,
path=/hbase/replication/rs/proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:07,923 INFO [RS:0;proserpina:51019] ipc.RpcServer(2169):
Stopping server on 51019
2015-03-21 02:11:07,924 INFO [RpcServer.responder]
ipc.RpcServer$Responder(987): RpcServer.responder: stopped
2015-03-21 02:11:07,925 INFO [RpcServer.responder]
ipc.RpcServer$Responder(890): RpcServer.responder: stopping
2015-03-21 02:11:07,928 INFO [RpcServer.listener,port=51019]
ipc.RpcServer$Listener(745): RpcServer.listener,port=51019: stopping
2015-03-21 02:11:07,936 DEBUG [main-EventThread]
zookeeper.ZooKeeperWatcher(388): regionserver:51019-0x14c3a1908520001,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event,
type=NodeDeleted, state=SyncConnected,
path=/hbase/rs/proserpina.apache.org,51019,1426903864687
2015-03-21 02:11:07,952 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(1063): stopping server
proserpina.apache.org,51019,1426903864687; zookeeper connection closed.
2015-03-21 02:11:07,954 INFO [RS:0;proserpina:51019]
regionserver.HRegionServer(1066): RS:0;proserpina:51019 exiting
2015-03-21 02:14:25,110 ERROR [main] hbase.MiniHBaseCluster(229): Error
starting cluster
java.lang.RuntimeException: Master not initialized after 200000ms seconds
at org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:225)
at org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:437)
at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:224)
at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:93)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1008)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:968)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:842)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:836)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:780)
at org.apache.hadoop.hbase.security.access.TestAccessController.setupBeforeClass(TestAccessController.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runners.Suite.runChild(Suite.java:127)
at org.junit.runners.Suite.runChild(Suite.java:26)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at org.junit.runner.JUnitCore.run(JUnitCore.java:138)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:77)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:53)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:144)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
2015-03-21 02:14:25,112 DEBUG [main] util.JVMClusterUtil(241): Shutting down
HBase Cluster
2015-03-21 02:14:25,113 DEBUG [main] coprocessor.CoprocessorHost(269): Stop
coprocessor org.apache.hadoop.hbase.security.access.AccessController
2015-03-21 02:14:25,113 DEBUG [main] coprocessor.CoprocessorHost(269): Stop
coprocessor
org.apache.hadoop.hbase.security.access.SecureTestUtil$MasterSyncObserver
2015-03-21 02:14:25,115 WARN [main] zookeeper.RecoverableZooKeeper(276):
Possibly transient ZooKeeper, quorum=localhost:55239,
exception=org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for /hbase/running
2015-03-21 02:14:26,115 WARN [main] zookeeper.RecoverableZooKeeper(276):
Possibly transient ZooKeeper, quorum=localhost:55239,
exception=org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for /hbase/running
2015-03-21 02:14:28,116 WARN [main] zookeeper.RecoverableZooKeeper(276):
Possibly transient ZooKeeper, quorum=localhost:55239,
exception=org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for /hbase/running
2015-03-21 02:14:32,119 WARN [main] zookeeper.RecoverableZooKeeper(276):
Possibly transient ZooKeeper, quorum=localhost:55239,
exception=org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for /hbase/running
2015-03-21 02:14:40,122 WARN [main] zookeeper.RecoverableZooKeeper(276):
Possibly transient ZooKeeper, quorum=localhost:55239,
exception=org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for /hbase/running
2015-03-21 02:14:40,126 ERROR [main] zookeeper.RecoverableZooKeeper(278):
ZooKeeper delete failed after 4 attempts
2015-03-21 02:14:40,126 ERROR [main] master.HMaster(1877): ZooKeeper exception
trying to set cluster as down in ZK
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode =
Session expired for /hbase/running
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:179)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1341)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:1330)
at org.apache.hadoop.hbase.zookeeper.ClusterStatusTracker.setClusterDown(ClusterStatusTracker.java:82)
at org.apache.hadoop.hbase.master.HMaster.shutdown(HMaster.java:1875)
at org.apache.hadoop.hbase.util.JVMClusterUtil.shutdown(JVMClusterUtil.java:254)
at org.apache.hadoop.hbase.LocalHBaseCluster.shutdown(LocalHBaseCluster.java:444)
at org.apache.hadoop.hbase.MiniHBaseCluster.shutdown(MiniHBaseCluster.java:517)
at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:230)
at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:93)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1008)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:968)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:842)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:836)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:780)
at org.apache.hadoop.hbase.security.access.TestAccessController.setupBeforeClass(TestAccessController.java:191)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runners.Suite.runChild(Suite.java:127)
at org.junit.runners.Suite.runChild(Suite.java:26)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at org.junit.runner.JUnitCore.run(JUnitCore.java:138)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:77)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:53)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:144)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
2015-03-21 02:14:40,129 INFO [main] util.JVMClusterUtil(317): Shutdown of 1
master(s) and 1 regionserver(s) complete
2015-03-21 02:14:40,129 INFO [main] hbase.HBaseTestingUtility(1068): Shutting
down minicluster
2015-03-21 02:14:40,135 INFO [main] zookeeper.MiniZooKeeperCluster(319):
Shutdown MiniZK cluster with all ZK servers
2015-03-21 02:14:40,136 WARN [main] datanode.DirectoryScanner(375):
DirectoryScanner: shutdown has been called
2015-03-21 02:14:40,154 INFO [main] log.Slf4jLog(67): Stopped
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2015-03-21 02:14:40,232 DEBUG
[proserpina:38530.activeMasterManager-EventThread]
zookeeper.ZooKeeperWatcher(388): replicationLogCleaner-0x14c3a1908520005,
quorum=localhost:55239, baseZNode=/hbase Received ZooKeeper Event, type=None,
state=Disconnected, path=null
2015-03-21 02:14:40,232 DEBUG
[proserpina:38530.activeMasterManager-EventThread]
zookeeper.ZooKeeperWatcher(477): replicationLogCleaner-0x14c3a1908520005,
quorum=localhost:55239, baseZNode=/hbase Received Disconnected from ZooKeeper,
ignoring
2015-03-21 02:14:40,258 WARN [DataNode:
[[[DISK]file:/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/dfscluster_d80b1d01-c629-432f-832e-eb812b7810fa/dfs/data/data1/,
[DISK]file:/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/dfscluster_d80b1d01-c629-432f-832e-eb812b7810fa/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:42912] datanode.BPServiceActor(722):
BPOfferService for Block pool BP-27234177-67.195.81.189-1426903856367 (Datanode
Uuid bf0194c5-f830-4a3c-a9e4-a3e5516d3196) service to localhost/127.0.0.1:42912
interrupted
2015-03-21 02:14:40,259 WARN [DataNode:
[[[DISK]file:/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/dfscluster_d80b1d01-c629-432f-832e-eb812b7810fa/dfs/data/data1/,
[DISK]file:/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/target/test-data/5d031ef7-820c-47c8-9f14-20a54526f48a/dfscluster_d80b1d01-c629-432f-832e-eb812b7810fa/dfs/data/data2/]]
heartbeating to localhost/127.0.0.1:42912] datanode.BPServiceActor(845):
Ending block pool service for: Block pool
BP-27234177-67.195.81.189-1426903856367 (Datanode Uuid
bf0194c5-f830-4a3c-a9e4-a3e5516d3196) service to localhost/127.0.0.1:42912
2015-03-21 02:14:40,297 WARN
[org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor@10c8414a]
blockmanagement.DecommissionManager$Monitor(78): Monitor interrupted:
java.lang.InterruptedException: sleep interrupted
2015-03-21 02:14:40,353 INFO [main] log.Slf4jLog(67): Stopped
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2015-03-21 02:14:40,547 INFO [main] hbase.HBaseTestingUtility(1081):
Minicluster is down
{code}
> Fix the critical ancient loopholes in security testing infrastructure.
> ----------------------------------------------------------------------
>
> Key: HBASE-13294
> URL: https://issues.apache.org/jira/browse/HBASE-13294
> Project: HBase
> Issue Type: Bug
> Reporter: Srikanth Srungarapu
> Assignee: Srikanth Srungarapu
> Attachments: HBASE-13294.patch, HBASE-13294_v2.patch,
> HBASE-13294_v3.patch, HBASE-13294_v3.patch, HBASE-13294_v4.patch
>
>
> Unfortunately, the "verifyDenied" method doesn't fail when action parameter
> returns null. The relevant code snippet
> {code}
> try {
>   Object obj = user.runAs(action);
>   if (requireException) {
>     fail("Expected exception was not thrown for user '" +
>       user.getShortName() + "'");
>   }
>   if (obj != null && obj instanceof List<?>) {
>     List<?> results = (List<?>) obj;
>     if (results != null && !results.isEmpty()) {
>       fail("Unexpected results for user '" + user.getShortName() + "'");
>     }
>   }
> }
> {code}
> As you can see, when {{obj}} is null, the method returns silently instead of
> failing.
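A minimal, standalone sketch of how the null case could be handled (the class and helper name below are mine for illustration, not the actual patch): a null result from {{user.runAs(action)}} means the action completed without throwing, so it should be treated as an unexpected success rather than falling through silently.

```java
import java.util.Collections;
import java.util.List;

public class VerifyDeniedSketch {

    // Returns true when the action's outcome should make a denial check fail:
    // the action completed (even with a null result) instead of being denied.
    static boolean isUnexpectedSuccess(Object obj) {
        if (obj == null) {
            return true; // the original code fell through silently here
        }
        if (obj instanceof List<?>) {
            List<?> results = (List<?>) obj;
            // an empty result list is consistent with access being denied
            return !results.isEmpty();
        }
        return true; // any other non-null result is also an unexpected success
    }

    public static void main(String[] args) {
        System.out.println(isUnexpectedSuccess(null));                        // true
        System.out.println(isUnexpectedSuccess(Collections.emptyList()));     // false
        System.out.println(isUnexpectedSuccess(Collections.singletonList("row"))); // true
    }
}
```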
> Fixing this issue has uncovered another major bug. While constructing
> actions, we're using TEST_UTIL.getConnection(), which replaces the "doAs"
> user with the user who initiated the connection. I really am grateful to
> [~mbertozzi] without whom debugging this would have been a nightmare.
> Now, fixing these two issues has uncovered more issues in our tests :). The
> main one: the code allows the table owner to truncate the table, but the test
> does not. We should either remove the code that grants the owner this ability
> or document that the table owner can truncate the table.
> The other minor issues include granting permissions on a namespace, but then
> checking whether the user can access tables inside a different namespace.
> That's it, folks!
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)