Please see http://hbase.apache.org/book/node.management.html, especially section 15.3.1.1.
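That section describes the graceful decommission script; its usage is roughly as follows (a sketch from memory, so check the docs shipped with your version):

    ./bin/graceful_stop.sh HOSTNAME

The script disables the balancer, drains the regions off the host, and then stops the regionserver.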

Did you stop the DataNode on that server as well, or did you leave it running?
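
If the DataNode went down as well, the WAL write pipelines on the other regionservers that included that node would have had to replace it during pipeline recovery. On the HDFS client side that behavior is controlled by the replace-datanode-on-failure settings; here is a sketch of the relevant hdfs-site.xml entries (the values shown are the usual Hadoop 2.x defaults, not a recommendation for your cluster):

    <property>
      <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
      <value>true</value>
    </property>
    <property>
      <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
      <value>DEFAULT</value>
    </property>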

Cheers

On Jun 3, 2014, at 5:47 AM, Ian Brooks <[email protected]> wrote:

> Hi,
> 
> I've been working on testing the procedure for taking a node out of service 
> in our test cluster, and I've hit the same problem on both attempts so far: 
> once a regionserver has been shut down, 5-10 minutes later most of the 
> other regionservers crash with the stack trace below.
> 
> The procedure I'm using to take the first node out is (exact commands 
> sketched below):
> 
> 1. Turn off the balancer.
> 2. Use region_mover.rb to unload all regions from the target server.
> 3. Once all regions have been moved, stop the regionserver.
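> 
> For reference, the commands behind those steps look roughly like this on 
> my setup (script paths and TARGET_HOSTNAME are placeholders):
> 
>     # 1. disable the balancer from the hbase shell
>     echo 'balance_switch false' | ./bin/hbase shell
> 
>     # 2. move all regions off the target server
>     ./bin/hbase org.jruby.Main bin/region_mover.rb unload TARGET_HOSTNAME
> 
>     # 3. once the server is empty, stop the regionserver on that host
>     ./bin/hbase-daemon.sh stop regionserver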
> 
> I'm using HBase 0.96.1 on Hadoop 2.3.0. Any ideas why this is happening, 
> and how to stop it from happening?
> 
> 
> 2014-06-03 13:05:03,897 WARN  [ResponseProcessor for block 
> BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] 
> hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block 
> BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932
> java.io.EOFException: Premature EOF: no length prefix available
>        at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
>        at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
>        at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:721)
> 2014-06-03 13:05:03,898 WARN  [DataStreamer for file 
> /user/hbase/WALs/############,16020,1401716790638/############%2C16020%2C1401716790638.1401796562200
>  block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] 
> hdfs.DFSClient: Error Recovery for block 
> BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932 in pipeline 
> 10.143.38.117:50010, 10.143.38.116:50010, 10.143.38.100:50010: bad datanode 
> 10.143.38.117:50010
> 2014-06-03 13:05:03,915 WARN  [DataStreamer for file 
> /user/hbase/WALs/############,16020,1401716790638/############%2C16020%2C1401716790638.1401796562200
>  block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] 
> hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>        at 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
>        at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
>        at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
>        at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
>        at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>        at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:415)
>        at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
> 
>        at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>        at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>        at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
>        at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>        at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>        at java.lang.reflect.Method.invoke(Method.java:606)
>        at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>        at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>        at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>        at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>        at java.lang.reflect.Method.invoke(Method.java:606)
>        at 
> org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
>        at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
>        at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
>        at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
>        at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
>        at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> 2014-06-03 13:05:48,489 ERROR [RpcServer.handler=22,port=16020] wal.FSHLog: 
> syncer encountered error, will retry. txid=211
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>        [... same stack trace as the DataStreamer exception above ...]
> 2014-06-03 13:05:48,489 FATAL [RpcServer.handler=22,port=16020] wal.FSHLog: 
> Could not sync. Requesting roll of hlog
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>        [... same stack trace as above ...]
> 2014-06-03 13:05:48,490 DEBUG [regionserver16020.logRoller] 
> regionserver.LogRoller: HLog roll requested
> 2014-06-03 13:05:48,490 DEBUG [RpcServer.handler=22,port=16020] 
> regionserver.HRegion: rollbackMemstore rolled back 1 keyvalues from start:0 
> to end:1
> 2014-06-03 13:05:48,609 DEBUG [regionserver16020.logRoller] wal.FSHLog: 
> cleanupCurrentWriter  waiting for transactions to get synced  total 211 
> synced till here 210
> 2014-06-03 13:05:48,609 FATAL [regionserver16020.logRoller] wal.FSHLog: Could 
> not sync. Requesting roll of hlog
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>        [... same stack trace as above ...]
> 2014-06-03 13:05:48,609 ERROR [regionserver16020.logRoller] wal.FSHLog: 
> Failed close of HLog writer
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>        [... same stack trace as above ...]
> 2014-06-03 13:05:48,610 FATAL [regionserver16020.logRoller] 
> regionserver.HRegionServer: ABORTING region server 
> ############,16020,1401716790638: Failed log close in log roller
> org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: 
> #1401796562200
>        at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog.cleanupCurrentWriter(FSHLog.java:707)
>        at 
> org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:538)
>        at 
> org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:96)
>        at java.lang.Thread.run(Thread.java:744)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>        [... same stack trace as above ...]
> 2014-06-03 13:05:48,610 FATAL [regionserver16020.logRoller] 
> regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.security.access.AccessController, 
> org.apache.hadoop.hbase.security.token.TokenProvider]
> 2014-06-03 13:05:48,612 ERROR [RpcServer.handler=21,port=16020] wal.FSHLog: 
> syncer encountered error, will retry. txid=212
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>        [... same stack trace as above ...]
> 2014-06-03 13:05:48,612 FATAL [RpcServer.handler=21,port=16020] wal.FSHLog: 
> Could not sync. Requesting roll of hlog
> org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException):
>  0
>        [... same stack trace as above ...]
> 2014-06-03 13:05:48,612 DEBUG [RpcServer.handler=21,port=16020] 
> regionserver.HRegion: rollbackMemstore rolled back 1 keyvalues from start:0 
> to end:1
> 2014-06-03 13:05:48,624 INFO  [regionserver16020.logRoller] 
> regionserver.HRegionServer: STOPPED: Failed log close in log roller
> 2014-06-03 13:05:48,624 INFO  [regionserver16020.logRoller] 
> regionserver.LogRoller: LogRoller exiting.
> 2014-06-03 13:05:48,624 INFO  [regionserver16020] ipc.RpcServer: Stopping 
> server on 16020
> 2014-06-03 13:05:48,624 INFO  [RpcServer.handler=1,port=16020] ipc.RpcServer: 
> RpcServer.handler=1,port=16020: exiting
> 
> 
> -- 
> -Ian Brooks
> 
