There is an error in the documentation. The config setting should be as follows:
<property>
<name>hbase.regionserver.wal.codec</name>
<value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
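If you want to double-check the value before restarting, here is a minimal sketch (not from this thread; the class name CheckWalCodec and the invocation below are just illustrative assumptions). It simply tries to load the codec class on a region server node, so a misspelled package name or a missing Phoenix server jar shows up as a ClassNotFoundException instead of a failed region server start:

    // Hypothetical helper, not part of Phoenix or HBase: verify that the WAL
    // codec class named in hbase-site.xml actually resolves on this node's
    // HBase classpath before restarting the region server.
    public class CheckWalCodec {
        public static void main(String[] args) throws Exception {
            String codec = "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec";
            // Throws ClassNotFoundException if the class name is misspelled
            // (e.g. "regionsserver") or the Phoenix server jar is not on the classpath.
            Class.forName(codec);
            System.out.println("OK, codec class found: " + codec);
        }
    }

Assuming the Phoenix server jar is already in the HBase lib directory, you could compile and run it with the HBase classpath, e.g. "javac CheckWalCodec.java" followed by "java -cp .:$(hbase classpath) CheckWalCodec".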
From: Tu Pham Phuong <[email protected]>
Reply-To: <[email protected]>
Date: Tuesday, April 29, 2014 2:19 AM
To: user <[email protected]>
Subject: Phoenix add config then can not start Hbase region server
I am a new Phoenix user. Following this step, "Add the following configuration to hbase-site.xml on all HBase nodes, Master and Region Servers", from
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.1/bk_installing_manually_book/content/rpm-chap-phoenix.html
I added:
<property>
<name>hbase.regionserver.wal.codec</name>
<value>org.apache.hadoop.hbase.regionsserver.wal.IndexedWALEditCodec</value>
</property>
The region server cannot start and throws this exception:
2014-04-29 16:13:03,454 ERROR [Thread-19] hdfs.DFSClient: Failed to close file /apps/hbase/data/WALs/bd1.local,60020,1398762781286/bd1.local%2C60020%2C1398762781286.1398762783098
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /apps/hbase/data/WALs/bd1.local,60020,1398762781286/bd1.local%2C60020%2C1398762781286.1398762783098: File does not exist. [Lease. Holder: DFSClient_hb_rs_bd1.local,60020,1398762781286_2011427490_33, pendingcreates: 1]
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2946)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2766)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2674)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:584)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy18.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy18.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:272)
    at com.sun.proxy.$Proxy19.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1439)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1261)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525)
2014-04-29 16:13:03,455 INFO [Thread-11] regionserver.ShutdownHook: Shutdown hook finished.