Hi Ted,

I am using HBase version hbase-0.98.6.1-hadoop2 and Hadoop version hadoop-2.5.1.
The actual error is:

hbase(main):001:0> *list_peers*
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/beeshma/hbase-0.98.6.1-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/beeshma/hadoop-2.5.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

*ERROR: Missing pb magic PBUF prefix*

Here is some help for this command:
List all replication peer clusters.

  hbase> list_peers

Thanks
Beeshma

On Sat, Oct 10, 2015 at 5:15 AM, Ted Yu <[email protected]> wrote:

> The exception was due to un-protobuf'ed data in the peer state znode.
>
> Which release of HBase are you using?
>
> Consider posting the question on the NGDATA forum.
>
> Cheers
>
> > On Oct 10, 2015, at 3:24 AM, beeshma r <[email protected]> wrote:
> >
> > Hi
> >
> > I created a Solr index using *HBase-indexer (NGDATA/hbase-indexer)*
> > <https://github.com/NGDATA/hbase-indexer/wiki>.
> > After that, the RegionServer failed due to the error below:
> >
> > 2015-10-08 09:33:17,115 INFO [regionserver60020-SendThread(localhost:2181)] zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15048410a180007, negotiated timeout = 90000
> > 2015-10-08 09:33:17,120 INFO [regionserver60020] regionserver.HRegionServer: STOPPED: Failed initialization
> > 2015-10-08 09:33:17,122 ERROR [regionserver60020] regionserver.HRegionServer: Failed init
> > java.io.IOException: Failed replication handler create
> >   at org.apache.hadoop.hbase.replication.regionserver.Replication.initialize(Replication.java:125)
> >   at org.apache.hadoop.hbase.regionserver.HRegionServer.newReplicationInstance(HRegionServer.java:2427)
> >   at org.apache.hadoop.hbase.regionserver.HRegionServer.createNewReplicationInstance(HRegionServer.java:2397)
> >   at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1529)
> >   at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1286)
> >   at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:862)
> >   at java.lang.Thread.run(Thread.java:724)
> > Caused by: org.apache.hadoop.hbase.replication.ReplicationException: Error connecting to peer with id=Indexer_myindexer1
> >   at org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.connectToPeer(ReplicationPeersZKImpl.java:248)
> >   at org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.connectExistingPeers(ReplicationPeersZKImpl.java:416)
> >   at org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.init(ReplicationPeersZKImpl.java:103)
> >   at org.apache.hadoop.hbase.replication.regionserver.Replication.initialize(Replication.java:120)
> >   ... 6 more
> > Caused by: org.apache.hadoop.hbase.replication.ReplicationException: Error starting the peer state tracker for peerId=Indexer_myindexer1
> >   at org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getPeer(ReplicationPeersZKImpl.java:513)
> >   at org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.connectToPeer(ReplicationPeersZKImpl.java:246)
> >   ... 9 more
> > Caused by: org.apache.zookeeper.KeeperException$DataInconsistencyException: KeeperErrorCode = DataInconsistency
> >   at org.apache.hadoop.hbase.zookeeper.ZKUtil.convert(ZKUtil.java:1859)
> >   at org.apache.hadoop.hbase.replication.ReplicationPeer.startStateTracker(ReplicationPeer.java:102)
> >   at org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.getPeer(ReplicationPeersZKImpl.java:511)
> >   ... 10 more
> > Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException: Missing pb magic PBUF prefix
> >   at org.apache.hadoop.hbase.protobuf.ProtobufUtil.expectPBMagicPrefix(ProtobufUtil.java:256)
> >   at org.apache.hadoop.hbase.replication.ReplicationPeer.parseStateFrom(ReplicationPeer.java:304)
> >   at org.apache.hadoop.hbase.replication.ReplicationPeer.isStateEnabled(ReplicationPeer.java:293)
> >   at org.apache.hadoop.hbase.replication.ReplicationPeer.readPeerStateZnode(ReplicationPeer.java:107)
> >   at org.apache.hadoop.hbase.replication.ReplicationPeer.startStateTracker(ReplicationPeer.java:100)
> >   ... 11 more
> >
> > *This is my hbase-site.xml configuration:*
> >
> > <?xml version="1.0"?>
> > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> > <configuration>
> >   <property>
> >     <name>hbase.master</name>
> >     <value>master:9000</value>
> >   </property>
> >   <property>
> >     <name>hbase.rootdir</name>
> >     <value>hdfs://localhost:9000/hbase</value>
> >   </property>
> >   <property>
> >     <name>hbase.zookeeper.property.dataDir</name>
> >     <value>/home/beeshma/zookeeper</value>
> >   </property>
> >   <property>
> >     <name>hbase.cluster.distributed</name>
> >     <value>true</value>
> >   </property>
> >   <!-- SEP is basically replication, so enable it -->
> >   <property>
> >     <name>hbase.replication</name>
> >     <value>true</value>
> >   </property>
> >   <!-- Source ratio of 100% makes sure that each SEP consumer is actually
> >        used (otherwise, some can sit idle, especially with small clusters) -->
> >   <property>
> >     <name>replication.source.ratio</name>
> >     <value>1.0</value>
> >   </property>
> >   <!-- Maximum number of hlog entries to replicate in one go. If this is
> >        large, and a consumer takes a while to process the events, the
> >        HBase rpc call will time out. -->
> >   <property>
> >     <name>replication.source.nb.capacity</name>
> >     <value>1000</value>
> >   </property>
> >   <!-- A custom replication source that fixes a few things and adds
> >        some functionality (doesn't interfere with normal replication
> >        usage). -->
> >   <property>
> >     <name>replication.replicationsource.implementation</name>
> >     <value>com.ngdata.sep.impl.SepReplicationSource</value>
> >   </property>
> > </configuration>
> >
> > May I know what could be the remedy for this?
> >
> > Thanks
> > Beeshma
> --
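To make Ted's diagnosis concrete: since HBase 0.96, protobuf-serialized znode payloads are prepended with the 4-byte magic `PBUF`, and `ProtobufUtil.expectPBMagicPrefix` (seen in the stack trace) rejects any peer-state znode that lacks it, such as a plain-text value left behind by an older writer. The following Python sketch is illustrative only, not HBase source; the payload bytes are invented for demonstration:

```python
# Illustrative sketch of HBase's PB-magic check (the real code is Java,
# in ProtobufUtil.expectPBMagicPrefix). Payload bytes below are made up.

PB_MAGIC = b"PBUF"  # 4-byte prefix HBase puts before protobuf-serialized znode data

def expect_pb_magic_prefix(data: bytes) -> bytes:
    """Return the protobuf payload, or raise if the magic prefix is missing."""
    if not data.startswith(PB_MAGIC):
        # Mirrors DeserializationException: "Missing pb magic PBUF prefix"
        raise ValueError("Missing pb magic PBUF prefix")
    return data[len(PB_MAGIC):]

# A peer-state znode written by HBase 0.96+ parses fine:
print(expect_pb_magic_prefix(b"PBUF\x08\x00"))  # b'\x08\x00'

# A plain-text value written without the prefix does not:
try:
    expect_pb_magic_prefix(b"ENABLED")
except ValueError as e:
    print(e)  # Missing pb magic PBUF prefix
```

This is why both `list_peers` in the shell and replication initialization in the RegionServer fail with the same message: both read the same peer-state znode.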

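One commonly suggested remedy (an assumption on my part, not confirmed in this thread; back up your ZooKeeper data first) is to inspect the peer's znode and, if it holds non-protobuf data, delete it so the RegionServer can initialize, then re-create the indexer. A sketch using `hbase zkcli`; the peer id `Indexer_myindexer1` is taken from the stack trace, and the paths assume the default `zookeeper.znode.parent` of `/hbase`:

```shell
# Build the znode paths for the failing peer (id taken from the stack trace).
PEER_ID="Indexer_myindexer1"
PEER_ZNODE="/hbase/replication/peers/${PEER_ID}"
STATE_ZNODE="${PEER_ZNODE}/peer-state"
echo "${STATE_ZNODE}"

# The inspection/removal itself needs a running cluster, so it is commented out:
# hbase zkcli get "${STATE_ZNODE}"   # healthy data starts with the 4 bytes 'PBUF'
# hbase zkcli rmr "${PEER_ZNODE}"    # drop the corrupt peer, then re-add the indexer
```

After removing the corrupt peer, restarting the RegionServer and re-registering the indexer with hbase-indexer should recreate the znode in the protobuf format.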