Hi Ted, I attached the coprocessor using the shell. The coprocessor jar contains RowCountEndpoint + ExampleProtos (taken from the hbase-examples jar) — *do I need to add anything else in that jar?* The client has some debugging code plus the TestRowCountEndpoint code that I used.
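For context, the client-side call follows the usual Endpoint invocation pattern from TestRowCountEndpoint. This is a rough sketch of what my client does, not the exact code — the table name is from my setup and connection handling is simplified:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.coprocessor.Batch;
import org.apache.hadoop.hbase.coprocessor.example.generated.ExampleProtos;
import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;
import org.apache.hadoop.hbase.ipc.ServerRpcController;

import java.io.IOException;
import java.util.Map;

public class RowCountClient {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "author_30YR");
        try {
            final ExampleProtos.CountRequest request =
                ExampleProtos.CountRequest.getDefaultInstance();
            // Invoke the endpoint on every region (null start/end keys = whole table)
            Map<byte[], Long> results = table.coprocessorService(
                ExampleProtos.CountService.class, null, null,
                new Batch.Call<ExampleProtos.CountService, Long>() {
                    public Long call(ExampleProtos.CountService counter)
                            throws IOException {
                        ServerRpcController controller = new ServerRpcController();
                        BlockingRpcCallback<ExampleProtos.CountResponse> rpcCallback =
                            new BlockingRpcCallback<ExampleProtos.CountResponse>();
                        counter.getRowCount(controller, request, rpcCallback);
                        ExampleProtos.CountResponse response = rpcCallback.get();
                        if (controller.failedOnException()) {
                            throw controller.getFailedOn();
                        }
                        return response.hasCount() ? response.getCount() : 0L;
                    }
                });
            // Sum the per-region counts returned by each region server
            long total = 0;
            for (Long regionCount : results.values()) {
                total += regionCount;
            }
            System.out.println("Row count: " + total);
        } finally {
            table.close();
        }
    }
}
```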
disable 'author_30YR'
alter 'author_30YR', METHOD => 'table_att', 'coprocessor' => '/user/cloud/ICDS/CoPro/lib/RowCountCoPro0.095.jar|org.apache.hadoop.hbase.coprocessor.RowCountEndpoint|1001'
enable 'author_30YR'

On Mon, Sep 29, 2014 at 8:23 PM, Ted Yu <[email protected]> wrote:

> bq. rowcount endpoint and the Example protos
>
> Can you describe how you deployed the rowcount endpoint on regionservers?
>
> bq. want to utilize Bucket Cache of HBase
>
> You need 0.96+ in order to utilize Bucket Cache.
>
> Cheers
>
> On Mon, Sep 29, 2014 at 1:48 AM, Vikram Singh Chandel <[email protected]> wrote:
>
> > Hi
> >
> > We are trying to migrate to *HBase 0.98.1 (CDH 5.1.1)* from *0.94.6* to use *Bucket Cache + CoProcessor* and to check the performance improvement, but looking into the API I found that a lot has changed.
> >
> > I tried using the hbase-examples jar for the row count coprocessor. The coprocessor jar contains the *rowcount endpoint and the Example protos (do I need to add anything else?)*, and I used the TestRowCountEndpoint code as my client (I added a main method to call the coprocessor service).
> >
> > The table is split across 13 regions over a 4-node cluster (POC test cluster).
> >
> > Getting the following exceptions:
> >
> > *RS1 (Region Server 1)*
> > Unexpected throwable object
> > com.google.protobuf.UninitializedMessageException: Message missing required fields: count
> >   at com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
> >   at org.apache.hadoop.hbase.coprocessor.example.generated.ExampleProtos$CountResponse$Builder.build(ExampleProtos.java:684)
> >   at org.apache.hadoop.hbase.coprocessor.example.generated.ExampleProtos$CountResponse$Builder.build(ExampleProtos.java:628)
> >   at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5554)
> >   at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3300)
> >   at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3282)
> >   at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29501)
> >   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
> >   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
> >   at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> >   at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> >   at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> >   at java.lang.Thread.run(Thread.java:745)
> >
> > *RS2*
> > (responseTooSlow):
> > {"processingtimems":37683,"call":"ExecService(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$CoprocessorServiceRequest)","client":"10.206.55.0:53769","starttimems":1411725518216,"queuetimems":3,"class":"HRegionServer","responsesize":199,"method":"ExecService"}
> >
> > RpcServer.listener,port=60020: count of bytes read: 0
> > java.io.IOException: Connection reset by peer
> >   at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> >   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> >   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> >   at sun.nio.ch.IOUtil.read(IOUtil.java:197)
> >   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
> >   at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2229)
> >   at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1415)
> >   at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:790)
> >   at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:581)
> >   at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:556)
> >   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >   at java.lang.Thread.run(Thread.java:745)
> >
> > *RS3*
> > Scanner 69 lease expired on region imsf_urg_sep23_Author,,1411488149043.8b17b191ff617b79f0e47137d969cc7e.
> >
> > *We basically want to utilize the Bucket Cache of HBase, so if there's any version of HBase that has the older API + Bucket Cache, that would do for us, because right now we are stuck with this newer HBase version.*
> >
> > --
> > *Regards*
> >
> > *VIKRAM SINGH CHANDEL*

--
*Regards*

*VIKRAM SINGH CHANDEL*

Please do not print this email unless it is absolutely necessary. Reduce. Reuse. Recycle. Save our planet.
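PS: on the RS1 UninitializedMessageException — if I'm reading hbase-examples' Examples.proto correctly, CountResponse declares count as a *required* proto2 field, roughly like this (reproduced from memory, so double-check against your checkout):

```proto
message CountRequest {
}

message CountResponse {
  required int64 count = 1 [default = 0];
}

service CountService {
  rpc getRowCount(CountRequest) returns (CountResponse);
  rpc getKeyValueCount(CountRequest) returns (CountResponse);
}
```

With a required field, CountResponse.Builder.build() throws exactly this exception whenever the endpoint completes the RPC on a builder where setCount() was never called — for example if the server-side scan failed before the count was set. So the stack trace may be a symptom of the endpoint erroring out mid-scan rather than a protobuf packaging problem in the jar.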
