Thanks for the quick response! That seems to be working perfectly. It would be nice if this were included in the documentation since Cloudera installs are relatively common.
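For anyone finding this thread later, the build-and-install steps Andrew describes below might look roughly like this. This is only a sketch: the branch to build, the location of the produced server jar, and the CDH parcel paths are assumptions that vary by Phoenix version and CDH layout, so check your own install before copying anything.

```shell
# Build the CDH-compatible Phoenix fork (assumed default branch; pick the
# branch matching your Phoenix/CDH versions if the repo provides one).
git clone https://github.com/chiastic-security/phoenix-for-cloudera.git
cd phoenix-for-cloudera
mvn clean package -DskipTests

# Copy the resulting server jar into HBase's lib directory on the master and
# on every region server. The jar location and the parcel path below are
# assumptions for a typical Phoenix 4.x build on a CDH 5.4 parcel install.
cp phoenix-assembly/target/phoenix-*-server.jar \
   /opt/cloudera/parcels/CDH/lib/hbase/lib/

# Restart HBase so the coprocessor picks up the new classes.
sudo service hbase-master restart        # on the master node
sudo service hbase-regionserver restart  # on each region server
```

The key point is that the jar must come from this fork rather than a stock Apache Phoenix release, since CDH's HBase is not binary-compatible with upstream.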
Cheers,
--
Michael Mior
[email protected]

2015-11-03 10:39 GMT-05:00 Andrew Purtell <[email protected]>:

> Try building Phoenix from
> https://github.com/chiastic-security/phoenix-for-cloudera and install the
> resulting server jar. CDH includes some local changes to HBase which cause
> binary compatibility issues.
>
> > On Nov 3, 2015, at 7:09 AM, Michael Mior <[email protected]> wrote:
> >
> > I have a working HBase installation, but when I drop the Phoenix
> > server JAR on my master and region server, I get the error message
> > below.
> >
> > I'm using the Cloudera 5.4.8 packages of HBase 1.0.0. Currently I only
> > have a two-node installation with the YARN ResourceManager, the HDFS
> > NameNode, ZooKeeper, and the HBase master on one node, and the
> > NodeManager, HDFS DataNode, HBase region server, and Thrift server on
> > the other. (I'm using two nodes and no replication just to play around
> > with Phoenix.)
> >
> > I've tried both Phoenix 4.6.0 and 4.5.2 and I get the same error. Any
> > help figuring out what is going on would be greatly appreciated.
> > Thanks!
> >
> > 2015-11-03 09:31:38,695 ERROR
> > [B.defaultRpcServer.handler=4,queue=1,port=60020]
> > coprocessor.MetaDataEndpointImpl: createTable failed
> > java.lang.NoSuchMethodError:
> > org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
> >     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildDeletedTable(MetaDataEndpointImpl.java:973)
> >     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1049)
> >     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1223)
> >     at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11619)
> >     at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7060)
> >     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1746)
> >     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1728)
> >     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31447)
> >     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
> >     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> >     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >     at java.lang.Thread.run(Thread.java:745)
> > 2015-11-03 09:32:07,767 ERROR
> > [B.defaultRpcServer.handler=15,queue=0,port=60020]
> > coprocessor.MetaDataEndpointImpl: createTable failed
> > java.lang.NoSuchMethodError:
> > org.apache.hadoop.hbase.client.Scan.setRaw(Z)Lorg/apache/hadoop/hbase/client/Scan;
> >     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildDeletedTable(MetaDataEndpointImpl.java:973)
> >     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1049)
> >     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1223)
> >     at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11619)
> >     at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7060)
> >     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1746)
> >     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1728)
> >     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31447)
> >     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
> >     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
> >     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> >     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> >     at java.lang.Thread.run(Thread.java:745)
> >
> > Cheers,
> > --
> > Michael Mior
> > [email protected]
