Re: Issue in upgrading phoenix : java.lang.ArrayIndexOutOfBoundsException: SYSTEM:CATALOG 63

2018-10-17 Thread Tanvi Bhandari
Hi Jaanai Zhang,

When you say migrate the data, do you mean somehow exporting the data from
the Phoenix tables (Phoenix 4.6) and bulk-inserting it into new Phoenix
tables (Phoenix 4.14)?
Do you have a data migration script or anything similar that I could use?
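
For example, is it something along the lines of the rough sketch below? It
just dumps a table to CSV over JDBC from the old cluster; the JDBC URL, table
name, and output file are only placeholders, and the CSV would then still
have to be loaded into the 4.14 cluster (for instance with psql.py or the
CsvBulkLoadTool).

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class ExportTableToCsv {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:old-zk-quorum:2181";   // 4.6 cluster (placeholder)
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM MY_TABLE");  // placeholder table
             PrintWriter out = new PrintWriter("my_table.csv", "UTF-8")) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    if (i > 1) row.append(',');
                    row.append(rs.getString(i));           // naive: no quoting or escaping
                }
                out.println(row);
            }
        }
    }
}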

Thanks,
Tanvi


On Wed, Oct 17, 2018 at 5:41 PM Jaanai Zhang  wrote:

> It seems it is impossible to upgrade directly from Phoenix-4.6 to
> Phoenix-4.14; the schema of the SYSTEM tables has changed and some features
> may be incompatible. Maybe you can migrate the data from Phoenix-4.6 to
> Phoenix-4.14; that approach should ensure everything works correctly.
>
> 
>Jaanai Zhang
>Best regards!
>
>
>
> Tanvi Bhandari wrote on Wednesday, October 17, 2018 at 3:48 PM:
>
>> @Shamvenk
>>
>> Yes, I did check the STATS table from the hbase shell; it's not empty.
>>
>> After dropping all SYSTEM tables and mapping the hbase tables to Phoenix
>> tables by executing all the DDLs, I am seeing a new issue.
>>
>> I have a table and an index on that table. The number of records in the
>> index table and the main table no longer match.
>> select count(*) from "my_index";
>> select count(COL) from "my_table"; -- where COL is not part of the index.
>>
>> Can someone tell me what can be done here? Is there any easier way to
>> upgrade from Phoenix-4.6 to Phoenix-4.14?
>>
>>
>>
>> On Thu, Sep 13, 2018 at 8:55 PM venk sham  wrote:
>>
>>> Did you check SYSTEM.STATS? If it is empty, it needs to be rebuilt by
>>> running a major compaction on HBase.
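>>>
>>> For example, something roughly like this sketch using the HBase Admin API
>>> (the ZooKeeper quorum is a placeholder, and the stats table may appear as
>>> SYSTEM:STATS instead if namespace mapping is enabled); the same thing can
>>> also be done from the hbase shell with the major_compact command:
>>>
>>> import org.apache.hadoop.conf.Configuration;
>>> import org.apache.hadoop.hbase.HBaseConfiguration;
>>> import org.apache.hadoop.hbase.TableName;
>>> import org.apache.hadoop.hbase.client.Admin;
>>> import org.apache.hadoop.hbase.client.Connection;
>>> import org.apache.hadoop.hbase.client.ConnectionFactory;
>>>
>>> public class CompactStats {
>>>     public static void main(String[] args) throws Exception {
>>>         Configuration conf = HBaseConfiguration.create();
>>>         conf.set("hbase.zookeeper.quorum", "zk-quorum");  // placeholder
>>>         try (Connection conn = ConnectionFactory.createConnection(conf);
>>>              Admin admin = conn.getAdmin()) {
>>>             // request a major compaction of the Phoenix stats table
>>>             admin.majorCompact(TableName.valueOf("SYSTEM.STATS"));
>>>         }
>>>     }
>>> }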
>>>
>>> On Tue, Sep 11, 2018, 11:33 AM Tanvi Bhandari 
>>> wrote:
>>>
 Hi,



 I am trying to upgrade the Phoenix binaries in my setup from
 phoenix-4.6 (where a schema was optional) to phoenix-4.14 (where a schema
 is mandatory).

 Earlier, I had the phoenix-4.6-hbase-1.1 binaries. When I run
 phoenix-4.14-hbase-1.3 on the same data, HBase comes up fine, but when I
 try to connect to Phoenix using the sqlline client, I get the following
 error on the *console*:



 18/09/07 04:22:48 WARN ipc.CoprocessorRpcChannel: Call failed on
 IOException

 org.apache.hadoop.hbase.DoNotRetryIOException:
 org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63

 at
 org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)

 at
 org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3572)

 at
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16422)

 at
 org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)

 at
 org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)

 at
 org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)

 at
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)

 at
 org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)

 at
 org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)

 at
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)

 at
 org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)

 at java.lang.Thread.run(Thread.java:745)

 Caused by: java.lang.ArrayIndexOutOfBoundsException: 63

 at
 org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)

 at
 org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)

 at
 org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)

 at
 org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1046)

 at
 org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:587)

at
 org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1305)

 at
 org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3568)

 ... 10 more



 at
 sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

 at
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

 at
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

 at
 java.lang.reflect.Constructor.newInstance(Constructor.java:423)

 at
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)

 at
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)

Re: Encountering BufferUnderflowException when querying from Phoenix

2018-10-17 Thread William Shen
Thanks, Jaanai.

At first we thought it was a data issue too, but when we restored the table
from a snapshot to a separate schema on the same cluster to triage, the
exception no longer happens... Does that give any further clue about what
the issue might have been?

0: jdbc:phoenix:journalnode,test> SELECT A, B, C, D  FROM SCHEMA.TABLE
 where A = 13100423;

java.nio.BufferUnderflowException

at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)

at java.nio.ByteBuffer.get(ByteBuffer.java:715)

at
org.apache.phoenix.schema.types.PArrayDataType.createPhoenixArray(PArrayDataType.java:1028)

at
org.apache.phoenix.schema.types.PArrayDataType.toObject(PArrayDataType.java:375)

at
org.apache.phoenix.schema.types.PVarcharArray.toObject(PVarcharArray.java:65)

at org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1011)

at
org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:75)

at
org.apache.phoenix.jdbc.PhoenixResultSet.getString(PhoenixResultSet.java:609)

at sqlline.Rows$Row.<init>(Rows.java:183)

at sqlline.BufferedRows.<init>(BufferedRows.java:38)

at sqlline.SqlLine.print(SqlLine.java:1660)

at sqlline.Commands.execute(Commands.java:833)

at sqlline.Commands.sql(Commands.java:732)

at sqlline.SqlLine.dispatch(SqlLine.java:813)

at sqlline.SqlLine.begin(SqlLine.java:686)

at sqlline.SqlLine.start(SqlLine.java:398)

at sqlline.SqlLine.main(SqlLine.java:291)



0: jdbc:phoenix:journalnode,test> SELECT A, B, C, D  FROM SCHEMA.CORRUPTION
where A = 13100423;

+---+++-+

|A | B  | C  |D |

+---+++-+

| 13100423  | 5159   | 7  | ['female']  |

+---+++-+

1 row selected (1.76 seconds)

On Sun, Oct 14, 2018 at 8:39 PM Jaanai Zhang  wrote:

> It looks like a bug where the remaining bytes in the ByteBuffer are fewer
> than the length being read. Maybe there is a problem with the position of
> the ByteBuffer or with the length of the target byte array.
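>
> For illustration, the failure mode would be something like this minimal
> sketch (the buffer and array sizes here are made up):
>
> import java.nio.ByteBuffer;
>
> public class Underflow {
>     public static void main(String[] args) {
>         ByteBuffer buf = ByteBuffer.allocate(4);
>         buf.put(new byte[4]);
>         buf.flip();                       // 4 bytes remaining
>         byte[] target = new byte[8];
>         buf.get(target);                  // asks for 8 -> BufferUnderflowException
>     }
> }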
>
> 
>Jaanai Zhang
>Best regards!
>
>
>
> William Shen wrote on Friday, October 12, 2018 at 11:53 PM:
>
>> Hi all,
>>
>> We are running Phoenix 4.13, and periodically we would encounter the
>> following exception when querying from Phoenix in our staging environment.
>> Initially, we thought we had some incompatible client version connecting
>> and creating data corruption, but after ensuring that we are only
>> connecting with 4.13 clients, we still see this issue come up from time to
>> time. So far, fortunately, since it is in staging, we are able to identify
>> and delete the data to restore service.
>>
>> However, I would like to ask for guidance on what else we could look for
>> to identify the cause of this exception. Could this perhaps be caused by
>> something other than data corruption?
>>
>> Thanks in advance!
>>
>> The exception looks like:
>>
>> 18/10/12 15:45:58 WARN scheduler.TaskSetManager: Lost task 32.2 in stage
>> 14.0 (TID 1275, ...datanode..., executor 82):
>> java.nio.BufferUnderflowException
>>
>> at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)
>>
>> at java.nio.ByteBuffer.get(ByteBuffer.java:715)
>>
>> at
>> org.apache.phoenix.schema.types.PArrayDataType.createPhoenixArray(PArrayDataType.java:1028)
>>
>> at
>> org.apache.phoenix.schema.types.PArrayDataType.toObject(PArrayDataType.java:375)
>>
>> at
>> org.apache.phoenix.schema.types.PVarcharArray.toObject(PVarcharArray.java:65)
>>
>> at org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1011)
>>
>> at
>> org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:75)
>>
>> at
>> org.apache.phoenix.jdbc.PhoenixResultSet.getObject(PhoenixResultSet.java:525)
>>
>> at
>> org.apache.phoenix.spark.PhoenixRecordWritable$$anonfun$readFields$1.apply$mcVI$sp(PhoenixRecordWritable.scala:96)
>>
>> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>>
>> at
>> org.apache.phoenix.spark.PhoenixRecordWritable.readFields(PhoenixRecordWritable.scala:93)
>>
>> at
>> org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:168)
>>
>> at
>> org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:174)
>>
>> at
>> org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>>
>> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>>
>> at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1596)
>>
>> at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1157)
>>
>> at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1157)
>>
>> at
>> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1870)
>>
>> at
>> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1870)
>>
>> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
>>
>> at org.apache.spark.scheduler.Task.run(Task.scala:89)
>>
>> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:229)
>>
>> at
>> 

Re: Phoenix metrics error on thin client

2018-10-17 Thread Josh Elser
The methods that you are invoking assume that the Phoenix JDBC driver
(the Java class org.apache.phoenix.jdbc.PhoenixDriver) is in use. It's
not, so you get this error.


The Phoenix "thick" JDBC driver is what's running inside of the Phoenix 
Query Server, just not in your local JVM. As such, you need to look at 
PQS for metrics.


You probably want to look at what was done in 
https://issues.apache.org/jira/browse/PHOENIX-3655.
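
If you want your client code to degrade gracefully when it isn't running
against the thick driver, one option is a guard along the lines of the sketch
below. It reuses the getRequestReadMetricInfo call from your snippet, so it
assumes a 4.x thick-driver client jar is on the classpath; with the thin
(Avatica) client it simply returns null, and you would look at PQS-side
metrics instead.

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Map;
import org.apache.phoenix.jdbc.PhoenixResultSet;
import org.apache.phoenix.monitoring.MetricType;
import org.apache.phoenix.util.PhoenixRuntime;

public class MetricsHelper {
    // Returns request-level read metrics when the ResultSet is backed by the
    // thick driver; returns null for the thin client, where the metrics live
    // on the Phoenix Query Server side instead.
    static Map<String, Map<MetricType, Long>> readMetricsIfAvailable(ResultSet rs)
            throws SQLException {
        if (rs.isWrapperFor(PhoenixResultSet.class)) {
            return PhoenixRuntime.getRequestReadMetricInfo(rs);
        }
        return null;
    }
}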


On 10/16/18 2:49 PM, Monil Gandhi wrote:

Hello,
I am trying to collect some metrics on certain queries. Here is the code 
that I have


Properties props = new Properties();
props.setProperty(QueryServices.COLLECT_REQUEST_LEVEL_METRICS, "true");
props.setProperty("phoenix.trace.frequency", "always");

try (Connection conn = DriverManager.getConnection(url, props)) {
    conn.setAutoCommit(true);

    PreparedStatement stmt = conn.prepareStatement(query);

    Map<MetricType, Long> overAllQueryMetrics = null;
    Map<String, Map<MetricType, Long>> requestReadMetrics = null;
    try (ResultSet rs = stmt.executeQuery()) {
        rs.next();
        requestReadMetrics = PhoenixRuntime.getRequestReadMetricInfo(rs);
        // log or report metrics as needed
        PhoenixRuntime.resetMetrics(rs);
        rs.close();
    }
}


However, rs.next() throws the following error:
java.sql.SQLException: does not implement 'class 
org.apache.phoenix.jdbc.PhoenixResultSet'


I am not sure why the error is happening. Are metrics not supported with
the thin client?


If not, how do I get query-level metrics?

Thanks


Re: Issue in upgrading phoenix : java.lang.ArrayIndexOutOfBoundsException: SYSTEM:CATALOG 63

2018-10-17 Thread Jaanai Zhang
It seems it is impossible to upgrade directly from Phoenix-4.6 to
Phoenix-4.14; the schema of the SYSTEM tables has changed and some features
may be incompatible. Maybe you can migrate the data from Phoenix-4.6 to
Phoenix-4.14; that approach should ensure everything works correctly.


   Jaanai Zhang
   Best regards!



Tanvi Bhandari wrote on Wednesday, October 17, 2018 at 3:48 PM:

> @Shamvenk
>
> Yes, I did check the STATS table from the hbase shell; it's not empty.
>
> After dropping all SYSTEM tables and mapping the hbase tables to Phoenix
> tables by executing all the DDLs, I am seeing a new issue.
>
> I have a table and an index on that table. The number of records in the
> index table and the main table no longer match.
> select count(*) from "my_index";
> select count(COL) from "my_table"; -- where COL is not part of the index.
>
> Can someone tell me what can be done here? Is there any easier way to
> upgrade from Phoenix-4.6 to Phoenix-4.14?
>
>
>
> On Thu, Sep 13, 2018 at 8:55 PM venk sham  wrote:
>
> Did you check SYSTEM.STATS? If it is empty, it needs to be rebuilt by
> running a major compaction on HBase.
>>
>> On Tue, Sep 11, 2018, 11:33 AM Tanvi Bhandari 
>> wrote:
>>
>>> Hi,
>>>
>>>
>>>
>>> I am trying to upgrade the Phoenix binaries in my setup from phoenix-4.6
>>> (where a schema was optional) to phoenix-4.14 (where a schema is
>>> mandatory).
>>>
>>> Earlier, I had the phoenix-4.6-hbase-1.1 binaries. When I run
>>> phoenix-4.14-hbase-1.3 on the same data, HBase comes up fine, but when I
>>> try to connect to Phoenix using the sqlline client, I get the following
>>> error on the *console*:
>>>
>>>
>>>
>>> 18/09/07 04:22:48 WARN ipc.CoprocessorRpcChannel: Call failed on
>>> IOException
>>>
>>> org.apache.hadoop.hbase.DoNotRetryIOException:
>>> org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
>>>
>>> at
>>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
>>>
>>> at
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3572)
>>>
>>> at
>>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16422)
>>>
>>> at
>>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>>
>>> at
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
>>>
>>> at
>>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>>
>>> at
>>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
>>>
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>>
>>> at
>>> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>>
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>>
>>> at
>>> org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>>
>>> at java.lang.Thread.run(Thread.java:745)
>>>
>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
>>>
>>> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
>>>
>>> at
>>> org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
>>>
>>> at
>>> org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
>>>
>>> at
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1046)
>>>
>>> at
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:587)
>>>
>>>at
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1305)
>>>
>>> at
>>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3568)
>>>
>>> ... 10 more
>>>
>>>
>>>
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>>
>>> at
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>>
>>> at
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>
>>> at
>>> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>>>
>>> at
>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>>>
>>> at
>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>>>
>>> at
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:326)
>>>
>>> at
>>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1629)
>>>
>>> at
>>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104)
>>>
>>> at
>>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94)
>>>
>>> 

Re: Issue in upgrading phoenix : java.lang.ArrayIndexOutOfBoundsException: SYSTEM:CATALOG 63

2018-10-17 Thread Tanvi Bhandari
@Shamvenk

Yes, I did check the STATS table from the hbase shell; it's not empty.

After dropping all SYSTEM tables and mapping the hbase tables to Phoenix
tables by executing all the DDLs, I am seeing a new issue.

I have a table and an index on that table. The number of records in the
index table and the main table no longer match.
select count(*) from "my_index";
select count(COL) from "my_table"; -- where COL is not part of the index.

Can someone tell me what can be done here? Is there any easier way to
upgrade from Phoenix-4.6 to Phoenix-4.14?
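
For reference, the comparison I am running, plus the full index rebuild I am
considering if the counts keep diverging, looks roughly like the sketch below
(the connection string and COL are placeholders; ALTER INDEX ... REBUILD is
the standard Phoenix statement for a full rebuild):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CheckAndRebuildIndex {
    public static void main(String[] args) throws SQLException {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zk-quorum:2181")) {  // placeholder
            long indexRows = count(conn, "SELECT COUNT(*) FROM \"my_index\"");
            long tableRows = count(conn, "SELECT COUNT(COL) FROM \"my_table\"");
            System.out.println("index=" + indexRows + ", table=" + tableRows);
            if (indexRows != tableRows) {
                try (Statement stmt = conn.createStatement()) {
                    // rebuild the index from the data table
                    stmt.execute("ALTER INDEX \"my_index\" ON \"my_table\" REBUILD");
                }
            }
        }
    }

    private static long count(Connection conn, String sql) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            rs.next();
            return rs.getLong(1);
        }
    }
}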



On Thu, Sep 13, 2018 at 8:55 PM venk sham  wrote:

> Did you check SYSTEM.STATS? If it is empty, it needs to be rebuilt by
> running a major compaction on HBase.
>
> On Tue, Sep 11, 2018, 11:33 AM Tanvi Bhandari 
> wrote:
>
>> Hi,
>>
>>
>>
>> I am trying to upgrade the Phoenix binaries in my setup from phoenix-4.6
>> (where a schema was optional) to phoenix-4.14 (where a schema is
>> mandatory).
>>
>> Earlier, I had the phoenix-4.6-hbase-1.1 binaries. When I run
>> phoenix-4.14-hbase-1.3 on the same data, HBase comes up fine, but when I
>> try to connect to Phoenix using the sqlline client, I get the following
>> error on the *console*:
>>
>>
>>
>> 18/09/07 04:22:48 WARN ipc.CoprocessorRpcChannel: Call failed on
>> IOException
>>
>> org.apache.hadoop.hbase.DoNotRetryIOException:
>> org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
>>
>> at
>> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
>>
>> at
>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3572)
>>
>> at
>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16422)
>>
>> at
>> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7435)
>>
>> at
>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1875)
>>
>> at
>> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1857)
>>
>> at
>> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
>>
>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
>>
>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>>
>> at
>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>>
>> at
>> org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>>
>> at java.lang.Thread.run(Thread.java:745)
>>
>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
>>
>> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
>>
>> at
>> org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
>>
>> at
>> org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
>>
>> at
>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1046)
>>
>> at
>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:587)
>>
>>at
>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1305)
>>
>> at
>> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3568)
>>
>> ... 10 more
>>
>>
>>
>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>>
>> at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>
>> at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>
>> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>>
>> at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>>
>> at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>>
>> at
>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:326)
>>
>> at
>> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1629)
>>
>> at
>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104)
>>
>> at
>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94)
>>
>> at
>> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
>>
>> at
>> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:107)
>>
>> at
>> org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
>>
>> at
>> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.getVersion(MetaDataProtos.java:16739)
>>
>> at
>>