Re: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-13 Thread 倪项菲


Hi Ankit,

    I have put phoenix-4.14.0-HBase-1.2-server.jar in /opt/hbase-1.2.6/lib. Is this correct?
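A quick way to double-check that placement (a minimal sketch; /opt/hbase-1.2.6 is the path from this thread, and the hostnames below are placeholders for your HMaster and RegionServer nodes):

    # confirm the server jar is present on every HBase node
    for host in hmaster01 regionserver01 regionserver02; do    # placeholder hostnames
        ssh "$host" 'ls -l /opt/hbase-1.2.6/lib/phoenix-4.14.0-HBase-1.2-server.jar'
    done
    # confirm HBase actually puts it on its classpath
    /opt/hbase-1.2.6/bin/hbase classpath | tr ':' '\n' | grep -i phoenix
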
From: Ankit Singhal
Date: 2018/08/14 (Tuesday) 02:25
To: user;
Subject: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

Skipping sanity checks may destabilize functionality that Phoenix relies on. The 
SplitPolicy should have been loaded to prevent splitting of the SYSTEM.CATALOG 
table, so to actually fix the issue, please check that you have the right 
phoenix-server.jar in the HBase classpath:

"Unable to load configured region split policy 
'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set 
hbase.table.sanity.checks to false at conf or table descriptor if you want to 
bypass sanity checks"

Regards,
Ankit Singhal
On Sun, Aug 12, 2018 at 6:46 PM, 倪项菲  wrote:




Thanks all.

At last I set hbase.table.sanity.checks to false in hbase-site.xml and restarted 
the HBase cluster; it works.
From: Josh Elser
Date: 2018/08/07 (Tuesday) 20:58
To: user;
Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6



"Phoenix-server" refers to the phoenix-$VERSION-server.jar that is 
either included in the binary tarball or is generated by the official 
source-release.

"Deploying" it means copying the jar to $HBASE_HOME/lib.

On 8/6/18 9:56 PM, 倪项菲 wrote:
> 
> Hi Zhang Yun,
>     the link you mentioned tells us to add the Phoenix jar to the HBase
> lib directory; it doesn't tell us how to deploy the Phoenix server.
> 
> From: Jaanai Zhang <mailto:cloud.pos...@gmail.com>
> Date: 2018/08/07 (Tuesday) 09:36
> To: user <mailto:user@phoenix.apache.org>;
> Subject: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
> with hbase 1.2.6
> 
> reference link: http://phoenix.apache.org/installation.html
> 
> 
> 
> Yun Zhang
> Best regards!
> 
> 
> 2018-08-07 9:30 GMT+08:00 倪项菲  <mailto:nixiangfei_...@chinamobile.com>>:
> 
> Hi Zhang Yun,
>     how do I deploy the Phoenix server? I only have the information
> from the Phoenix website; it doesn't mention the Phoenix server.
> 
> From: Jaanai Zhang <mailto:cloud.pos...@gmail.com>
> Date: 2018/08/07 (Tuesday) 09:16
> To: user <mailto:user@phoenix.apache.org>;
> Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
> with hbase 1.2.6
> 
> Please ensure your Phoenix server has been deployed and HBase has been restarted.
> 
> 
> 
> Yun Zhang
> Best regards!
> 
> 
> 2018-08-07 9:10 GMT+08:00 倪项菲  <mailto:nixiangfei_...@chinamobile.com>>:
> 
> 
> Hi Experts,
>     I am using HBase 1.2.6. The cluster is working well with
> HMaster HA, but when we integrate Phoenix with HBase it
> fails. Below are the steps:
>     1. Download apache-phoenix-4.14.0-HBase-1.2-bin from
> http://phoenix.apache.org, then copy the tar file to the HMaster
> and unpack it.
>     2. Copy phoenix-core-4.14.0-HBase-1.2.jar and
> phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes, including
> HMaster and HRegionServer, and put them in $HBASE_HOME/lib
> (my path is /opt/hbase-1.2.6/lib).
>     3. Restart the HBase cluster.
>     4. Start to use Phoenix, but it returns the error below:
> [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py
> 
> plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
> Setting property: [incremental, false]
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none
> none org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to
> 
> jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> 
> [jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> 
> [jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings
> <http://www.slf4j.org/codes.html#multiple_bindings> for an
> explanation.
> 18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java
> classes where applicable
>

Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-13 Thread Ankit Singhal
Skipping sanity checks may destabilize functionality that Phoenix
relies on. The SplitPolicy should have been loaded to prevent splitting of the
SYSTEM.CATALOG table, so to actually fix the issue, please check that you have
the right phoenix-server.jar in the HBase classpath:

"Unable to load configured region split policy
'org.apache.phoenix.schema.MetaDataSplitPolicy'
for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
or table descriptor if you want to bypass sanity checks"
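One hedged way to verify the point above (the jar path is the /opt/hbase-1.2.6 one used in this thread; adjust as needed): check that the server jar sitting in the HBase lib directory really contains the split policy class the error complains about.

    # list the class inside the deployed server jar; no output means the wrong jar was copied
    unzip -l /opt/hbase-1.2.6/lib/phoenix-4.14.0-HBase-1.2-server.jar \
        | grep 'org/apache/phoenix/schema/MetaDataSplitPolicy.class'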

Regards,
Ankit Singhal

On Sun, Aug 12, 2018 at 6:46 PM, 倪项菲  wrote:

> Thanks all.
>
> At last I set hbase.table.sanity.checks to false in hbase-site.xml and
> restarted the HBase cluster; it works.
>
>
>
> From: Josh Elser 
> Date: 2018/08/07 (Tuesday) 20:58
> To: user ;
> Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase
> 1.2.6
>
> "Phoenix-server" refers to the phoenix-$VERSION-server.jar that is
> either included in the binary tarball or is generated by the official
> source-release.
>
> "Deploying" it means copying the jar to $HBASE_HOME/lib.
>
> On 8/6/18 9:56 PM, 倪项菲 wrote:
> >
> > Hi Zhang Yun,
> > the link you mentioned tells us to add the Phoenix jar to the HBase
> > lib directory; it doesn't tell us how to deploy the Phoenix server.
> >
> > From: Jaanai Zhang <mailto:cloud.pos...@gmail.com>
> > Date: 2018/08/07 (Tuesday) 09:36
> > To: user <mailto:user@phoenix.apache.org>;
> > Subject: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
> > with hbase 1.2.6
> >
> > reference link: http://phoenix.apache.org/installation.html
> >
> >
> > 
> >Yun Zhang
> >Best regards!
> >
> >
> > 2018-08-07 9:30 GMT+08:00 倪项菲  > <mailto:nixiangfei_...@chinamobile.com>>:
> >
> > Hi Zhang Yun,
> > how do I deploy the Phoenix server? I only have the information
> > from the Phoenix website; it doesn't mention the Phoenix server.
> >
> > From: Jaanai Zhang <mailto:cloud.pos...@gmail.com>
> > Date: 2018/08/07 (Tuesday) 09:16
> > To: user <mailto:user@phoenix.apache.org>;
> > Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
> > with hbase 1.2.6
> >
> > Please ensure your Phoenix server has been deployed and HBase has been restarted.
> >
> >
> > 
> >Yun Zhang
> >Best regards!
> >
> >
> > 2018-08-07 9:10 GMT+08:00 倪项菲  > <mailto:nixiangfei_...@chinamobile.com>>:
> >
> >
> > Hi Experts,
> >     I am using HBase 1.2.6. The cluster is working well with
> > HMaster HA, but when we integrate Phoenix with HBase it
> > fails. Below are the steps:
> >     1. Download apache-phoenix-4.14.0-HBase-1.2-bin from
> > http://phoenix.apache.org, then copy the tar file to the HMaster
> > and unpack it.
> >     2. Copy phoenix-core-4.14.0-HBase-1.2.jar and
> > phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes, including
> > HMaster and HRegionServer, and put them in $HBASE_HOME/lib
> > (my path is /opt/hbase-1.2.6/lib).
> >     3. Restart the HBase cluster.
> >     4. Start to use Phoenix, but it returns the error below:
> > [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py
> > plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,
> plat-ecloud01-bigdata-zk03
> > Setting property: [incremental, false]
> > Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> > issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none
> > none org.apache.phoenix.jdbc.PhoenixDriver
> > Connecting to
> > jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-
> bigdata-zk02,plat-ecloud01-bigdata-zk03
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> > [jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-
> 4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> > [jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-
> log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings
> > <http://www.slf4j.org/codes.html#multiple_bindings> for an
> > explanation.
> > 18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load
> > native-hadoop library for your platform... using builtin-java
> > classes where applicable
> > Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to
> > load configured region split policy
> > 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table
> > 'SYSTEM.CATALOG' Set hbase.

Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-12 Thread 倪项菲


Thanks all.

At last I set hbase.table.sanity.checks to false in hbase-site.xml and restarted 
the HBase cluster; it works.
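Since bypassing the sanity check lets SYSTEM.CATALOG be created even when the Phoenix split policy class cannot be loaded, it may be worth confirming what the table actually ended up with. A rough sketch (the /opt/hbase-1.2.6 path is the one from this thread; the exact describe output depends on your HBase version):

    # show the SYSTEM.CATALOG descriptor; with the server jar in place it should
    # carry SPLIT_POLICY => 'org.apache.phoenix.schema.MetaDataSplitPolicy'
    echo "describe 'SYSTEM.CATALOG'" | /opt/hbase-1.2.6/bin/hbase shell
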
From: Josh Elser
Date: 2018/08/07 (Tuesday) 20:58
To: user;
Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

"Phoenix-server" refers to the phoenix-$VERSION-server.jar that is 
either included in the binary tarball or is generated by the official 
source-release.

"Deploying" it means copying the jar to $HBASE_HOME/lib.

On 8/6/18 9:56 PM, 倪项菲 wrote:
> 
> Hi Zhang Yun,
>     the link you mentioned tells us to add the Phoenix jar to the HBase
> lib directory; it doesn't tell us how to deploy the Phoenix server.
> 
> From: Jaanai Zhang <mailto:cloud.pos...@gmail.com>
> Date: 2018/08/07 (Tuesday) 09:36
> To: user <mailto:user@phoenix.apache.org>;
> Subject: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
> with hbase 1.2.6
> 
> reference link: http://phoenix.apache.org/installation.html
> 
> 
> 
> Yun Zhang
> Best regards!
> 
> 
> 2018-08-07 9:30 GMT+08:00 倪项菲  <mailto:nixiangfei_...@chinamobile.com>>:
> 
> Hi Zhang Yun,
>     how do I deploy the Phoenix server? I only have the information
> from the Phoenix website; it doesn't mention the Phoenix server.
> 
> From: Jaanai Zhang <mailto:cloud.pos...@gmail.com>
> Date: 2018/08/07 (Tuesday) 09:16
> To: user <mailto:user@phoenix.apache.org>;
> Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
> with hbase 1.2.6
> 
> Please ensure your Phoenix server has been deployed and HBase has been restarted.
> 
> 
> 
> Yun Zhang
> Best regards!
> 
> 
> 2018-08-07 9:10 GMT+08:00 倪项菲  <mailto:nixiangfei_...@chinamobile.com>>:
> 
> 
> Hi Experts,
>     I am using HBase 1.2.6. The cluster is working well with
> HMaster HA, but when we integrate Phoenix with HBase it
> fails. Below are the steps:
>     1. Download apache-phoenix-4.14.0-HBase-1.2-bin from
> http://phoenix.apache.org, then copy the tar file to the HMaster
> and unpack it.
>     2. Copy phoenix-core-4.14.0-HBase-1.2.jar and
> phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes, including
> HMaster and HRegionServer, and put them in $HBASE_HOME/lib
> (my path is /opt/hbase-1.2.6/lib).
>     3. Restart the HBase cluster.
>     4. Start to use Phoenix, but it returns the error below:
> [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py
> 
> plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
> Setting property: [incremental, false]
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none
> none org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to
> 
> jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> 
> [jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> 
> [jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings
> <http://www.slf4j.org/codes.html#multiple_bindings> for an
> explanation.
> 18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java
> classes where applicable
> Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to
> load configured region split policy
> 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table
> 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
> or table descriptor if you want to bypass sanity checks
>  at
> 
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
>  at
> 
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
>  at
> org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
>  at
> 
> org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
>  at
>  

Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-07 Thread Josh Elser
"Phoenix-server" refers to the phoenix-$VERSION-server.jar that is 
either included in the binary tarball or is generated by the official 
source-release.


"Deploying" it means copying the jar to $HBASE_HOME/lib.

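For concreteness, a minimal sketch of that deployment step (HBASE_HOME=/opt/hbase-1.2.6 and the node name are assumptions taken from this thread; repeat the copy for every HMaster and RegionServer):

    # copy the server jar from the unpacked Phoenix tarball into HBase's lib dir
    scp apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-server.jar \
        some-hbase-node:/opt/hbase-1.2.6/lib/          # "some-hbase-node" is a placeholder
    # restart HBase so the Phoenix coprocessors and split policy get loaded
    /opt/hbase-1.2.6/bin/stop-hbase.sh && /opt/hbase-1.2.6/bin/start-hbase.sh
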
On 8/6/18 9:56 PM, 倪项菲 wrote:


Hi Zhang Yun,
     the link you mentioned tells us to add the Phoenix jar to the HBase 
lib directory; it doesn't tell us how to deploy the Phoenix server.


From: Jaanai Zhang <mailto:cloud.pos...@gmail.com>
Date: 2018/08/07 (Tuesday) 09:36
To: user <mailto:user@phoenix.apache.org>;
Subject: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
with hbase 1.2.6

reference link: http://phoenix.apache.org/installation.html



    Yun Zhang
    Best regards!


2018-08-07 9:30 GMT+08:00 倪项菲 <mailto:nixiangfei_...@chinamobile.com>>:


Hi Zhang Yun,
     how do I deploy the Phoenix server? I only have the information
from the Phoenix website; it doesn't mention the Phoenix server.

From: Jaanai Zhang <mailto:cloud.pos...@gmail.com>
Date: 2018/08/07 (Tuesday) 09:16
To: user <mailto:user@phoenix.apache.org>;
Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
with hbase 1.2.6

Please ensure your Phoenix server has been deployed and HBase has been restarted.



    Yun Zhang
    Best regards!


2018-08-07 9:10 GMT+08:00 倪项菲 mailto:nixiangfei_...@chinamobile.com>>:


Hi Experts,
    I am using HBase 1.2.6. The cluster is working well with
HMaster HA, but when we integrate Phoenix with HBase it
fails. Below are the steps:
    1. Download apache-phoenix-4.14.0-HBase-1.2-bin from
http://phoenix.apache.org, then copy the tar file to the HMaster
and unpack it.
    2. Copy phoenix-core-4.14.0-HBase-1.2.jar and
phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes, including
HMaster and HRegionServer, and put them in $HBASE_HOME/lib
(my path is /opt/hbase-1.2.6/lib).
    3. Restart the HBase cluster.
    4. Start to use Phoenix, but it returns the error below:
[apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py

plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none
none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to

jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in

[jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in

[jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings
<http://www.slf4j.org/codes.html#multiple_bindings> for an
explanation.
18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load
native-hadoop library for your platform... using builtin-java
classes where applicable
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to
load configured region split policy
'org.apache.phoenix.schema.MetaDataSplitPolicy' for table
'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
or table descriptor if you want to bypass sanity checks
         at

org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
         at

org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
         at
org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
         at

org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
         at

org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
         at
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
         at

org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
         at
org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
         at java.lang.Thread.run(Thread.java:745)
(state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException:
org.apache.hadoop.hbase.DoNot

Re: RE: [External] Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread 倪项菲


Hi Lu Wei,

    Do you mean that I need to set

  <property>
    <name>hbase.table.sanity.checks</name>
    <value>false</value>
  </property>

in hbase-site.xml?
From: Lu, Wei
Date: 2018/08/07 (Tuesday) 09:33
To: user; Jaanai Zhang;
Subject: RE: [External] Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin 
with hbase 1.2.6

As the log says, you should "Set hbase.table.sanity.checks to false at conf or 
table descriptor if you want to bypass sanity checks".
From: 倪项菲 [mailto:nixiangfei_...@chinamobile.com] 
 Sent: Tuesday, August 7, 2018 9:30 AM
 To: Jaanai Zhang ; user 
 Subject: [External] Re: Re: error when using 
apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6   
Hi Zhang Yun,

    how do I deploy the Phoenix server? I only have the information from the Phoenix 
website; it doesn't mention the Phoenix server.
From: Jaanai Zhang
Date: 2018/08/07 (Tuesday) 09:16
To: user;
Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

Please ensure your Phoenix server has been deployed and HBase has been restarted.
   Yun Zhang
   Best regards!
2018-08-07 9:10 GMT+08:00  倪项菲 :  
Hi Experts,

    I am using HBase 1.2.6. The cluster is working well with HMaster HA, but when 
we integrate Phoenix with HBase it fails. Below are the steps:

    1. Download apache-phoenix-4.14.0-HBase-1.2-bin from 
http://phoenix.apache.org, then copy the tar file to the HMaster and unpack the 
file.

    2. Copy phoenix-core-4.14.0-HBase-1.2.jar and 
phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes including HMaster and 
HRegionServer, and put them in $HBASE_HOME/lib; my path is /opt/hbase-1.2.6/lib.

    3. Restart the HBase cluster.

    4. Start to use Phoenix, but it returns the error below:
  [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463

Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread Jaanai Zhang
reference link: http://phoenix.apache.org/installation.html
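For completeness, a sketch of what that installation page boils down to for this version (the archive URL layout is an assumption; any Apache mirror carrying Phoenix 4.14.0-HBase-1.2 should work):

    # download and unpack the binary distribution on the HMaster host
    wget https://archive.apache.org/dist/phoenix/apache-phoenix-4.14.0-HBase-1.2/bin/apache-phoenix-4.14.0-HBase-1.2-bin.tar.gz
    tar -xzf apache-phoenix-4.14.0-HBase-1.2-bin.tar.gz
    # then copy phoenix-4.14.0-HBase-1.2-server.jar into every node's HBase lib/ and restart HBase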



   Yun Zhang
   Best regards!


2018-08-07 9:30 GMT+08:00 倪项菲 :

> Hi Zhang Yun,
> how do I deploy the Phoenix server? I only have the information from
> the Phoenix website; it doesn't mention the Phoenix server.
>
>
>
>
> From: Jaanai Zhang 
> Date: 2018/08/07 (Tuesday) 09:16
> To: user ;
> Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase
> 1.2.6
>
> Please ensure your Phoenix server has been deployed and HBase has been restarted.
>
>
> 
>Yun Zhang
>Best regards!
>
>
> 2018-08-07 9:10 GMT+08:00 倪项菲 :
>
>>
>> Hi Experts,
>>     I am using HBase 1.2.6. The cluster is working well with HMaster
>> HA, but when we integrate Phoenix with HBase it fails. Below are the steps:
>>     1. Download apache-phoenix-4.14.0-HBase-1.2-bin from
>> http://phoenix.apache.org, then copy the tar file to the HMaster and unpack
>> the file.
>>     2. Copy phoenix-core-4.14.0-HBase-1.2.jar and
>> phoenix-4.14.0-HBase-1.2-server.jar
>> to all HBase nodes, including HMaster and HRegionServer, and put them in
>> $HBASE_HOME/lib; my path is /opt/hbase-1.2.6/lib.
>>     3. Restart the HBase cluster.
>>     4. Start to use Phoenix, but it returns the error below:
>>   [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py
>> plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-e
>> cloud01-bigdata-zk03
>> Setting property: [incremental, false]
>> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
>> issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none
>> org.apache.phoenix.jdbc.PhoenixDriver
>> Connecting to jdbc:phoenix:plat-ecloud01-bigdata-zk01,
>> plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in [jar:file:/opt/apache-phoenix-
>> 4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/or
>> g/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.6/sh
>> are/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/im
>> pl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> 18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes where
>> applicable
>> Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load
>> configured region split policy 
>> 'org.apache.phoenix.schema.MetaDataSplitPolicy'
>> for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
>> or table descriptor if you want to bypass sanity checks
>> at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionF
>> orFailure(HMaster.java:1754)
>> at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescr
>> iptor(HMaster.java:1615)
>> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.
>> java:1541)
>> at org.apache.hadoop.hbase.master.MasterRpcServices.createTable
>> (MasterRpcServices.java:463)
>> at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$
>> MasterService$2.callBlockingMethod(MasterProtos.java:55682)
>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:21
>> 96)
>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:
>> 112)
>> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExec
>> utor.java:133)
>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.ja
>> va:108)
>> at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
>> org.apache.phoenix.exception.PhoenixIOException:
>> org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured
>> region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for
>> table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or
>> table descriptor if you want to bypass sanity checks
>> at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionF
>> orFailure(HMaster.java:1754)
>> at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescr
>> iptor(HMaster.java:1615)
>> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.
>> java:1541)
>> at org.apache.hadoop.hbase.master.MasterRpcServices.createTable
>> (MasterRpcServices.java:463)
>> at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$
>> MasterService$2.callBlockingMethod(MasterPro

RE: [External] Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread Lu, Wei
As the log says, you should "Set hbase.table.sanity.checks to false at conf or 
table descriptor if you want to bypass sanity checks".
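A quick, hedged way to check whether that conf-level bypass is in place on the HMaster (the /opt/hbase-1.2.6 path is the one from this thread); note that bypassing the check only hides the missing phoenix-server.jar rather than fixing it:

    # show the property (and the line after it) if it is set in the master's config
    grep -A1 'hbase.table.sanity.checks' /opt/hbase-1.2.6/conf/hbase-site.xml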



From: 倪项菲 [mailto:nixiangfei_...@chinamobile.com]
Sent: Tuesday, August 7, 2018 9:30 AM
To: Jaanai Zhang ; user 
Subject: [External] Re: Re: error when using 
apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

Hi Zhang Yun,
how do I deploy the Phoenix server? I only have the information from the Phoenix 
website; it doesn't mention the Phoenix server.



From: Jaanai Zhang<mailto:cloud.pos...@gmail.com>
Date: 2018/08/07 (Tuesday) 09:16
To: user<mailto:user@phoenix.apache.org>;
Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6
Please ensure your Phoenix server has been deployed and HBase has been restarted.



   Yun Zhang
   Best regards!


2018-08-07 9:10 GMT+08:00 倪项菲 
mailto:nixiangfei_...@chinamobile.com>>:

Hi Experts,
    I am using HBase 1.2.6. The cluster is working well with HMaster HA, but when 
we integrate Phoenix with HBase it fails. Below are the steps:
    1. Download apache-phoenix-4.14.0-HBase-1.2-bin from 
http://phoenix.apache.org, then copy the tar file to the HMaster and unpack the 
file.
    2. Copy phoenix-core-4.14.0-HBase-1.2.jar and 
phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes including HMaster and 
HRegionServer, and put them in $HBASE_HOME/lib; my path is /opt/hbase-1.2.6/lib.
    3. Restart the HBase cluster.
    4. Start to use Phoenix, but it returns the error below:
  [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py 
plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none 
org.apache.phoenix.jdbc.PhoenixDriver
Connecting to 
jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured 
region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 
'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table 
descriptor if you want to bypass sanity checks
at 
org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
at 
org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region 
split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 
'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table 
descriptor if you want to bypass sanity checks
at 
org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
at 
org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.r

Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread 倪项菲


Hi Zhang Yun,

    how do I deploy the Phoenix server? I only have the information from the Phoenix 
website; it doesn't mention the Phoenix server.
From: Jaanai Zhang
Date: 2018/08/07 (Tuesday) 09:16
To: user;
Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

Please ensure your Phoenix server has been deployed and HBase has been restarted.
   Yun Zhang
   Best regards!

2018-08-07 9:10 GMT+08:00 倪项菲 :
Hi Experts,

    I am using HBase 1.2.6. The cluster is working well with HMaster HA, but when 
we integrate Phoenix with HBase it fails. Below are the steps:

    1. Download apache-phoenix-4.14.0-HBase-1.2-bin from 
http://phoenix.apache.org, then copy the tar file to the HMaster and unpack the 
file.

    2. Copy phoenix-core-4.14.0-HBase-1.2.jar and 
phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes including HMaster and 
HRegionServer, and put them in $HBASE_HOME/lib; my path is /opt/hbase-1.2.6/lib.

    3. Restart the HBase cluster.

    4. Start to use Phoenix, but it returns the error below:

  [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:745)

        at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1197)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)
        at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2717

Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread Jaanai Zhang
Please ensure your Phoenix server has been deployed and HBase has been restarted.
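One rough way to confirm that after the restart, assuming the /opt/hbase-1.2.6 layout mentioned later in this thread (log file names vary by installation and user):

    # did the RegionServers load anything from Phoenix after the restart?
    grep -i phoenix /opt/hbase-1.2.6/logs/hbase-*-regionserver-*.log | head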



   Yun Zhang
   Best regards!


2018-08-07 9:10 GMT+08:00 倪项菲 :

>
> Hi Experts,
> I am using HBase 1.2.6. The cluster is working well with HMaster HA, but
> when we integrate Phoenix with HBase it fails. Below are the steps:
> 1. Download apache-phoenix-4.14.0-HBase-1.2-bin from
> http://phoenix.apache.org, then copy the tar file to the HMaster and unpack
> the file.
> 2. Copy phoenix-core-4.14.0-HBase-1.2.jar and
> phoenix-4.14.0-HBase-1.2-server.jar
> to all HBase nodes, including HMaster and HRegionServer, and put them in
> $HBASE_HOME/lib; my path is /opt/hbase-1.2.6/lib.
> 3. Restart the HBase cluster.
> 4. Start to use Phoenix, but it returns the error below:
>   [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py
> plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-
> ecloud01-bigdata-zk03
> Setting property: [incremental, false]
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-
> bigdata-zk02,plat-ecloud01-bigdata-zk03
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/apache-phoenix-
> 4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/
> StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.6/
> share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/
> impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> 18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load
> configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy'
> for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
> or table descriptor if you want to bypass sanity checks
> at org.apache.hadoop.hbase.master.HMaster.
> warnOrThrowExceptionForFailure(HMaster.java:1754)
> at org.apache.hadoop.hbase.master.HMaster.
> sanityCheckTableDescriptor(HMaster.java:1615)
> at org.apache.hadoop.hbase.master.HMaster.createTable(
> HMaster.java:1541)
> at org.apache.hadoop.hbase.master.MasterRpcServices.
> createTable(MasterRpcServices.java:463)
> at org.apache.hadoop.hbase.protobuf.generated.
> MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(
> RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.
> java:108)
> at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException:
> Unable to load configured region split policy 
> 'org.apache.phoenix.schema.MetaDataSplitPolicy'
> for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
> or table descriptor if you want to bypass sanity checks
> at org.apache.hadoop.hbase.master.HMaster.
> warnOrThrowExceptionForFailure(HMaster.java:1754)
> at org.apache.hadoop.hbase.master.HMaster.
> sanityCheckTableDescriptor(HMaster.java:1615)
> at org.apache.hadoop.hbase.master.HMaster.createTable(
> HMaster.java:1541)
> at org.apache.hadoop.hbase.master.MasterRpcServices.
> createTable(MasterRpcServices.java:463)
> at org.apache.hadoop.hbase.protobuf.generated.
> MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(
> RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.
> java:108)
> at java.lang.Thread.run(Thread.java:745)
>
> at org.apache.phoenix.util.ServerUtil.parseServerException(
> ServerUtil.java:144)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.
> ensureTableCreated(ConnectionQueryServicesImpl.java:1197)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.
> createTable(ConnectionQueryServicesImpl.java:1491)
> at org.apache.phoenix.schema.MetaDataClient.createTableInternal(
> MetaDataClient.java:2717)
> at org.apache.phoenix.schema.MetaDataClient.createTable(
> MetaDataClient.java:1114)
> at org.apache.phoenix.compile.CreateTableCompiler$1.execute(
> CreateTableCompiler.java:192)
> at