Re: RE: [External] Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread 倪项菲


Hi Lu Wei,

 do you mean that I need to set

  <property>
    <name>hbase.table.sanity.checks</name>
    <value>false</value>
  </property>

in hbase-site.xml?

From: Lu, Wei
Date: 2018/08/07 (Tuesday) 09:33
To: user; Jaanai Zhang
Subject: RE: [External] Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

 

As the log indicates, you should 'Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks'.

From: 倪项菲 [mailto:nixiangfei_...@chinamobile.com] 
 Sent: Tuesday, August 7, 2018 9:30 AM
 To: Jaanai Zhang ; user 
 Subject: [External] Re: Re: error when using 
apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6   

  

 

Hi Zhang Yun,

How do I deploy the Phoenix server? I only have the information from the Phoenix website, and it doesn't mention a Phoenix server.

From: Jaanai Zhang
Date: 2018/08/07 (Tuesday) 09:16
To: user
Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6
 

 

Please ensure your Phoenix server was deployed and has been restarted.

   Yun Zhang
   Best regards!

2018-08-07 9:10 GMT+08:00  倪项菲 :  


Hi Experts,

I am using HBase 1.2.6; the cluster is working well with HMaster HA, but when we integrate Phoenix with HBase it fails. Below are the steps:

1. Download apache-phoenix-4.14.0-HBase-1.2-bin from http://phoenix.apache.org, then copy the tar file to the HMaster and unzip the file.

2. Copy phoenix-core-4.14.0-HBase-1.2.jar and phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes including HMaster and HRegionServer, putting them into hbasehome/lib; my path is /opt/hbase-1.2.6/lib.

3. Restart the HBase cluster.

4. Then start to use Phoenix, but it returns the error below:

 

[apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)


Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread Jaanai Zhang
reference link: http://phoenix.apache.org/installation.html



   Yun Zhang
   Best regards!


2018-08-07 9:30 GMT+08:00 倪项菲 :

> Hi Zhang Yun,
> how to deploy the Phoenix server?I just have the infomation from
> phoenix website,it doesn't mention the phoenix server
>
>
>
>
> From: Jaanai Zhang
> Date: 2018/08/07 (Tuesday) 09:16
> To: user
> Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase
> 1.2.6
>
> Please ensure your Phoenix server was deployed and had resarted
>
>
> 
>Yun Zhang
>Best regards!
>
>
> 2018-08-07 9:10 GMT+08:00 倪项菲 :
>
>>
>> Hi Experts,
>> I am using HBase 1.2.6,the cluster is working good with HMaster
>> HA,but when we integrate phoenix with hbase,it failed,below are the steps
>> 1,download apache-phoenix-4.14.0-HBase-1.2-bin from
>> http://phoenix.apache.org,the copy the tar file to the HMaster and unzip
>> the file
>> 2,copy phoenix-core-4.14.0-HBase-1.2.jar 
>> phoenix-4.14.0-HBase-1.2-server.jar
>> to all HBase nodes including HMaster and HRegionServer ,put them to
>> hbasehome/lib,my path is /opt/hbase-1.2.6/lib
>> 3,restart hbase cluster
>> 4,then start to use phoenix,but it return below error:
>>   [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py
>> plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-e
>> cloud01-bigdata-zk03
>> Setting property: [incremental, false]
>> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
>> issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none
>> org.apache.phoenix.jdbc.PhoenixDriver
>> Connecting to jdbc:phoenix:plat-ecloud01-bigdata-zk01,
>> plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in [jar:file:/opt/apache-phoenix-
>> 4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/or
>> g/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.6/sh
>> are/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/im
>> pl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> 18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes where
>> applicable
>> Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load
>> configured region split policy 
>> 'org.apache.phoenix.schema.MetaDataSplitPolicy'
>> for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
>> or table descriptor if you want to bypass sanity checks
>> at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionF
>> orFailure(HMaster.java:1754)
>> at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescr
>> iptor(HMaster.java:1615)
>> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.
>> java:1541)
>> at org.apache.hadoop.hbase.master.MasterRpcServices.createTable
>> (MasterRpcServices.java:463)
>> at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$
>> MasterService$2.callBlockingMethod(MasterProtos.java:55682)
>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:21
>> 96)
>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:
>> 112)
>> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExec
>> utor.java:133)
>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.ja
>> va:108)
>> at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
>> org.apache.phoenix.exception.PhoenixIOException:
>> org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured
>> region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for
>> table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or
>> table descriptor if you want to bypass sanity checks
>> at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionF
>> orFailure(HMaster.java:1754)
>> at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescr
>> iptor(HMaster.java:1615)
>> at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.
>> java:1541)
>> at org.apache.hadoop.hbase.master.MasterRpcServices.createTable
>> (MasterRpcServices.java:463)
>> at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$
>> MasterService$2.callBlockingMethod(MasterProtos.java:55682)
>> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:21
>> 96)
>> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:
>> 112)
>> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExec
>> utor.java:133)
>> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.ja
>> va:108)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> at org.apache.phoenix.util.ServerUtil.parseServerException(Serv
>> erUtil.java:144)
>>  

RE: [External] Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread Lu, Wei
As the log indicates, you should 'Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks'.



From: 倪项菲 [mailto:nixiangfei_...@chinamobile.com]
Sent: Tuesday, August 7, 2018 9:30 AM
To: Jaanai Zhang ; user 
Subject: [External] Re: Re: error when using 
apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

Hi Zhang Yun,
How do I deploy the Phoenix server? I only have the information from the Phoenix website, and it doesn't mention a Phoenix server.



From: Jaanai Zhang
Date: 2018/08/07 (Tuesday) 09:16
To: user
Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6
Please ensure your Phoenix server was deployed and has been restarted.



   Yun Zhang
   Best regards!


2018-08-07 9:10 GMT+08:00 倪项菲 <nixiangfei_...@chinamobile.com>:

Hi Experts,
I am using HBase 1.2.6; the cluster is working well with HMaster HA, but when we integrate Phoenix with HBase it fails. Below are the steps:
1. Download apache-phoenix-4.14.0-HBase-1.2-bin from http://phoenix.apache.org, then copy the tar file to the HMaster and unzip the file.
2. Copy phoenix-core-4.14.0-HBase-1.2.jar and phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes including HMaster and HRegionServer, putting them into hbasehome/lib; my path is /opt/hbase-1.2.6/lib.
3. Restart the HBase cluster.
4. Then start to use Phoenix, but it returns the error below:
  [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py 
plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none 
org.apache.phoenix.jdbc.PhoenixDriver
Connecting to 
jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured 
region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 
'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table 
descriptor if you want to bypass sanity checks
at 
org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
at 
org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region 
split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 
'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table 
descriptor if you want to bypass sanity checks
at 
org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)
at 
org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)

at 

Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread 倪项菲


Hi Zhang Yun,

How do I deploy the Phoenix server? I only have the information from the Phoenix website, and it doesn't mention a Phoenix server.


From: Jaanai Zhang
Date: 2018/08/07 (Tuesday) 09:16
To: user
Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

Please ensure your Phoenix server was deployed and has been restarted.

   Yun Zhang

   Best regards!



 


2018-08-07 9:10 GMT+08:00 倪项菲 :


Hi Experts,

I am using HBase 1.2.6; the cluster is working well with HMaster HA, but when we integrate Phoenix with HBase it fails. Below are the steps:

1. Download apache-phoenix-4.14.0-HBase-1.2-bin from http://phoenix.apache.org, then copy the tar file to the HMaster and unzip the file.

2. Copy phoenix-core-4.14.0-HBase-1.2.jar and phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes including HMaster and HRegionServer, putting them into hbasehome/lib; my path is /opt/hbase-1.2.6/lib.

3. Restart the HBase cluster.

4. Then start to use Phoenix, but it returns the error below:

  [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py 
plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03

Setting property: [incremental, false]

Setting property: [isolation, TRANSACTION_READ_COMMITTED]

issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none 
org.apache.phoenix.jdbc.PhoenixDriver

Connecting to 
jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in 
[jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable

Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured 
region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 
'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table 
descriptor if you want to bypass sanity checks

at 
org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)

at 
org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)

at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)

at 
org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)

at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)

at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)

org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region 
split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 
'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table 
descriptor if you want to bypass sanity checks

at 
org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)

at 
org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)

at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)

at 
org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)

at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)

at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

at java.lang.Thread.run(Thread.java:745)




at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1197)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)

at 
org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2717)

at 

Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread Jaanai Zhang
Please ensure your Phoenix server was deployed and has been restarted.



   Yun Zhang
   Best regards!


2018-08-07 9:10 GMT+08:00 倪项菲 :

>
> Hi Experts,
> I am using HBase 1.2.6,the cluster is working good with HMaster HA,but
> when we integrate phoenix with hbase,it failed,below are the steps
> 1,download apache-phoenix-4.14.0-HBase-1.2-bin from
> http://phoenix.apache.org,the copy the tar file to the HMaster and unzip
> the file
> 2,copy phoenix-core-4.14.0-HBase-1.2.jar 
> phoenix-4.14.0-HBase-1.2-server.jar
> to all HBase nodes including HMaster and HRegionServer ,put them to
> hbasehome/lib,my path is /opt/hbase-1.2.6/lib
> 3,restart hbase cluster
> 4,then start to use phoenix,but it return below error:
>   [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py
> plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-
> ecloud01-bigdata-zk03
> Setting property: [incremental, false]
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-
> bigdata-zk02,plat-ecloud01-bigdata-zk03
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/opt/apache-phoenix-
> 4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/
> StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.6/
> share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/
> impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> 18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load
> configured region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy'
> for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
> or table descriptor if you want to bypass sanity checks
> at org.apache.hadoop.hbase.master.HMaster.
> warnOrThrowExceptionForFailure(HMaster.java:1754)
> at org.apache.hadoop.hbase.master.HMaster.
> sanityCheckTableDescriptor(HMaster.java:1615)
> at org.apache.hadoop.hbase.master.HMaster.createTable(
> HMaster.java:1541)
> at org.apache.hadoop.hbase.master.MasterRpcServices.
> createTable(MasterRpcServices.java:463)
> at org.apache.hadoop.hbase.protobuf.generated.
> MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(
> RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.
> java:108)
> at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException:
> Unable to load configured region split policy 
> 'org.apache.phoenix.schema.MetaDataSplitPolicy'
> for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
> or table descriptor if you want to bypass sanity checks
> at org.apache.hadoop.hbase.master.HMaster.
> warnOrThrowExceptionForFailure(HMaster.java:1754)
> at org.apache.hadoop.hbase.master.HMaster.
> sanityCheckTableDescriptor(HMaster.java:1615)
> at org.apache.hadoop.hbase.master.HMaster.createTable(
> HMaster.java:1541)
> at org.apache.hadoop.hbase.master.MasterRpcServices.
> createTable(MasterRpcServices.java:463)
> at org.apache.hadoop.hbase.protobuf.generated.
> MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(
> RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.
> java:108)
> at java.lang.Thread.run(Thread.java:745)
>
> at org.apache.phoenix.util.ServerUtil.parseServerException(
> ServerUtil.java:144)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.
> ensureTableCreated(ConnectionQueryServicesImpl.java:1197)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.
> createTable(ConnectionQueryServicesImpl.java:1491)
> at org.apache.phoenix.schema.MetaDataClient.createTableInternal(
> MetaDataClient.java:2717)
> at org.apache.phoenix.schema.MetaDataClient.createTable(
> MetaDataClient.java:1114)
> at org.apache.phoenix.compile.CreateTableCompiler$1.execute(
> CreateTableCompiler.java:192)
> at 

error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-06 Thread 倪项菲

Hi Experts,

I am using HBase 1.2.6; the cluster is working well with HMaster HA, but when we integrate Phoenix with HBase it fails. Below are the steps:

1. Download apache-phoenix-4.14.0-HBase-1.2-bin from http://phoenix.apache.org, then copy the tar file to the HMaster and unzip the file.

2. Copy phoenix-core-4.14.0-HBase-1.2.jar and phoenix-4.14.0-HBase-1.2-server.jar to all HBase nodes including HMaster and HRegionServer, putting them into hbasehome/lib; my path is /opt/hbase-1.2.6/lib.

3. Restart the HBase cluster.

4. Then start to use Phoenix, but it returns the error below:

  [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py 
plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03

Setting property: [incremental, false]

Setting property: [isolation, TRANSACTION_READ_COMMITTED]

issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none none 
org.apache.phoenix.jdbc.PhoenixDriver

Connecting to 
jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in 
[jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable

Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured 
region split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 
'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table 
descriptor if you want to bypass sanity checks

at 
org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)

at 
org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)

at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)

at 
org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)

at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)

at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)

org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Unable to load configured region 
split policy 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 
'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf or table 
descriptor if you want to bypass sanity checks

at 
org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1754)

at 
org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1615)

at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)

at 
org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:463)

at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)

at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)

at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)

at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)

at java.lang.Thread.run(Thread.java:745)




at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1197)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)

at 
org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2717)

at 
org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)

at 
org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)

at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)

at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)

at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

at 

Re: Spark-Phoenix Plugin

2018-08-06 Thread James Taylor
For the UPSERTs on a PreparedStatement that are done by Phoenix for writing
in the Spark adapter, note that these are *not* doing RPCs to the HBase
server to write data (i.e. they are never committed). Instead the UPSERTs
are used to ensure that the correct serialization is performed given the
Phoenix schema. We use a PhoenixRuntime API to get the List from the
uncommitted data and then perform a rollback. Using this technique,
features like salting, column encoding, row timestamp, etc. will continue
to work with the Spark integration.
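
Roughly, that pattern looks like the sketch below. This is only a sketch: the JDBC URL, table, and columns are made-up example values, and it assumes PhoenixRuntime.getUncommittedDataIterator is the PhoenixRuntime API referred to above.

import java.sql.DriverManager
import org.apache.phoenix.util.PhoenixRuntime

// Serialize rows through Phoenix without committing, collect the KeyValues,
// then roll back so nothing is ever sent to the region servers.
val conn = DriverManager.getConnection("jdbc:phoenix:zk-host")
conn.setAutoCommit(false)
val stmt = conn.prepareStatement("UPSERT INTO DATAX (O_ORDERKEY, O_CUSTKEY) VALUES (?, ?)")
stmt.setString(1, "1")
stmt.setString(2, "42")
stmt.executeUpdate()                 // buffered client side only; no RPC to HBase

val it = PhoenixRuntime.getUncommittedDataIterator(conn)
while (it.hasNext) {
  val pair = it.next()               // (table name bytes, java.util.List[KeyValue])
  val tableName = pair.getFirst
  val keyValues = pair.getSecond     // the serialized cells for this row
}
conn.rollback()                      // discard the buffered mutations
conn.close()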

Thanks,
James

On Mon, Aug 6, 2018 at 7:44 AM, Jaanai Zhang  wrote:

> you can get better performance if directly read/write HBase. you also use
> spark-phoenix, this is an example, reading data from CSV file and writing
> into Phoenix table:
>
> def main(args: Array[String]): Unit = {
>
>   val sc = new SparkContext("local", "phoenix-test")
>   val path = "/tmp/data"
>   val hbaseConnectionString = "host1,host2,host3"
>   val customSchema = StructType(Array(
> StructField("O_ORDERKEY", StringType, true),
> StructField("O_CUSTKEY", StringType, true),
> StructField("O_ORDERSTATUS", StringType, true),
> StructField("O_TOTALPRICE", StringType, true),
> StructField("O_ORDERDATE", StringType, true),
> StructField("O_ORDERPRIORITY", StringType, true),
> StructField("O_CLERK", StringType, true),
> StructField("O_SHIPPRIORITY", StringType, true),
> StructField("O_COMMENT", StringType, true)))
>
>   //import com.databricks.spark.csv._
>   val sqlContext = new SQLContext(sc)
>
>   val df = sqlContext.read
> .format("com.databricks.spark.csv")
> .option("delimiter", "|")
> .option("header", "false")
> .schema(customSchema)
> .load(path)
>
>   val start = System.currentTimeMillis()
>   df.write.format("org.apache.phoenix.spark")
> .mode("overwrite")
> .option("table", "DATAX")
> .option("zkUrl", hbaseConnectionString)
> .save()
>
>   val end = System.currentTimeMillis()
>   print("taken time:" + ((end - start) / 1000) + "s")
> }
>
>
>
>
> 
>Yun Zhang
>Best regards!
>
>
> 2018-08-06 20:10 GMT+08:00 Brandon Geise :
>
>> Thanks for the reply Yun.
>>
>>
>>
>> I’m not quite clear how this would exactly help on the upsert side?  Are
>> you suggesting deriving the type from Phoenix then doing the
>> encoding/decoding and writing/reading directly from HBase?
>>
>>
>>
>> Thanks,
>>
>> Brandon
>>
>>
>>
>> *From: *Jaanai Zhang 
>> *Reply-To: *
>> *Date: *Sunday, August 5, 2018 at 9:34 PM
>> *To: *
>> *Subject: *Re: Spark-Phoenix Plugin
>>
>>
>>
>> You can get data type from Phoenix meta, then encode/decode data to
>> write/read data. I think this way is effective, FYI :)
>>
>>
>>
>>
>> 
>>
>>Yun Zhang
>>
>>Best regards!
>>
>>
>>
>>
>>
>> 2018-08-04 21:43 GMT+08:00 Brandon Geise :
>>
>> Good morning,
>>
>>
>>
>> I’m looking at using a combination of Hbase, Phoenix and Spark for a
>> project and read that using the Spark-Phoenix plugin directly is more
>> efficient than JDBC, however it wasn’t entirely clear from examples when
>> writing a dataframe if an upsert is performed and how much fine-grained
>> options there are for executing the upsert.  Any information someone can
>> share would be greatly appreciated!
>>
>>
>>
>>
>>
>> Thanks,
>>
>> Brandon
>>
>>
>>
>
>


Re: Spark-Phoenix Plugin

2018-08-06 Thread Jaanai Zhang
You can get better performance if you read/write HBase directly. You can also use
phoenix-spark; here is an example that reads data from a CSV file and writes it
into a Phoenix table:

import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types.{StringType, StructField, StructType}

def main(args: Array[String]): Unit = {

  val sc = new SparkContext("local", "phoenix-test")
  val path = "/tmp/data"
  val hbaseConnectionString = "host1,host2,host3"
  val customSchema = StructType(Array(
    StructField("O_ORDERKEY", StringType, true),
    StructField("O_CUSTKEY", StringType, true),
    StructField("O_ORDERSTATUS", StringType, true),
    StructField("O_TOTALPRICE", StringType, true),
    StructField("O_ORDERDATE", StringType, true),
    StructField("O_ORDERPRIORITY", StringType, true),
    StructField("O_CLERK", StringType, true),
    StructField("O_SHIPPRIORITY", StringType, true),
    StructField("O_COMMENT", StringType, true)))

  //import com.databricks.spark.csv._
  val sqlContext = new SQLContext(sc)

  val df = sqlContext.read
    .format("com.databricks.spark.csv")
    .option("delimiter", "|")
    .option("header", "false")
    .schema(customSchema)
    .load(path)

  val start = System.currentTimeMillis()
  df.write.format("org.apache.phoenix.spark")
    .mode("overwrite")
    .option("table", "DATAX")
    .option("zkUrl", hbaseConnectionString)
    .save()

  val end = System.currentTimeMillis()
  print("taken time:" + ((end - start) / 1000) + "s")
}
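
To read the table back into a DataFrame, the same data source can be used. A small sketch continuing the example above ("DATAX" and hbaseConnectionString are the same assumed values):

// Load the Phoenix table as a DataFrame and show a few rows.
val readDf = sqlContext.read
  .format("org.apache.phoenix.spark")
  .option("table", "DATAX")
  .option("zkUrl", hbaseConnectionString)
  .load()
readDf.show(10)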





   Yun Zhang
   Best regards!


2018-08-06 20:10 GMT+08:00 Brandon Geise :

> Thanks for the reply Yun.
>
>
>
> I’m not quite clear how this would exactly help on the upsert side?  Are
> you suggesting deriving the type from Phoenix then doing the
> encoding/decoding and writing/reading directly from HBase?
>
>
>
> Thanks,
>
> Brandon
>
>
>
> *From: *Jaanai Zhang 
> *Reply-To: *
> *Date: *Sunday, August 5, 2018 at 9:34 PM
> *To: *
> *Subject: *Re: Spark-Phoenix Plugin
>
>
>
> You can get data type from Phoenix meta, then encode/decode data to
> write/read data. I think this way is effective, FYI :)
>
>
>
>
> 
>
>Yun Zhang
>
>Best regards!
>
>
>
>
>
> 2018-08-04 21:43 GMT+08:00 Brandon Geise :
>
> Good morning,
>
>
>
> I’m looking at using a combination of Hbase, Phoenix and Spark for a
> project and read that using the Spark-Phoenix plugin directly is more
> efficient than JDBC, however it wasn’t entirely clear from examples when
> writing a dataframe if an upsert is performed and how much fine-grained
> options there are for executing the upsert.  Any information someone can
> share would be greatly appreciated!
>
>
>
>
>
> Thanks,
>
> Brandon
>
>
>


Re: Spark-Phoenix Plugin

2018-08-06 Thread Josh Elser
Besides the distribution and parallelism of Spark as a distributed 
execution framework, I can't really see how phoenix-spark would be 
faster than the JDBC driver :). Phoenix-spark and the JDBC driver are 
using the same code under the hood.


Phoenix-spark is using the PhoenixOutputFormat (and thus, 
PhoenixRecordWriter) to write data to Phoenix. Maybe look at 
PhoenixRecordWritable, too. These ultimately are executing UPSERTs on a 
PreparedStatement.
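
For reference, a sketch of what an UPSERT on a PreparedStatement through the
Phoenix JDBC driver looks like (the ZooKeeper host, table, and columns are
made-up example values):

import java.sql.DriverManager

// Open a Phoenix connection, run a parameterized UPSERT, and commit it.
val conn = DriverManager.getConnection("jdbc:phoenix:zk-host")
val ps = conn.prepareStatement("UPSERT INTO DATAX (O_ORDERKEY, O_TOTALPRICE) VALUES (?, ?)")
ps.setString(1, "1")
ps.setString(2, "42.50")
ps.executeUpdate()
conn.commit()
conn.close()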


There is also the CsvBulkLoadTool which can create HFiles to bulk load 
data in Phoenix. I'm not sure if phoenix-spark has something wired up 
that you can use to do this out of the box (certainly, you could do it 
yourself).


On 8/6/18 8:10 AM, Brandon Geise wrote:

Thanks for the reply Yun.

I’m not quite clear how this would exactly help on the upsert side?  Are 
you suggesting deriving the type from Phoenix then doing the 
encoding/decoding and writing/reading directly from HBase?


Thanks,

Brandon

*From: *Jaanai Zhang 
*Reply-To: *
*Date: *Sunday, August 5, 2018 at 9:34 PM
*To: *
*Subject: *Re: Spark-Phoenix Plugin

You can get data type from Phoenix meta, then encode/decode data to 
write/read data. I think this way is effective, FYI :)





    Yun Zhang

    Best regards!

2018-08-04 21:43 GMT+08:00 Brandon Geise :


Good morning,

I’m looking at using a combination of Hbase, Phoenix and Spark for a
project and read that using the Spark-Phoenix plugin directly is
more efficient than JDBC, however it wasn’t entirely clear from
examples when writing a dataframe if an upsert is performed and how
much fine-grained options there are for executing the upsert.  Any
information someone can share would be greatly appreciated!

Thanks,

Brandon



Re: Spark-Phoenix Plugin

2018-08-06 Thread Brandon Geise
Thanks for the reply Yun.  

 

I’m not quite clear how this would exactly help on the upsert side?  Are you 
suggesting deriving the type from Phoenix then doing the encoding/decoding and 
writing/reading directly from HBase?

 

Thanks,

Brandon

 

From: Jaanai Zhang 
Reply-To: 
Date: Sunday, August 5, 2018 at 9:34 PM
To: 
Subject: Re: Spark-Phoenix Plugin

 

You can get data type from Phoenix meta, then encode/decode data to write/read 
data. I think this way is effective, FYI :)


 



   Yun Zhang

   Best regards!

 

 

2018-08-04 21:43 GMT+08:00 Brandon Geise :

Good morning,

 

I’m looking at using a combination of Hbase, Phoenix and Spark for a project 
and read that using the Spark-Phoenix plugin directly is more efficient than 
JDBC, however it wasn’t entirely clear from examples when writing a dataframe 
if an upsert is performed and how much fine-grained options there are for 
executing the upsert.  Any information someone can share would be greatly 
appreciated!

 

 

Thanks,

Brandon