Re: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-13 Thread 倪项菲


Hi Ankit,




   I have put phoenix-4.14.0-HBase-1.2-server.jar into /opt/hbase-1.2.6/lib. Is
this correct?

From: Ankit Singhal

Date: 2018/08/14 (Tue) 02:25

To: user;

Subject: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase
1.2.6

Skipping sanity checks may destabilize functionality that Phoenix relies on.
The SplitPolicy should have been loaded to prevent splitting of the
SYSTEM.CATALOG table, so to actually fix the issue, please check that you have
the right phoenix-server.jar on the HBase classpath:

"Unable to load configured region split policy 
'org.apache.phoenix.schema.MetaDataSplitPolicy' for table 'SYSTEM.CATALOG' Set 
hbase.table.sanity.checks to false at conf or table descriptor if you want to 
bypass sanity checks"
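As a quick way to see what HBase will pick up, a small sketch like the following can help (the directory layout and jar name are just the ones mentioned in this thread; adjust to your install):

```shell
# check_phoenix_jar: list any Phoenix server jars in an HBase lib directory.
# Pass your own HBase home; /opt/hbase-1.2.6 is the path used in this thread.
check_phoenix_jar() {
  lib_dir="$1/lib"
  found=0
  for jar in "$lib_dir"/phoenix-*-server.jar; do
    # The glob stays literal when nothing matches, so test for existence.
    [ -e "$jar" ] || continue
    echo "found: $jar"
    found=1
  done
  if [ "$found" -eq 0 ]; then
    echo "no phoenix-*-server.jar in $lib_dir" >&2
    return 1
  fi
}

# Example: check_phoenix_jar /opt/hbase-1.2.6
```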

Regards,
Ankit Singhal

On Sun, Aug 12, 2018 at 6:46 PM, 倪项菲 wrote:

Thanks all.

In the end I set hbase.table.sanity.checks to false in hbase-site.xml and
restarted the HBase cluster, and it works.
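For reference, that workaround as an hbase-site.xml fragment would look like the following; note it only bypasses the check rather than fixing the underlying problem Ankit described:

```xml
<!-- hbase-site.xml fragment: disables HBase's table sanity checks.
     This is a cluster-wide workaround, not a fix; the Phoenix split
     policy is still not being loaded. -->
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
```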
From: Josh Elser

Date: 2018/08/07 (Tue) 20:58

To: user;

Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6



"Phoenix-server" refers to the phoenix-$VERSION-server.jar that is 
either included in the binary tarball or is generated by the official 
source-release.

"Deploying" it means copying the jar to $HBASE_HOME/lib.

On 8/6/18 9:56 PM, 倪项菲 wrote:
> 
> Hi Zhang Yun,
>  the link you mentioned tells us to add the phoenix jar to the hbase
> lib directory; it doesn't tell us how to deploy the phoenix server.
> 
> From: Jaanai Zhang
> Date: 2018/08/07 (Tue) 09:36
> To: user;
> Subject: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
> with hbase 1.2.6
> 
> reference link: http://phoenix.apache.org/installation.html
> 
> 
> 
> Yun Zhang
> Best regards!
> 
> 
> 2018-08-07 9:30 GMT+08:00 倪项菲:
> 
> Hi Zhang Yun,
>  how do I deploy the Phoenix server? I only have the information
> from the phoenix website, and it doesn't mention the phoenix server.
> 
> From: Jaanai Zhang
> Date: 2018/08/07 (Tue) 09:16
> To: user;
> Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
> with hbase 1.2.6
> 
> Please ensure your Phoenix server was deployed and restarted.
> 
> 
> 
> Yun Zhang
> Best regards!
> 
> 
> 2018-08-07 9:10 GMT+08:00 倪项菲:
> 
> 
> Hi Experts,
>  I am using HBase 1.2.6, and the cluster is working well with
> HMaster HA, but when we integrate Phoenix with HBase it
> fails. Below are the steps:
>  1. Download apache-phoenix-4.14.0-HBase-1.2-bin from
> http://phoenix.apache.org, then copy the tar file to the HMaster
> and unzip it.
>  2. Copy phoenix-core-4.14.0-HBase-1.2.jar and
> phoenix-4.14.0-HBase-1.2-server.jar
> to all HBase nodes, including HMaster and HRegionServer, and put them
> in hbasehome/lib; my path is /opt/hbase-1.2.6/lib.
>  3. Restart the HBase cluster.
>  4. Then start to use Phoenix, but it returns the error below:
> [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py
> 
> plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
> Setting property: [incremental, false]
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none
> none org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to
> 
> jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,plat-ecloud01-bigdata-zk03
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> 
> [jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> 
> [jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings
>  for an
> explanation.
> 18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load
> native-hadoop library for your platform... using builtin-java
> classes where applicable
> Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to
> load configured region split policy
> 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table
> 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
> or table descriptor if you want to bypass sanity checks
>  
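Josh's deploy step above ("copying the jar to $HBASE_HOME/lib") can be sketched as a small shell function; the jar name and paths are the ones from this thread, and it must be run on every HMaster and RegionServer node, followed by an HBase restart:

```shell
# deploy_phoenix_jar: copy the Phoenix server jar from an unpacked Phoenix
# distribution into HBase's lib directory. Run on every HBase node.
deploy_phoenix_jar() {
  phoenix_dir="$1"   # e.g. /opt/apache-phoenix-4.14.0-HBase-1.2-bin
  hbase_home="$2"    # e.g. /opt/hbase-1.2.6
  cp "$phoenix_dir"/phoenix-*-server.jar "$hbase_home/lib/" || return 1
  echo "deployed to $hbase_home/lib"
}

# Example (paths from this thread), then restart the cluster:
# deploy_phoenix_jar /opt/apache-phoenix-4.14.0-HBase-1.2-bin /opt/hbase-1.2.6
# /opt/hbase-1.2.6/bin/stop-hbase.sh && /opt/hbase-1.2.6/bin/start-hbase.sh
```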

Re: Null array elements with joins

2018-08-13 Thread James Taylor
I commented on the JIRA you filed here: PHOENIX-4791. Best to keep
discussion there.
Thanks,
James
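In the meantime, purely as an untested idea not taken from this thread: if it is the hash-join projection of array elements that fails, projecting the elements into scalar columns in a derived table before the join might be worth trying:

```sql
-- Untested sketch against the tables defined below: extract the array
-- elements in a subquery first, so the join only sees scalar columns.
-- Whether this sidesteps the hash-join projection issue is not verified.
SELECT t1.id, t2.val, t1.a1, t1.a2, t1.a3
FROM (SELECT id, arr[1] AS a1, arr[2] AS a2, arr[3] AS a3
      FROM array_test_1) t1
JOIN test_table_1 AS t2 ON t1.id = t2.id;
```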

On Mon, Aug 13, 2018 at 11:08 AM, Gerald Sangudi wrote:

> Hello all,
>
> Any suggestions or pointers on the issue below?
>
> Projecting array elements works when not using joins, and does not work
> when we use hash joins. Is there an issue with the ProjectionCompiler for
> joins? I have not been able to isolate the specific cause, and would
> appreciate any pointers or suggestions.
>
> Thanks,
> Gerald
>
> On Tue, Jun 19, 2018 at 10:02 AM, Tulasi Paradarami <
> tulasi.krishn...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm running a few tests against Phoenix arrays and running into this bug
>> where array elements return null values when a join is involved. Is this a
>> known issue/limitation of arrays?
>>
>> create table array_test_1 (id integer not null primary key, arr
>> tinyint[5]);
>> upsert into array_test_1 values (1001, array[0, 0, 0, 0, 0]);
>> upsert into array_test_1 values (1002, array[0, 0, 0, 0, 1]);
>> upsert into array_test_1 values (1003, array[0, 0, 0, 1, 1]);
>> upsert into array_test_1 values (1004, array[0, 0, 1, 1, 1]);
>> upsert into array_test_1 values (1005, array[1, 1, 1, 1, 1]);
>>
>> create table test_table_1 (id integer not null primary key, val varchar);
>> upsert into test_table_1 values (1001, 'abc');
>> upsert into test_table_1 values (1002, 'def');
>> upsert into test_table_1 values (1003, 'ghi');
>>
>> 0: jdbc:phoenix:localhost> select t1.id, t2.val, t1.arr[1], t1.arr[2],
>> t1.arr[3] from array_test_1 as t1 join test_table_1 as t2 on t1.id =
>> t2.id;
>> +--------+---------+------------------------+------------------------+------------------------+
>> | T1.ID  | T2.VAL  | ARRAY_ELEM(T1.ARR, 1)  | ARRAY_ELEM(T1.ARR, 2)  | ARRAY_ELEM(T1.ARR, 3)  |
>> +--------+---------+------------------------+------------------------+------------------------+
>> | 1001   | abc     | null                   | null                   | null                   |
>> | 1002   | def     | null                   | null                   | null                   |
>> | 1003   | ghi     | null                   | null                   | null                   |
>> +--------+---------+------------------------+------------------------+------------------------+
>> 3 rows selected (0.056 seconds)
>>
>> However, directly selecting array elements from the array returns data
>> correctly.
>> 0: jdbc:phoenix:localhost> select t1.id, t1.arr[1], t1.arr[2], t1.arr[3]
>> from array_test_1 as t1;
>> +-------+---------------------+---------------------+---------------------+
>> |  ID   | ARRAY_ELEM(ARR, 1)  | ARRAY_ELEM(ARR, 2)  | ARRAY_ELEM(ARR, 3)  |
>> +-------+---------------------+---------------------+---------------------+
>> | 1001  | 0                   | 0                   | 0                   |
>> | 1002  | 0                   | 0                   | 0                   |
>> | 1003  | 0                   | 0                   | 0                   |
>> | 1004  | 0                   | 0                   | 1                   |
>> | 1005  | 1                   | 1                   | 1                   |
>> +-------+---------------------+---------------------+---------------------+
>> 5 rows selected (0.044 seconds)
>>
>>
>>
>


Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase 1.2.6

2018-08-13 Thread Ankit Singhal
Skipping sanity checks may destabilize functionality that Phoenix relies on.
The SplitPolicy should have been loaded to prevent splitting of the
SYSTEM.CATALOG table, so to actually fix the issue, please check that you have
the right phoenix-server.jar on the HBase classpath:

"Unable to load configured region split policy
'org.apache.phoenix.schema.MetaDataSplitPolicy'
for table 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
or table descriptor if you want to bypass sanity checks"

Regards,
Ankit Singhal

On Sun, Aug 12, 2018 at 6:46 PM, 倪项菲 wrote:

> Thanks all.
>
> In the end I set hbase.table.sanity.checks to false in hbase-site.xml and
> restarted the HBase cluster, and it works.
>
>
>
> From: Josh Elser
> Date: 2018/08/07 (Tue) 20:58
> To: user;
> Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin with hbase
> 1.2.6
>
> "Phoenix-server" refers to the phoenix-$VERSION-server.jar that is
> either included in the binary tarball or is generated by the official
> source-release.
>
> "Deploying" it means copying the jar to $HBASE_HOME/lib.
>
> On 8/6/18 9:56 PM, 倪项菲 wrote:
> >
> > Hi Zhang Yun,
> > the link you mentioned tells us to add the phoenix jar to the hbase
> > lib directory; it doesn't tell us how to deploy the phoenix server.
> >
> > From: Jaanai Zhang
> > Date: 2018/08/07 (Tue) 09:36
> > To: user;
> > Subject: Re: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
> > with hbase 1.2.6
> >
> > reference link: http://phoenix.apache.org/installation.html
> >
> >
> > 
> >Yun Zhang
> >Best regards!
> >
> >
> > 2018-08-07 9:30 GMT+08:00 倪项菲:
> >
> > Hi Zhang Yun,
> > how do I deploy the Phoenix server? I only have the information
> > from the phoenix website, and it doesn't mention the phoenix server.
> >
> > From: Jaanai Zhang
> > Date: 2018/08/07 (Tue) 09:16
> > To: user;
> > Subject: Re: error when using apache-phoenix-4.14.0-HBase-1.2-bin
> > with hbase 1.2.6
> >
> > Please ensure your Phoenix server was deployed and restarted.
> >
> >
> > 
> >Yun Zhang
> >Best regards!
> >
> >
> > 2018-08-07 9:10 GMT+08:00 倪项菲:
> >
> >
> > Hi Experts,
> > I am using HBase 1.2.6, and the cluster is working well with
> > HMaster HA, but when we integrate Phoenix with HBase it
> > fails. Below are the steps:
> > 1. Download apache-phoenix-4.14.0-HBase-1.2-bin from
> > http://phoenix.apache.org, then copy the tar file to the HMaster
> > and unzip it.
> > 2. Copy phoenix-core-4.14.0-HBase-1.2.jar and
> > phoenix-4.14.0-HBase-1.2-server.jar
> > to all HBase nodes, including HMaster and HRegionServer, and put them
> > in hbasehome/lib; my path is /opt/hbase-1.2.6/lib.
> > 3. Restart the HBase cluster.
> > 4. Then start to use Phoenix, but it returns the error below:
> > [apache@plat-ecloud01-bigdata-journalnode01 bin]$ ./sqlline.py
> > plat-ecloud01-bigdata-zk01,plat-ecloud01-bigdata-zk02,
> plat-ecloud01-bigdata-zk03
> > Setting property: [incremental, false]
> > Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> > issuing: !connect jdbc:phoenix:plat-ecloud01-bigdata-zk01 none
> > none org.apache.phoenix.jdbc.PhoenixDriver
> > Connecting to
> > jdbc:phoenix:plat-ecloud01-bigdata-zk01,plat-ecloud01-
> bigdata-zk02,plat-ecloud01-bigdata-zk03
> > SLF4J: Class path contains multiple SLF4J bindings.
> > SLF4J: Found binding in
> > [jar:file:/opt/apache-phoenix-4.14.0-HBase-1.2-bin/phoenix-
> 4.14.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: Found binding in
> > [jar:file:/opt/hadoop-2.7.6/share/hadoop/common/lib/slf4j-
> log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings
> >  for an
> > explanation.
> > 18/08/06 18:40:08 WARN util.NativeCodeLoader: Unable to load
> > native-hadoop library for your platform... using builtin-java
> > classes where applicable
> > Error: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to
> > load configured region split policy
> > 'org.apache.phoenix.schema.MetaDataSplitPolicy' for table
> > 'SYSTEM.CATALOG' Set hbase.table.sanity.checks to false at conf
> > or table descriptor if you want to bypass sanity checks
> > at
> > org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure
> (HMaster.java:1754)
> > at
> > org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(
> HMaster.java:1615)
> > at
> > org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1541)
> > at
> > org.apache.hadoop.hbase.master.MasterRpcServices.
> createTable(MasterRpcServices.java:463)
> > at
> > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.
> 

Re: Null array elements with joins

2018-08-13 Thread Gerald Sangudi
Hello all,

Any suggestions or pointers on the issue below?

Projecting array elements works when not using joins, and does not work
when we use hash joins. Is there an issue with the ProjectionCompiler for
joins? I have not been able to isolate the specific cause, and would
appreciate any pointers or suggestions.

Thanks,
Gerald

On Tue, Jun 19, 2018 at 10:02 AM, Tulasi Paradarami <
tulasi.krishn...@gmail.com> wrote:

> Hi,
>
> I'm running a few tests against Phoenix arrays and running into this bug
> where array elements return null values when a join is involved. Is this a
> known issue/limitation of arrays?
>
> create table array_test_1 (id integer not null primary key, arr
> tinyint[5]);
> upsert into array_test_1 values (1001, array[0, 0, 0, 0, 0]);
> upsert into array_test_1 values (1002, array[0, 0, 0, 0, 1]);
> upsert into array_test_1 values (1003, array[0, 0, 0, 1, 1]);
> upsert into array_test_1 values (1004, array[0, 0, 1, 1, 1]);
> upsert into array_test_1 values (1005, array[1, 1, 1, 1, 1]);
>
> create table test_table_1 (id integer not null primary key, val varchar);
> upsert into test_table_1 values (1001, 'abc');
> upsert into test_table_1 values (1002, 'def');
> upsert into test_table_1 values (1003, 'ghi');
>
> 0: jdbc:phoenix:localhost> select t1.id, t2.val, t1.arr[1], t1.arr[2],
> t1.arr[3] from array_test_1 as t1 join test_table_1 as t2 on t1.id = t2.id
> ;
> +--------+---------+------------------------+------------------------+------------------------+
> | T1.ID  | T2.VAL  | ARRAY_ELEM(T1.ARR, 1)  | ARRAY_ELEM(T1.ARR, 2)  | ARRAY_ELEM(T1.ARR, 3)  |
> +--------+---------+------------------------+------------------------+------------------------+
> | 1001   | abc     | null                   | null                   | null                   |
> | 1002   | def     | null                   | null                   | null                   |
> | 1003   | ghi     | null                   | null                   | null                   |
> +--------+---------+------------------------+------------------------+------------------------+
> 3 rows selected (0.056 seconds)
>
> However, directly selecting array elements from the array returns data
> correctly.
> 0: jdbc:phoenix:localhost> select t1.id, t1.arr[1], t1.arr[2], t1.arr[3]
> from array_test_1 as t1;
> +-------+---------------------+---------------------+---------------------+
> |  ID   | ARRAY_ELEM(ARR, 1)  | ARRAY_ELEM(ARR, 2)  | ARRAY_ELEM(ARR, 3)  |
> +-------+---------------------+---------------------+---------------------+
> | 1001  | 0                   | 0                   | 0                   |
> | 1002  | 0                   | 0                   | 0                   |
> | 1003  | 0                   | 0                   | 0                   |
> | 1004  | 0                   | 0                   | 1                   |
> | 1005  | 1                   | 1                   | 1                   |
> +-------+---------------------+---------------------+---------------------+
> 5 rows selected (0.044 seconds)
>
>
>