Re: Probably issue of DEFAULT column value when creating table

2018-11-14 Thread gengxin
Hi Francis,

It worked, thanks!

> On 15 Nov 2018, at 13:59, Francis Chuang wrote:
> 
> Can you try enclosing the string with single quotes (haven't tried this 
> myself as I currently don't have access to my Phoenix test cluster):
> 
> CREATE TABLE TEST (
> a BIGINT NOT NULL DEFAULT 0,
> b CHAR(10) DEFAULT 'abc',
> cf.c INTEGER DEFAULT 1
> CONSTRAINT pk PRIMARY KEY (a ASC, b ASC)
> );
> 
> On 15/11/2018 4:38 pm, xin geng wrote:
>> Hi, all
>> 
>> I'm learning Phoenix. When trying to create a table with the SQL below, I got 
>> a ColumnNotFoundException, which is probably an issue in Phoenix. Please 
>> correct me if I'm wrong. :)
>> 
>> SQL:
>> CREATE TABLE TEST (
>> a BIGINT NOT NULL DEFAULT 0,
>> b CHAR(10) DEFAULT "abc",
>> cf.c INTEGER DEFAULT 1
>> CONSTRAINT pk PRIMARY KEY (a ASC, b ASC)
>> );
>> 
>> Exception:
>> Error: ERROR 504 (42703): Undefined column. columnName=abc (state=42703,code=504)
>> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): Undefined column. columnName=abc
>> at org.apache.phoenix.compile.FromCompiler$1.resolveColumn(FromCompiler.java:129)
>> at org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:372)
>> at org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:408)
>> at org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:146)
>> at org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
>> at org.apache.phoenix.parse.ColumnDef.validateDefault(ColumnDef.java:246)
>> at org.apache.phoenix.compile.CreateTableCompiler.compile(CreateTableCompiler.java:108)
>> at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateTableStatement.compilePlan(PhoenixStatement.java:788)
>> at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateTableStatement.compilePlan(PhoenixStatement.java:777)
>> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:387)
>> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
>> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
>> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:364)
>> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1738)
>> at sqlline.Commands.execute(Commands.java:822)
>> at sqlline.Commands.sql(Commands.java:732)
>> at sqlline.SqlLine.dispatch(SqlLine.java:813)
>> at sqlline.SqlLine.begin(SqlLine.java:686)
>> at sqlline.SqlLine.start(SqlLine.java:398)
>> at sqlline.SqlLine.main(SqlLine.java:291)
>> 
>> I'm using Phoenix 4.13.1 and HBase 1.2.5.
> 
> 



Re: Probably issue of DEFAULT column value when creating table

2018-11-14 Thread Francis Chuang
Can you try enclosing the string with single quotes (haven't tried this 
myself as I currently don't have access to my Phoenix test cluster):


CREATE TABLE TEST (
a BIGINT NOT NULL DEFAULT 0,
b CHAR(10) DEFAULT 'abc',
cf.c INTEGER DEFAULT 1
CONSTRAINT pk PRIMARY KEY (a ASC, b ASC)
);
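
For context: Phoenix, like standard SQL, treats double-quoted tokens as case-sensitive identifiers and single-quoted tokens as string literals, so DEFAULT "abc" is resolved as a reference to a column named abc, which is what produces the ColumnNotFoundException. A minimal sketch of the difference, using a scratch table and column names that are only for illustration:

-- Double quotes name an identifier, so DEFAULT "abc" is parsed as a column reference
-- and fails with ERROR 504 (42703): Undefined column. columnName=abc.
-- Single quotes produce a string literal, so the DEFAULT resolves:
CREATE TABLE QUOTING_DEMO (
    id BIGINT NOT NULL PRIMARY KEY,
    b CHAR(10) DEFAULT 'abc'
);
UPSERT INTO QUOTING_DEMO (id) VALUES (1);
SELECT b FROM QUOTING_DEMO;   -- should return 'abc' for the defaulted column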

On 15/11/2018 4:38 pm, xin geng wrote:

Hi, all

I'm learning Phoenix. When trying to create a table with the SQL below, 
I got a ColumnNotFoundException, which is probably an issue in Phoenix. 
Please correct me if I'm wrong. :)


SQL:
CREATE TABLE TEST (
a BIGINT NOT NULL DEFAULT 0,
b CHAR(10) DEFAULT "abc",
cf.c INTEGER DEFAULT 1
CONSTRAINT pk PRIMARY KEY (a ASC, b ASC)
);

Exception:
Error: ERROR 504 (42703): Undefined column. columnName=abc (state=42703,code=504)
org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): Undefined column. columnName=abc
at org.apache.phoenix.compile.FromCompiler$1.resolveColumn(FromCompiler.java:129)
at org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:372)
at org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:408)
at org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:146)
at org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
at org.apache.phoenix.parse.ColumnDef.validateDefault(ColumnDef.java:246)
at org.apache.phoenix.compile.CreateTableCompiler.compile(CreateTableCompiler.java:108)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateTableStatement.compilePlan(PhoenixStatement.java:788)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateTableStatement.compilePlan(PhoenixStatement.java:777)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:387)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:364)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1738)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)

I'm using Phoenix 4.13.1 and HBase 1.2.5.





Probably issue of DEFAULT column value when creating table

2018-11-14 Thread xin geng

Hi, all

I'm learning Phoenix. When trying to create a table with the SQL below, I got a 
ColumnNotFoundException, which is probably an issue in Phoenix. Please correct 
me if I'm wrong. :)

SQL:
CREATE TABLE TEST (
a BIGINT NOT NULL DEFAULT 0,
b CHAR(10) DEFAULT "abc",
cf.c INTEGER DEFAULT 1
CONSTRAINT pk PRIMARY KEY (a ASC, b ASC)
);

Exception:
Error: ERROR 504 (42703): Undefined column. columnName=abc (state=42703,code=504)
org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): Undefined column. columnName=abc
at org.apache.phoenix.compile.FromCompiler$1.resolveColumn(FromCompiler.java:129)
at org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:372)
at org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:408)
at org.apache.phoenix.compile.ExpressionCompiler.visit(ExpressionCompiler.java:146)
at org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
at org.apache.phoenix.parse.ColumnDef.validateDefault(ColumnDef.java:246)
at org.apache.phoenix.compile.CreateTableCompiler.compile(CreateTableCompiler.java:108)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateTableStatement.compilePlan(PhoenixStatement.java:788)
at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableCreateTableStatement.compilePlan(PhoenixStatement.java:777)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:387)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:375)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:364)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1738)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)

I'm using Phoenix 4.13.1 and HBase 1.2.5.

[ANNOUNCE] Apache Phoenix 4.14.1 released

2018-11-14 Thread Vincent Poon
The Apache Phoenix team is pleased to announce the immediate availability
of the 4.14.1 patch release. Apache Phoenix enables SQL-based OLTP and
operational analytics for Apache Hadoop using Apache HBase as its backing
store and providing integration with other projects in the Apache ecosystem
such as Spark, Hive, Pig, Flume, and MapReduce.

This patch release has feature parity with supported HBase versions and
includes critical bug fixes for secondary indexes.

Download source and binaries here [1].

Thanks,
Vincent (on behalf of the Apache Phoenix team)

[1] http://phoenix.apache.org/download.html


Re: Phoenix Query Taking Long Time to Execute

2018-11-14 Thread Thomas D'Silva
Can you describe your cluster setup and table definitions, types of queries
you are running etc.?
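
A minimal way to gather some of that from sqlline, with a hypothetical table name, columns, and literal (adjust to your schema): !describe shows the table definition as Phoenix sees it, and EXPLAIN shows the plan Phoenix picks for a query, which makes a full table scan easy to spot.

!describe MY_TABLE
-- Plan for one of the slow queries; a FULL SCAN OVER MY_TABLE here usually explains the latency:
EXPLAIN SELECT col1, col2 FROM MY_TABLE WHERE col1 = 'some_value';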


On Wed, Nov 14, 2018 at 12:40 AM, Azharuddin Shaikh <
azharuddins...@gmail.com> wrote:

> Hi All,
>
> We have HBase tables consisting of 4.4 million records on which we are
> performing queries using Phoenix. Initially we were getting results within 10
> secs, but the same query is now taking more than 30 secs to fetch the
> results, due to which our applications are getting timeout errors.
>
> Request you to please let us know if we need to tune any HBase/Phoenix
> parameter to reduce the time taken to fetch the result. How can we
> improve the performance of HBase/Phoenix?
>
> We have HBase 1.2.3 and Phoenix 4.12.
>
> Your help is greatly appreciated.
>
> Thanks,
>
> Azhar
>


Re: Phoenix 4.14 - VIEW creation

2018-11-14 Thread Thomas D'Silva
You cannot create a view over multiple tables.
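
For reference, a sketch of what is supported, using hypothetical table and column names: a Phoenix view is defined over a single table (or another view) as SELECT * plus an optional WHERE filter, and aggregation is done in queries against the view rather than in its definition.

CREATE VIEW ACTIVE_ORDERS AS SELECT * FROM ORDERS WHERE status = 'ACTIVE';
-- Aggregate when querying the view:
SELECT customer_id, COUNT(*) FROM ACTIVE_ORDERS GROUP BY customer_id;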

On Wed, Nov 14, 2018 at 3:49 AM, lkyaes  wrote:

> Hello,
>
> I wonder if there is already some way to CREATE VIEW over multiple
> tables (with aggregation)?
>
> Br,
> Liubov
>
>
>
>


Re: Regarding upgrading from 4.7 to 4.14

2018-11-14 Thread Tanvi Bhandari
Hi Abhishek,

As part of upgrading from Phoenix 4.6 to Phoenix 4.14 we also faced many
issues, and yes, column encoding is one of the reasons why we were not
able to see the data in Phoenix tables. We followed the steps listed in

http://apache-phoenix-user-list.1124778.n5.nabble.com/Issue-in-upgrading-phoenix-java-lang-ArrayIndexOutOfBoundsException-SYSTEM-CATALOG-63-tp4768p4780.html

Even after doing all of that we were seeing new issues in the upgraded instance:
1) The count of the index table and the original table did not match.
2) Select queries with "col_name = " were not returning any record for a few
of the rows, even though we could see the record exists.
3) Delete queries with "col_name = " and col_name LIKE queries were also
unable to delete a few of the records.

Finally, we decided to migrate the data to new tables, which is working
fine.
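
For anyone taking the same route, a minimal sketch of that kind of copy using UPSERT ... SELECT, with hypothetical table and column names; for very large tables a bulk-load job may be more appropriate.

-- New table created with whatever options you want going forward:
CREATE TABLE NEW_TABLE (pk VARCHAR PRIMARY KEY, col1 VARCHAR, col2 BIGINT);
-- Server-side copy from the old table:
UPSERT INTO NEW_TABLE (pk, col1, col2) SELECT pk, col1, col2 FROM OLD_TABLE;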

Thanks,
Tanvi

On Wed, Nov 14, 2018 at 8:04 PM talluri abhishek wrote:

> Hi All,
>
> We are upgrading from Phoenix 4.7 to 4.14 and observed that data is not
> directly available in Phoenix after the upgrade, though the underlying
> old HBase tables still hold the data.
> Is it because of the column name encoding that was introduced after 4.8, and
> is there an easier way to migrate the data from older versions to a newer
> version (>= 4.8 and < 4.14) without having to re-insert data?
> I am using CDH parcels to upgrade; do we still need to upgrade a
> maximum of two versions at a time as stated in http://phoenix.apache.org/upgrading.html,
> or is it okay to directly upgrade to 4.14? Any known issues in doing so?
>
> Thanks,
> Abhishek
>


Re: Regarding upgrading from 4.7 to 4.14

2018-11-14 Thread Pedro Boado
Have you tried disabling column name mapping, either globally or on a
per-table basis? Column names are stored in every cell, so there is no direct
workaround other than disabling it.
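
A sketch of both options (the table and column names below are only illustrative, and worth checking against the column mapping docs): COLUMN_ENCODED_BYTES = 0 disables encoding for a single table at creation time, and phoenix.default.column.encoded.bytes.attrib = 0 changes the cluster-wide default for new tables.

-- Per-table: qualifiers are stored as the original column names instead of encoded bytes:
CREATE TABLE MY_TABLE (pk VARCHAR PRIMARY KEY, col1 VARCHAR) COLUMN_ENCODED_BYTES = 0;
-- Global default: set phoenix.default.column.encoded.bytes.attrib = 0 in hbase-site.xml
-- so newly created tables are non-encoded unless the DDL says otherwise.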



On Wed, 14 Nov 2018, 15:34 talluri abhishek wrote:
> Hi All,
>
> We are upgrading from Phoenix 4.7 to 4.14 and observed that data is not
> directly available in Phoenix after the upgrade, though the underlying
> old HBase tables still hold the data.
> Is it because of the column name encoding that was introduced after 4.8, and
> is there an easier way to migrate the data from older versions to a newer
> version (>= 4.8 and < 4.14) without having to re-insert data?
> I am using CDH parcels to upgrade; do we still need to upgrade a
> maximum of two versions at a time as stated in http://phoenix.apache.org/upgrading.html,
> or is it okay to directly upgrade to 4.14? Any known issues in doing so?
>
> Thanks,
> Abhishek
>


Regarding upgrading from 4.7 to 4.14

2018-11-14 Thread talluri abhishek
Hi All,

We are upgrading from Phoenix 4.7 to 4.14 and observed that data is not
directly available in Phoenix after the upgrade, though the underlying
old HBase tables still hold the data.
Is it because of the column name encoding that was introduced after 4.8, and
is there an easier way to migrate the data from older versions to a newer
version (>= 4.8 and < 4.14) without having to re-insert data?
I am using CDH parcels to upgrade; do we still need to upgrade a
maximum of two versions at a time as stated in http://phoenix.apache.org/upgrading.html,
or is it okay to directly upgrade to 4.14? Any known issues in doing so?

Thanks,
Abhishek


Phoenix 4.14 - VIEW creation

2018-11-14 Thread lkyaes
Hello,

I wonder if there is already some way to CREATE VIEW over multiple
tables (with aggregation)?

Br,
Liubov


Phoenix Query Taking Long Time to Execute

2018-11-14 Thread Azharuddin Shaikh
Hi All,

We have HBase tables consisting of 4.4 million records on which we are
performing queries using Phoenix. Initially we were getting results within 10
secs, but the same query is now taking more than 30 secs to fetch the
results, due to which our applications are getting timeout errors.

Request you to please let us know if we need to tune any HBase/Phoenix
parameter to reduce the time taken to fetch the result. How can we
improve the performance of HBase/Phoenix?

We have HBase 1.2.3 and Phoenix 4.12.

Your help is greatly appreciated.

Thanks,

Azhar