[jira] [Created] (PHOENIX-6130) StatementContext.subqueryResults should be thread safe

2020-09-13 Thread Toshihiro Suzuki (Jira)
Toshihiro Suzuki created PHOENIX-6130:
-

 Summary: StatementContext.subqueryResults should be thread safe
 Key: PHOENIX-6130
 URL: https://issues.apache.org/jira/browse/PHOENIX-6130
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


Steps to reproduce this issue are as follows:

1. Create a table:
{code:java}
create table test (
  id varchar primary key,
  ts varchar
);
{code}
2. Upsert a row into the table created in step 1:
{code:java}
upsert into test values ('id', '159606720');
{code}
3. The following query should always return the row upserted in step 2, but sometimes it returns nothing:
{code:java}
0: jdbc:phoenix:> select ts from test where ts <= (select to_char(cast(to_number(to_date('2020-07-30 00:00:00')) as BIGINT), '#')) and ts >= (select to_char(cast(to_number(to_date('2020-07-29 00:00:00')) as BIGINT), '#'));
+-----+
| TS  |
+-----+
+-----+
No rows selected (0.015 seconds)
{code}
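A plausible mechanism (an assumption, not confirmed from the Phoenix source here): the two subqueries in the WHERE clause run on separate threads, and each caches its result in StatementContext.subqueryResults. If that field is a plain HashMap, concurrent puts can lose entries, which would make the outer query intermittently see no subquery result. A minimal sketch of the thread-safe alternative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal sketch (not Phoenix source): when independent subqueries run on
// separate threads and each caches its result under its own key, a plain
// HashMap can lose updates; a ConcurrentHashMap keeps the cache safe.
public class SubqueryCacheSketch {
    public static void main(String[] args) {
        Map<Integer, String> subqueryResults = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 1000; i++) {
            final int key = i;
            pool.execute(() -> subqueryResults.put(key, "result-" + key));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // With a ConcurrentHashMap every put survives; with a bare HashMap
        // some entries can be lost under contention.
        if (subqueryResults.size() != 1000) {
            throw new AssertionError("lost updates: " + subqueryResults.size());
        }
        System.out.println("all " + subqueryResults.size() + " entries present");
    }
}
```

Wrapping the existing map with Collections.synchronizedMap would be an equally small fix; either way the field must not be a bare HashMap shared across threads.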



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6098) IndexPredicateAnalyzer wrongly handles pushdown predicates and residual predicates

2020-08-25 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-6098:
--
Description: 
Currently, the following code in IndexPredicateAnalyzer assumes that 
GenericUDFOPAnd always has 2 child nodes. I think this is wrong and leads to 
wrong results:
https://github.com/apache/phoenix-connectors/blob/5bd23ae2a0f70c3b3edf92a53780dafa643faf26/phoenix-hive3/src/main/java/org/apache/phoenix/hive/ql/index/IndexPredicateAnalyzer.java#L354-L363

  was:
Currently, the following code in IndexPredicateAnalyzer assumes that 
GenericUDFOPAnd always has 2 child nodes. I think this is wrong and leads to 
wrong results:
https://github.com/apache/phoenix-connectors/blob/5bd23ae2a0f70c3b3edf92a53780dafa643faf26/phoenix-hive/src/main/java/org/apache/phoenix/hive/ql/index/IndexPredicateAnalyzer.java#L346-L359


> IndexPredicateAnalyzer wrongly handles pushdown predicates and residual 
> predicates
> --
>
> Key: PHOENIX-6098
> URL: https://issues.apache.org/jira/browse/PHOENIX-6098
> Project: Phoenix
>  Issue Type: Bug
>  Components: hive-connector
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: connectors-6.0.0
>
>
> Currently, the following code in IndexPredicateAnalyzer assumes that 
> GenericUDFOPAnd always has 2 child nodes. I think this is wrong and leads to 
> wrong results:
> https://github.com/apache/phoenix-connectors/blob/5bd23ae2a0f70c3b3edf92a53780dafa643faf26/phoenix-hive3/src/main/java/org/apache/phoenix/hive/ql/index/IndexPredicateAnalyzer.java#L354-L363
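To illustrate the claim above, here is a hedged sketch (using a simplified, hypothetical Node type, not the real Hive ExprNodeDesc API) of splitting an AND predicate by iterating over all children rather than assuming exactly two:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch with a hypothetical Node type: an n-ary AND must be split by
// iterating over *all* children, not by reading only children.get(0)
// and children.get(1).
public class AndSplitSketch {
    static class Node {
        final String name;
        final List<Node> children;
        Node(String name, List<Node> children) {
            this.name = name;
            this.children = children;
        }
    }

    // Collect every conjunct under an AND node, however many there are.
    static List<Node> splitAnd(Node node) {
        List<Node> conjuncts = new ArrayList<>();
        if ("AND".equals(node.name)) {
            for (Node child : node.children) {
                conjuncts.addAll(splitAnd(child)); // nested ANDs flatten too
            }
        } else {
            conjuncts.add(node);
        }
        return conjuncts;
    }

    public static void main(String[] args) {
        List<Node> none = new ArrayList<>();
        Node and = new Node("AND", List.of(
                new Node("a = 1", none),
                new Node("b = 2", none),
                new Node("c = 3", none)));
        if (splitAnd(and).size() != 3) {
            throw new AssertionError("expected 3 conjuncts");
        }
        System.out.println("3 conjuncts found");
    }
}
```

A two-child assumption would silently drop the third conjunct here, which is the kind of wrong pushdown/residual split the issue describes.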





[jira] [Created] (PHOENIX-6098) IndexPredicateAnalyzer wrongly handles pushdown predicates and residual predicates

2020-08-24 Thread Toshihiro Suzuki (Jira)
Toshihiro Suzuki created PHOENIX-6098:
-

 Summary: IndexPredicateAnalyzer wrongly handles pushdown 
predicates and residual predicates
 Key: PHOENIX-6098
 URL: https://issues.apache.org/jira/browse/PHOENIX-6098
 Project: Phoenix
  Issue Type: Bug
  Components: hive-connector
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


Currently, the following code in IndexPredicateAnalyzer assumes that 
GenericUDFOPAnd always has 2 child nodes. I think this is wrong and leads to 
wrong results:
https://github.com/apache/phoenix-connectors/blob/5bd23ae2a0f70c3b3edf92a53780dafa643faf26/phoenix-hive/src/main/java/org/apache/phoenix/hive/ql/index/IndexPredicateAnalyzer.java#L346-L359





[jira] [Updated] (PHOENIX-6023) Wrong result when issuing query for an immutable table with multiple column families

2020-07-23 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-6023:
--
Attachment: PHOENIX-6023-addendum.master.v1.patch

> Wrong result when issuing query for an immutable table with multiple column 
> families
> 
>
> Key: PHOENIX-6023
> URL: https://issues.apache.org/jira/browse/PHOENIX-6023
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6023-addendum.master.v1.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Steps to reproduce are as follows:
> 1. Create an immutable table with multiple column families:
> {code}
> 0: jdbc:phoenix:> CREATE TABLE TEST (
> . . . . . . . . >   ID VARCHAR PRIMARY KEY,
> . . . . . . . . >   A.COL1 VARCHAR,
> . . . . . . . . >   B.COL2 VARCHAR
> . . . . . . . . > ) IMMUTABLE_ROWS = TRUE;
> No rows affected (1.182 seconds)
> {code}
> 2. Upsert some rows:
> {code}
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id0', '0', 'a');
> 1 row affected (0.138 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id1', '1', NULL);
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id2', '2', 'b');
> 1 row affected (0.011 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id3', '3', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id4', '4', 'c');
> 1 row affected (0.006 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id5', '5', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id6', '6', 'd');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id7', '7', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id8', '8', 'e');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id9', '9', NULL);
> 1 row affected (0.009 seconds)
> {code}
> 3. Count query is okay:
> {code}
> 0: jdbc:phoenix:> SELECT COUNT(COL1) FROM TEST WHERE COL2 IS NOT NULL;
> +----------------+
> | COUNT(A.COL1)  |
> +----------------+
> | 5              |
> +----------------+
> 1 row selected (0.1 seconds)
> {code}
> 4. However, the following select query returns a wrong result (it should 
> return 5 records):
> {code}
> 0: jdbc:phoenix:> SELECT COL1 FROM TEST WHERE COL2 IS NOT NULL;
> +-------+
> | COL1  |
> +-------+
> | 0     |
> | 1     |
> | 2     |
> | 3     |
> | 4     |
> | 5     |
> | 6     |
> | 7     |
> | 8     |
> | 9     |
> +-------+
> 10 rows selected (0.058 seconds)
> {code}





[jira] [Updated] (PHOENIX-6023) Wrong result when issuing query for an immutable table with multiple column families

2020-07-23 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-6023:
--
Attachment: (was: PHOENIX-6023-addendum.master.v1.txt)

> Wrong result when issuing query for an immutable table with multiple column 
> families
> 
>
> Key: PHOENIX-6023
> URL: https://issues.apache.org/jira/browse/PHOENIX-6023
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6023-addendum.master.v1.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Steps to reproduce are as follows:
> 1. Create an immutable table with multiple column families:
> {code}
> 0: jdbc:phoenix:> CREATE TABLE TEST (
> . . . . . . . . >   ID VARCHAR PRIMARY KEY,
> . . . . . . . . >   A.COL1 VARCHAR,
> . . . . . . . . >   B.COL2 VARCHAR
> . . . . . . . . > ) IMMUTABLE_ROWS = TRUE;
> No rows affected (1.182 seconds)
> {code}
> 2. Upsert some rows:
> {code}
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id0', '0', 'a');
> 1 row affected (0.138 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id1', '1', NULL);
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id2', '2', 'b');
> 1 row affected (0.011 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id3', '3', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id4', '4', 'c');
> 1 row affected (0.006 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id5', '5', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id6', '6', 'd');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id7', '7', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id8', '8', 'e');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id9', '9', NULL);
> 1 row affected (0.009 seconds)
> {code}
> 3. Count query is okay:
> {code}
> 0: jdbc:phoenix:> SELECT COUNT(COL1) FROM TEST WHERE COL2 IS NOT NULL;
> +----------------+
> | COUNT(A.COL1)  |
> +----------------+
> | 5              |
> +----------------+
> 1 row selected (0.1 seconds)
> {code}
> 4. However, the following select query returns a wrong result (it should 
> return 5 records):
> {code}
> 0: jdbc:phoenix:> SELECT COL1 FROM TEST WHERE COL2 IS NOT NULL;
> +-------+
> | COL1  |
> +-------+
> | 0     |
> | 1     |
> | 2     |
> | 3     |
> | 4     |
> | 5     |
> | 6     |
> | 7     |
> | 8     |
> | 9     |
> +-------+
> 10 rows selected (0.058 seconds)
> {code}





[jira] [Updated] (PHOENIX-6023) Wrong result when issuing query for an immutable table with multiple column families

2020-07-23 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-6023:
--
Attachment: PHOENIX-6023-addendum.master.v1.txt

> Wrong result when issuing query for an immutable table with multiple column 
> families
> 
>
> Key: PHOENIX-6023
> URL: https://issues.apache.org/jira/browse/PHOENIX-6023
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6023-addendum.master.v1.txt
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Steps to reproduce are as follows:
> 1. Create an immutable table with multiple column families:
> {code}
> 0: jdbc:phoenix:> CREATE TABLE TEST (
> . . . . . . . . >   ID VARCHAR PRIMARY KEY,
> . . . . . . . . >   A.COL1 VARCHAR,
> . . . . . . . . >   B.COL2 VARCHAR
> . . . . . . . . > ) IMMUTABLE_ROWS = TRUE;
> No rows affected (1.182 seconds)
> {code}
> 2. Upsert some rows:
> {code}
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id0', '0', 'a');
> 1 row affected (0.138 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id1', '1', NULL);
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id2', '2', 'b');
> 1 row affected (0.011 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id3', '3', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id4', '4', 'c');
> 1 row affected (0.006 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id5', '5', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id6', '6', 'd');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id7', '7', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id8', '8', 'e');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id9', '9', NULL);
> 1 row affected (0.009 seconds)
> {code}
> 3. Count query is okay:
> {code}
> 0: jdbc:phoenix:> SELECT COUNT(COL1) FROM TEST WHERE COL2 IS NOT NULL;
> +----------------+
> | COUNT(A.COL1)  |
> +----------------+
> | 5              |
> +----------------+
> 1 row selected (0.1 seconds)
> {code}
> 4. However, the following select query returns a wrong result (it should 
> return 5 records):
> {code}
> 0: jdbc:phoenix:> SELECT COL1 FROM TEST WHERE COL2 IS NOT NULL;
> +-------+
> | COL1  |
> +-------+
> | 0     |
> | 1     |
> | 2     |
> | 3     |
> | 4     |
> | 5     |
> | 6     |
> | 7     |
> | 8     |
> | 9     |
> +-------+
> 10 rows selected (0.058 seconds)
> {code}





[jira] [Reopened] (PHOENIX-6023) Wrong result when issuing query for an immutable table with multiple column families

2020-07-22 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki reopened PHOENIX-6023:
---

> Wrong result when issuing query for an immutable table with multiple column 
> families
> 
>
> Key: PHOENIX-6023
> URL: https://issues.apache.org/jira/browse/PHOENIX-6023
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Steps to reproduce are as follows:
> 1. Create an immutable table with multiple column families:
> {code}
> 0: jdbc:phoenix:> CREATE TABLE TEST (
> . . . . . . . . >   ID VARCHAR PRIMARY KEY,
> . . . . . . . . >   A.COL1 VARCHAR,
> . . . . . . . . >   B.COL2 VARCHAR
> . . . . . . . . > ) IMMUTABLE_ROWS = TRUE;
> No rows affected (1.182 seconds)
> {code}
> 2. Upsert some rows:
> {code}
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id0', '0', 'a');
> 1 row affected (0.138 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id1', '1', NULL);
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id2', '2', 'b');
> 1 row affected (0.011 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id3', '3', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id4', '4', 'c');
> 1 row affected (0.006 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id5', '5', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id6', '6', 'd');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id7', '7', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id8', '8', 'e');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id9', '9', NULL);
> 1 row affected (0.009 seconds)
> {code}
> 3. Count query is okay:
> {code}
> 0: jdbc:phoenix:> SELECT COUNT(COL1) FROM TEST WHERE COL2 IS NOT NULL;
> +----------------+
> | COUNT(A.COL1)  |
> +----------------+
> | 5              |
> +----------------+
> 1 row selected (0.1 seconds)
> {code}
> 4. However, the following select query returns a wrong result (it should 
> return 5 records):
> {code}
> 0: jdbc:phoenix:> SELECT COL1 FROM TEST WHERE COL2 IS NOT NULL;
> +-------+
> | COL1  |
> +-------+
> | 0     |
> | 1     |
> | 2     |
> | 3     |
> | 4     |
> | 5     |
> | 6     |
> | 7     |
> | 8     |
> | 9     |
> +-------+
> 10 rows selected (0.058 seconds)
> {code}





[jira] [Updated] (PHOENIX-6023) Wrong result when issuing query for an immutable table with multiple column families

2020-07-17 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-6023:
--
Description: 
Steps to reproduce are as follows:

1. Create an immutable table with multiple column families:
{code}
0: jdbc:phoenix:> CREATE TABLE TEST (
. . . . . . . . >   ID VARCHAR PRIMARY KEY,
. . . . . . . . >   A.COL1 VARCHAR,
. . . . . . . . >   B.COL2 VARCHAR
. . . . . . . . > ) IMMUTABLE_ROWS = TRUE;
No rows affected (1.182 seconds)
{code}

2. Upsert some rows:
{code}
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id0', '0', 'a');
1 row affected (0.138 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id1', '1', NULL);
1 row affected (0.009 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id2', '2', 'b');
1 row affected (0.011 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id3', '3', NULL);
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id4', '4', 'c');
1 row affected (0.006 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id5', '5', NULL);
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id6', '6', 'd');
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id7', '7', NULL);
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id8', '8', 'e');
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id9', '9', NULL);
1 row affected (0.009 seconds)
{code}

3. Count query is okay:
{code}
0: jdbc:phoenix:> SELECT COUNT(COL1) FROM TEST WHERE COL2 IS NOT NULL;
+----------------+
| COUNT(A.COL1)  |
+----------------+
| 5              |
+----------------+
1 row selected (0.1 seconds)
{code}

4. However, the following select query returns a wrong result (it should return 
5 records):
{code}
0: jdbc:phoenix:> SELECT COL1 FROM TEST WHERE COL2 IS NOT NULL;
+-------+
| COL1  |
+-------+
| 0     |
| 1     |
| 2     |
| 3     |
| 4     |
| 5     |
| 6     |
| 7     |
| 8     |
| 9     |
+-------+
10 rows selected (0.058 seconds)
{code}

  was:
Steps to reproduce are as follows:

1. Create an immutable table with multiple column families:
{code}
0: jdbc:phoenix:> CREATE TABLE TEST (
. . . . . . . . >   ID VARCHAR PRIMARY KEY,
. . . . . . . . >   A.COL1 VARCHAR,
. . . . . . . . >   B.COL2 VARCHAR
. . . . . . . . > ) IMMUTABLE_ROWS = TRUE;
No rows affected (1.182 seconds)
{code}

2. Upsert some rows:
{code}
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id0', '0', 'a');
1 row affected (0.138 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id1', '1', NULL);
1 row affected (0.009 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id2', '2', 'b');
1 row affected (0.011 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id3', '3', NULL);
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id4', '4', 'c');
1 row affected (0.006 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id5', '5', NULL);
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id6', '6', 'd');
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id7', '7', NULL);
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id8', '8', 'e');
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id9', '9', NULL);
1 row affected (0.009 seconds)
{code}

3. Count query is okay:
{code}
0: jdbc:phoenix:> SELECT COUNT(COL1) FROM TEST WHERE COL2 IS NOT NULL;
+----------------+
| COUNT(A.COL1)  |
+----------------+
| 5              |
+----------------+
1 row selected (0.1 seconds)
{code}

4. However, the following select query returns a wrong result:
{code}
0: jdbc:phoenix:> SELECT COL1 FROM TEST WHERE COL2 IS NOT NULL;
+-------+
| COL1  |
+-------+
| 0     |
| 1     |
| 2     |
| 3     |
| 4     |
| 5     |
| 6     |
| 7     |
| 8     |
| 9     |
+-------+
10 rows selected (0.058 seconds)
{code}


> Wrong result when issuing query for an immutable table with multiple column 
> families
> 
>
> Key: PHOENIX-6023
> URL: https://issues.apache.org/jira/browse/PHOENIX-6023
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> Steps to reproduce are as follows:
> 1. Create an immutable table with multiple column families:
> {code}
> 0: jdbc:phoenix:> CREATE TABLE TEST (
> . . . . . . . . >   ID VARCHAR PRIMARY KEY,
> . . . . . . . . >   A.COL1 VARCHAR,
> . . . . . . . . >   B.COL2 VARCHAR
> . . . . . . . . > ) IMMUTABLE_ROWS = TRUE;
> No rows affected (1.182 seconds)
> {code}
> 2. Upsert some rows:
> {code}
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id0', '0', 'a');
> 1 row affected (0.138 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id1', '1', NULL);
> 1 row affected (0.009 

[jira] [Created] (PHOENIX-6023) Wrong result when issuing query for an immutable table with multiple column families

2020-07-17 Thread Toshihiro Suzuki (Jira)
Toshihiro Suzuki created PHOENIX-6023:
-

 Summary: Wrong result when issuing query for an immutable table 
with multiple column families
 Key: PHOENIX-6023
 URL: https://issues.apache.org/jira/browse/PHOENIX-6023
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


Steps to reproduce are as follows:

1. Create an immutable table with multiple column families:
{code}
0: jdbc:phoenix:> CREATE TABLE TEST (
. . . . . . . . >   ID VARCHAR PRIMARY KEY,
. . . . . . . . >   A.COL1 VARCHAR,
. . . . . . . . >   B.COL2 VARCHAR
. . . . . . . . > ) IMMUTABLE_ROWS = TRUE;
No rows affected (1.182 seconds)
{code}

2. Upsert some rows:
{code}
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id0', '0', 'a');
1 row affected (0.138 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id1', '1', NULL);
1 row affected (0.009 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id2', '2', 'b');
1 row affected (0.011 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id3', '3', NULL);
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id4', '4', 'c');
1 row affected (0.006 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id5', '5', NULL);
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id6', '6', 'd');
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id7', '7', NULL);
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id8', '8', 'e');
1 row affected (0.007 seconds)
0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id9', '9', NULL);
1 row affected (0.009 seconds)
{code}

3. Count query is okay:
{code}
0: jdbc:phoenix:> SELECT COUNT(COL1) FROM TEST WHERE COL2 IS NOT NULL;
+----------------+
| COUNT(A.COL1)  |
+----------------+
| 5              |
+----------------+
1 row selected (0.1 seconds)
{code}

4. However, the following select query returns a wrong result:
{code}
0: jdbc:phoenix:> SELECT COL1 FROM TEST WHERE COL2 IS NOT NULL;
+-------+
| COL1  |
+-------+
| 0     |
| 1     |
| 2     |
| 3     |
| 4     |
| 5     |
| 6     |
| 7     |
| 8     |
| 9     |
+-------+
10 rows selected (0.058 seconds)
{code}
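One way to picture the symptom (an illustrative model only, not Phoenix internals): if the scan for this immutable table fetches only the projected column family A, the filter on B.COL2 has nothing to evaluate and every row passes:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative model (not Phoenix internals): each row stores cells in two
// column families. If a plan only fetches the projected family (A) and never
// sees B.COL2, a filter on B.COL2 cannot be evaluated and all rows leak
// through -- matching the 10-rows-instead-of-5 symptom above.
public class FamilyFilterSketch {
    public static void main(String[] args) {
        Map<String, Map<String, String>> rows = new LinkedHashMap<>();
        for (int i = 0; i < 10; i++) {
            Map<String, String> cells = new HashMap<>();
            cells.put("A.COL1", String.valueOf(i));
            if (i % 2 == 0) cells.put("B.COL2", "x"); // only half have COL2
            rows.put("id" + i, cells);
        }
        // Correct plan: fetch both families so the filter can be applied.
        long correct = rows.values().stream()
                .filter(c -> c.get("B.COL2") != null).count();
        // Buggy plan: family B is never fetched, so the filter passes everything.
        long buggy = rows.values().stream()
                .filter(c -> true).count();
        if (correct != 5 || buggy != 10) throw new AssertionError();
        System.out.println(correct + " vs " + buggy); // 5 vs 10
    }
}
```

The count query works because the aggregate forces the filtered family to be read; the plain projection apparently does not.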





[jira] [Created] (PHOENIX-5662) The integration tests in phoenix-hive are broken

2020-01-07 Thread Toshihiro Suzuki (Jira)
Toshihiro Suzuki created PHOENIX-5662:
-

 Summary: The integration tests in phoenix-hive are broken
 Key: PHOENIX-5662
 URL: https://issues.apache.org/jira/browse/PHOENIX-5662
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


{{mvn verify}} doesn't run the integration tests in phoenix-hive. We need to 
fix it.
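One common cause of this symptom in Maven builds (an assumption here, not the confirmed root cause) is that the failsafe plugin is not bound to the integration-test and verify phases for the module. A minimal sketch of such a binding:

```xml
<!-- Hypothetical pom.xml fragment: bind failsafe so `mvn verify` runs *IT classes -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```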





[jira] [Updated] (PHOENIX-5619) CREATE TABLE AS SELECT for Phoenix table doesn't work correctly in Hive

2019-12-13 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5619:
--
Environment: HDP-3.1.0

> CREATE TABLE AS SELECT for Phoenix table doesn't work correctly in Hive
> ---
>
> Key: PHOENIX-5619
> URL: https://issues.apache.org/jira/browse/PHOENIX-5619
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-3.1.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The steps to reproduce are as follows:
> 1. Create a table in Phoenix:
> {code:java}
> CREATE TABLE TEST (ID VARCHAR PRIMARY KEY, COL VARCHAR);
> {code}
> 2. Create a table in Hive based on the Phoenix table created in step 1:
> {code:java}
> CREATE EXTERNAL TABLE test (id STRING, col STRING)
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
> TBLPROPERTIES (
>   "phoenix.table.name" = "TEST",
>   "phoenix.zookeeper.quorum" = "",
>   "phoenix.zookeeper.znode.parent" = "/hbase-unsecure",
>   "phoenix.zookeeper.client.port" = "2181",
>   "phoenix.rowkeys" = "ID",
>   "phoenix.column.mapping" = "id:ID, col:COL"
> );
> {code}
> 3. Insert data into the Hive table:
> {code:java}
> INSERT INTO TABLE test VALUES ('id', 'col');
> {code}
> 4. Run CREATE TABLE AS SELECT in Hive:
> {code:java}
> CREATE TABLE test2 AS SELECT * from test;
> {code}
>  
> After step 4, I hit the following error:
> {code:java}
> 2019-12-13 08:22:20,963 [DEBUG] [TezChild] |client.RpcRetryingCallerImpl|: 
> Call exception, tries=7, retries=16, started=8159 ms ago, cancelled=false, 
> msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase/meta-region-server, details=row 'SYSTEM:CATALOG' on table 
> 'hbase:meta' at null, exception=java.io.IOException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase/meta-region-server
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.get(ConnectionImplementation.java:2009)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateMeta(ConnectionImplementation.java:785)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:741)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:712)
>   at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:594)
>   at 
> org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:72)
>   at 
> org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:223)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:386)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:360)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.getTableState(MetaTableAccessor.java:1066)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:389)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:441)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:438)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallable.call(RpcRetryingCallable.java:58)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3080)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3072)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:438)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1106)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1502)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2740)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> 

[jira] [Created] (PHOENIX-5619) CREATE TABLE AS SELECT for Phoenix table doesn't work correctly in Hive

2019-12-13 Thread Toshihiro Suzuki (Jira)
Toshihiro Suzuki created PHOENIX-5619:
-

 Summary: CREATE TABLE AS SELECT for Phoenix table doesn't work 
correctly in Hive
 Key: PHOENIX-5619
 URL: https://issues.apache.org/jira/browse/PHOENIX-5619
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


The steps to reproduce are as follows:

1. Create a table in Phoenix:
{code:java}
CREATE TABLE TEST (ID VARCHAR PRIMARY KEY, COL VARCHAR);
{code}
2. Create a table in Hive based on the Phoenix table created in step 1:
{code:java}
CREATE EXTERNAL TABLE test (id STRING, col STRING)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
  "phoenix.table.name" = "TEST",
  "phoenix.zookeeper.quorum" = "",
  "phoenix.zookeeper.znode.parent" = "/hbase-unsecure",
  "phoenix.zookeeper.client.port" = "2181",
  "phoenix.rowkeys" = "ID",
  "phoenix.column.mapping" = "id:ID, col:COL"
);
{code}
3. Insert data into the Hive table:
{code:java}
INSERT INTO TABLE test VALUES ('id', 'col');
{code}
4. Run CREATE TABLE AS SELECT in Hive:
{code:java}
CREATE TABLE test2 AS SELECT * from test;
{code}
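The NoNode error that follows points at /hbase/meta-region-server even though the table properties specify /hbase-unsecure, which suggests the znode parent from TBLPROPERTIES is not reaching the writer used for the CTAS target. As a purely speculative workaround (an unverified assumption, not the confirmed fix), aligning the session-wide HBase znode parent with the cluster before step 4 might help:

```sql
-- Unverified workaround: set the HBase default znode parent for the Hive session
SET zookeeper.znode.parent=/hbase-unsecure;
```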
 

After step 4, I hit the following error:
{code:java}
2019-12-13 08:22:20,963 [DEBUG] [TezChild] |client.RpcRetryingCallerImpl|: Call 
exception, tries=7, retries=16, started=8159 ms ago, cancelled=false, 
msg=org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
NoNode for /hbase/meta-region-server, details=row 'SYSTEM:CATALOG' on table 
'hbase:meta' at null, exception=java.io.IOException: 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /hbase/meta-region-server
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.get(ConnectionImplementation.java:2009)
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateMeta(ConnectionImplementation.java:785)
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:741)
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:712)
at 
org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:594)
at 
org.apache.hadoop.hbase.client.HRegionLocator.getRegionLocation(HRegionLocator.java:72)
at 
org.apache.hadoop.hbase.client.RegionServerCallable.prepare(RegionServerCallable.java:223)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:105)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:386)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:360)
at 
org.apache.hadoop.hbase.MetaTableAccessor.getTableState(MetaTableAccessor.java:1066)
at 
org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:389)
at 
org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:441)
at 
org.apache.hadoop.hbase.client.HBaseAdmin$6.rpcCall(HBaseAdmin.java:438)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallable.call(RpcRetryingCallable.java:58)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3080)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3072)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:438)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1106)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1502)
at 
org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2740)
at 
org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
at 
org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2570)
at 

[jira] [Updated] (PHOENIX-5608) upgrading CATALOG table fails when setting phoenix.connection.autoCommit=true

2019-12-09 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5608:
--
Attachment: PHOENIX-5608.master.v1.patch

> upgrading CATALOG table fails when setting phoenix.connection.autoCommit=true
> -
>
> Key: PHOENIX-5608
> URL: https://issues.apache.org/jira/browse/PHOENIX-5608
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-3.1.4
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Minor
> Attachments: PHOENIX-5608.master.v1.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When setting phoenix.connection.autoCommit=true, upgrading CATALOG table 
> fails with the following error:
> {Code}
> Error: java.util.NoSuchElementException (state=,code=0)
> java.sql.SQLException: java.util.NoSuchElementException
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3211)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2616)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2533)
> at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2533)
> at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
> at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
> at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
> at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
> at sqlline.Commands.connect(Commands.java:1064)
> at sqlline.Commands.connect(Commands.java:996)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
> at sqlline.SqlLine.dispatch(SqlLine.java:809)
> at sqlline.SqlLine.initArgs(SqlLine.java:588)
> at sqlline.SqlLine.begin(SqlLine.java:661)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.util.NoSuchElementException
> at java.util.Collections$EmptyIterator.next(Collections.java:4189)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumnQualifierColumn(ConnectionQueryServicesImpl.java:3267)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemCatalogIfRequired(ConnectionQueryServicesImpl.java:2976)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3107)
> ... 21 more
> {Code}
> Looks like the following code assumes autoCommit=false, but autoCommit is true 
> in this case, so upgrading fails:
> https://github.com/apache/phoenix/blob/5b84341d5b45421675fb2b0d1ffdd2cf46f8e395/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L3926-L4002
> I think we need to call PhoenixConnection.setAutoCommit(false) explicitly 
> here.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5608) upgrading CATALOG table fails when setting phoenix.connection.autoCommit=true

2019-12-09 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5608:
--
Description: 
When setting phoenix.connection.autoCommit=true, upgrading CATALOG table fails 
with the following error:

{Code}
Error: java.util.NoSuchElementException (state=,code=0)
java.sql.SQLException: java.util.NoSuchElementException
at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3211)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2616)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2533)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2533)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:661)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: java.util.NoSuchElementException
at java.util.Collections$EmptyIterator.next(Collections.java:4189)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumnQualifierColumn(ConnectionQueryServicesImpl.java:3267)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemCatalogIfRequired(ConnectionQueryServicesImpl.java:2976)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3107)
... 21 more
{Code}

Looks like the following code assumes autoCommit=false, but autoCommit is true 
in this case, so upgrading fails:
https://github.com/apache/phoenix/blob/5b84341d5b45421675fb2b0d1ffdd2cf46f8e395/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L3926-L4002

I think we need to call PhoenixConnection.setAutoCommit(false) explicitly here.

  was:
When setting phoenix.connection.autoCommit=true, upgrading CATALOG table fails 
with the following error:

{Code}
Error: java.util.NoSuchElementException (state=,code=0)
java.sql.SQLException: java.util.NoSuchElementException
at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3211)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2616)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2533)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2533)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
   

[jira] [Created] (PHOENIX-5608) upgrading CATALOG table fails when setting phoenix.connection.autoCommit=true

2019-12-08 Thread Toshihiro Suzuki (Jira)
Toshihiro Suzuki created PHOENIX-5608:
-

 Summary: upgrading CATALOG table fails when setting 
phoenix.connection.autoCommit=true
 Key: PHOENIX-5608
 URL: https://issues.apache.org/jira/browse/PHOENIX-5608
 Project: Phoenix
  Issue Type: Bug
 Environment: HDP-3.1.4
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


When setting phoenix.connection.autoCommit=true, upgrading CATALOG table fails 
with the following error:

{Code}
Error: java.util.NoSuchElementException (state=,code=0)
java.sql.SQLException: java.util.NoSuchElementException
at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3211)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2616)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2533)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2533)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:661)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: java.util.NoSuchElementException
at java.util.Collections$EmptyIterator.next(Collections.java:4189)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumnQualifierColumn(ConnectionQueryServicesImpl.java:3267)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemCatalogIfRequired(ConnectionQueryServicesImpl.java:2976)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3107)
... 21 more
{Code}

Looks like the following code assumes autoCommit=false, but autoCommit is true 
in this case, so upgrading fails:
https://github.com/apache/phoenix/blob/5b84341d5b45421675fb2b0d1ffdd2cf46f8e395/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L3926-L4002

I think we need to call PhoenixConnection.setAutoCommit(false) explicitly here.
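The fix proposed above is a save-and-restore pattern around the upgrade path. Below is a minimal sketch of that pattern, illustrated with Python's sqlite3 since the Phoenix upgrade code is not runnable in isolation; the function and table names are hypothetical, and `PhoenixConnection.setAutoCommit(false)` plays the role of forcing manual-commit mode:

```python
import sqlite3

# Hypothetical sketch: code that assumes manual-commit mode should force it
# explicitly and restore the caller's setting afterwards, instead of assuming
# the connection already has autocommit off.
def run_upgrade(conn):
    saved = conn.isolation_level       # None means autocommit mode in sqlite3
    conn.isolation_level = ""          # force manual-transaction mode
    try:
        conn.execute("CREATE TABLE t (id INTEGER)")
        conn.execute("INSERT INTO t VALUES (1)")
        conn.commit()                  # explicit commit, as the upgrade code expects
    finally:
        conn.isolation_level = saved   # restore the caller's setting

conn = sqlite3.connect(":memory:")
conn.isolation_level = None            # caller runs with autocommit on
run_upgrade(conn)
rows = conn.execute("SELECT id FROM t").fetchall()
```

The same shape would apply here: save the result of getAutoCommit(), call setAutoCommit(false) before the mutation loop, and restore the saved value in a finally block.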





[jira] [Updated] (PHOENIX-5594) Different permission of phoenix-*-queryserver.log from umask

2019-11-27 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5594:
--
Description: 
The permission of phoenix-*-queryserver.log is different from the umask we set.

For example, when we set umask to 077, the permission of 
phoenix-*-queryserver.log should be 600, but it's 666:
{code}
$ umask 077
$ /bin/queryserver.py start
starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log
$ ll /var/log/hbase/phoenix*
-rw-rw-rw- 1 hbase hadoop 6181 Nov 27 13:52 phoenix-hbase-queryserver.log
-rw------- 1 hbase hadoop 1358 Nov 27 13:52 phoenix-hbase-queryserver.out
{code}

It looks like the permission of phoenix-*-queryserver.out is correct (600).

queryserver.py starts the QueryServer process as a subprocess, but it looks like 
the umask is not inherited. I think we need to propagate the umask to the subprocess.


  was:
The permission of phoenix-*-queryserver.log is different from the umask we set.

For example, when we set umask to 077, the permission of 
phoenix-*-queryserver.log should be 600, but it's 666:
{code}
$ umask 077
$ /bin/queryserver.py start
starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log
$ ll /phoenix*
-rw-rw-rw- 1 hbase hadoop 6181 Nov 27 13:52 phoenix-hbase-queryserver.log
-rw------- 1 hbase hadoop 1358 Nov 27 13:52 phoenix-hbase-queryserver.out
{code}

It looks like the permission of phoenix-*-queryserver.out is correct (600).

queryserver.py starts the QueryServer process as a subprocess, but it looks like 
the umask is not inherited. I think we need to propagate the umask to the subprocess.



> Different permission of phoenix-*-queryserver.log from umask
> 
>
> Key: PHOENIX-5594
> URL: https://issues.apache.org/jira/browse/PHOENIX-5594
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> The permission of phoenix-*-queryserver.log is different from the umask we set.
> For example, when we set umask to 077, the permission of 
> phoenix-*-queryserver.log should be 600, but it's 666:
> {code}
> $ umask 077
> $ /bin/queryserver.py start
> starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log
> $ ll /var/log/hbase/phoenix*
> -rw-rw-rw- 1 hbase hadoop 6181 Nov 27 13:52 phoenix-hbase-queryserver.log
> -rw------- 1 hbase hadoop 1358 Nov 27 13:52 phoenix-hbase-queryserver.out
> {code}
> It looks like the permission of phoenix-*-queryserver.out is correct (600).
> queryserver.py starts the QueryServer process as a subprocess, but it looks like 
> the umask is not inherited. I think we need to propagate the umask to the 
> subprocess.
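One hedged sketch of a fix (the names here are illustrative, not queryserver.py's actual code): read the parent's umask without changing it, then re-apply it in the child via subprocess.Popen's `preexec_fn`, which runs in the child between fork() and exec(), so any files the child creates honor the mask. POSIX only:

```python
import os
import subprocess
import sys

def current_umask():
    mask = os.umask(0)   # os.umask sets a new mask and returns the old one
    os.umask(mask)       # restore immediately so the parent is unchanged
    return mask

mask = current_umask()
# The child prints its own umask so we can check that it matches the parent's.
proc = subprocess.Popen(
    [sys.executable, "-c", "import os; print(oct(os.umask(0)))"],
    preexec_fn=lambda: os.umask(mask),  # runs in the child before exec
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()
child_mask = out.decode().strip()
```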





[jira] [Created] (PHOENIX-5594) Different permission of phoenix-*-queryserver.log from umask

2019-11-27 Thread Toshihiro Suzuki (Jira)
Toshihiro Suzuki created PHOENIX-5594:
-

 Summary: Different permission of phoenix-*-queryserver.log from 
umask
 Key: PHOENIX-5594
 URL: https://issues.apache.org/jira/browse/PHOENIX-5594
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


The permission of phoenix-*-queryserver.log is different from the umask we set.

For example, when we set umask to 077, the permission of 
phoenix-*-queryserver.log should be 600, but it's 666:
{code}
$ umask 077
$ /bin/queryserver.py start
starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log
$ ll /phoenix*
-rw-rw-rw- 1 hbase hadoop 6181 Nov 27 13:52 phoenix-hbase-queryserver.log
-rw------- 1 hbase hadoop 1358 Nov 27 13:52 phoenix-hbase-queryserver.out
{code}

It looks like the permission of phoenix-*-queryserver.out is correct (600).

queryserver.py starts the QueryServer process as a subprocess, but it looks like 
the umask is not inherited. I think we need to propagate the umask to the subprocess.






[jira] [Updated] (PHOENIX-5552) Hive against Phoenix gets 'Expecting "RPAREN", got "L"' in Tez mode

2019-11-03 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5552:
--
Description: 
Steps to reproduce are as follows:

1. Create a table that has a BIGINT column in Phoenix:
{code:java}
CREATE TABLE TBL (
 COL1 VARCHAR PRIMARY KEY,
 COL2 BIGINT
);
{code}
2. Create an external table in Hive against the table created in step 1:
{code:java}
create external table tbl (
 col1 string,
 col2 bigint
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
 "phoenix.table.name" = "TBL",
 "phoenix.zookeeper.quorum" = ...,
 "phoenix.zookeeper.znode.parent" = ...,
 "phoenix.zookeeper.client.port" = "2181",
 "phoenix.rowkeys" = "COL1",
 "phoenix.column.mapping" = "col1:COL1,col2:COL2"
);
{code}
3. Issue a query against the Hive table with a condition on the BIGINT column in 
Tez mode; the following error happens:
{code:java}
> select * from tbl where col2 = 100;
Error: java.io.IOException: java.lang.RuntimeException: 
org.apache.phoenix.exception.PhoenixParserException: ERROR 603 (42P00): Syntax 
error. Unexpected input. Expecting "RPAREN", got "L" at line 1, column 67. 
(state=,code=0)
{code}
In this case, the problem is that Hive passes the whereClause "col2=100L" (a 
bigint value with an 'L' suffix) to Phoenix, but Phoenix can't accept a bigint 
value with 'L', so the syntax error happens.

We need to remove the 'L' suffix from bigint values when building Phoenix queries.

It looks like this issue happens only in Tez mode.

  was:
Steps to reproduce are as follows:

1. Create a table that has a BIGINT column in Phoenix:
{code:java}
CREATE TABLE TBL (
 COL1 VARCHAR PRIMARY KEY,
 COL2 BIGINT
);
{code}
2. Create an external table in Hive against the table created in step 1:
{code:java}
create external table tbl (
 col1 string,
 col2 bigint
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
 "phoenix.table.name" = "TBL",
 "phoenix.zookeeper.quorum" = ...,
 "phoenix.zookeeper.znode.parent" = ...,
 "phoenix.zookeeper.client.port" = "2181",
 "phoenix.rowkeys" = "COL1",
 "phoenix.column.mapping" = "col1:COL1,col2:COL2"
);
{code}
3. Issue a query against the Hive table with a condition on the BIGINT column in 
Tez mode; the following error happens:
{code:java}
> select * from tbl where col2 = 100;
Error: java.io.IOException: java.lang.RuntimeException: 
org.apache.phoenix.exception.PhoenixParserException: ERROR 603 (42P00): Syntax 
error. Unexpected input. Expecting "RPAREN", got "L" at line 1, column 67. 
(state=,code=0)
{code}
In this case, the problem is that Hive passes the whereClause "col2=100L" (a 
bigint value with an 'L' suffix) to Phoenix, but Phoenix can't accept a bigint 
value with 'L', so the syntax error happens.

We need to remove the 'L' suffix from bigint values when building Phoenix queries.


> Hive against Phoenix gets 'Expecting "RPAREN", got "L"' in Tez mode
> ---
>
> Key: PHOENIX-5552
> URL: https://issues.apache.org/jira/browse/PHOENIX-5552
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> Steps to reproduce are as follows:
> 1. Create a table that has a BIGINT column in Phoenix:
> {code:java}
> CREATE TABLE TBL (
>  COL1 VARCHAR PRIMARY KEY,
>  COL2 BIGINT
> );
> {code}
> 2. Create an external table in Hive against the table created in step 1:
> {code:java}
> create external table tbl (
>  col1 string,
>  col2 bigint
> )
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
> TBLPROPERTIES (
>  "phoenix.table.name" = "TBL",
>  "phoenix.zookeeper.quorum" = ...,
>  "phoenix.zookeeper.znode.parent" = ...,
>  "phoenix.zookeeper.client.port" = "2181",
>  "phoenix.rowkeys" = "COL1",
>  "phoenix.column.mapping" = "col1:COL1,col2:COL2"
> );
> {code}
> 3. Issue a query against the Hive table with a condition on the BIGINT column in 
> Tez mode; the following error happens:
> {code:java}
> > select * from tbl where col2 = 100;
> Error: java.io.IOException: java.lang.RuntimeException: 
> org.apache.phoenix.exception.PhoenixParserException: ERROR 603 (42P00): 
> Syntax error. Unexpected input. Expecting "RPAREN", got "L" at line 1, column 
> 67. (state=,code=0)
> {code}
> In this case, the problem is that Hive passes the whereClause "col2=100L" (a 
> bigint value with an 'L' suffix) to Phoenix, but Phoenix can't accept a bigint 
> value with 'L', so the syntax error happens.
> We need to remove the 'L' suffix from bigint values when building Phoenix queries.
> It looks like this issue happens only in Tez mode.
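The suffix-stripping step described above can be sketched with a simple regex pass. This is not the actual Phoenix/Hive connector code, just an illustration of the transformation; it deliberately ignores the corner case of literals inside quoted strings:

```python
import re

# Strip the 'L'/'l' suffix that Hive appends to bigint literals before
# handing the predicate to Phoenix's SQL parser: "col2 = 100L" -> "col2 = 100".
_BIGINT_SUFFIX = re.compile(r"\b(\d+)[Ll]\b")

def strip_bigint_suffix(where_clause: str) -> str:
    # Keep only the digits, dropping the trailing L/l.
    return _BIGINT_SUFFIX.sub(r"\1", where_clause)

fixed = strip_bigint_suffix("col2 = 100L")
```

The `\b(\d+)` part ensures only numeric literals match, so identifiers or string contents like 'ALL' are left untouched.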





[jira] [Created] (PHOENIX-5552) Hive against Phoenix gets 'Expecting "RPAREN", got "L"' in Tez mode

2019-10-29 Thread Toshihiro Suzuki (Jira)
Toshihiro Suzuki created PHOENIX-5552:
-

 Summary: Hive against Phoenix gets 'Expecting "RPAREN", got "L"' 
in Tez mode
 Key: PHOENIX-5552
 URL: https://issues.apache.org/jira/browse/PHOENIX-5552
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


Steps to reproduce are as follows:

1. Create a table that has a BIGINT column in Phoenix:
{code:java}
CREATE TABLE TBL (
 COL1 VARCHAR PRIMARY KEY,
 COL2 BIGINT
);
{code}
2. Create an external table in Hive against the table created in step 1:
{code:java}
create external table tbl (
 col1 string,
 col2 bigint
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
 "phoenix.table.name" = "TBL",
 "phoenix.zookeeper.quorum" = ...,
 "phoenix.zookeeper.znode.parent" = ...,
 "phoenix.zookeeper.client.port" = "2181",
 "phoenix.rowkeys" = "COL1",
 "phoenix.column.mapping" = "col1:COL1,col2:COL2"
);
{code}
3. Issue a query against the Hive table with a condition on the BIGINT column in 
Tez mode; the following error happens:
{code:java}
> select * from tbl where col2 = 100;
Error: java.io.IOException: java.lang.RuntimeException: 
org.apache.phoenix.exception.PhoenixParserException: ERROR 603 (42P00): Syntax 
error. Unexpected input. Expecting "RPAREN", got "L" at line 1, column 67. 
(state=,code=0)
{code}
In this case, the problem is that Hive passes the whereClause "col2=100L" (a 
bigint value with an 'L' suffix) to Phoenix, but Phoenix can't accept a bigint 
value with 'L', so the syntax error happens.

We need to remove the 'L' suffix from bigint values when building Phoenix queries.





[jira] [Updated] (PHOENIX-5208) Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5208:
--
Attachment: PHOENIX-5208.master.v1.patch

> Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns
> 
>
> Key: PHOENIX-5208
> URL: https://issues.apache.org/jira/browse/PHOENIX-5208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5208.master.v1.patch
>
>
> -Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
> expected.-
> -Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
> colDoesNotExist", then nothing will be changed in the table because 
> colDoesNotExist doesn't exists.-
> -The general expectation would be all non-existing columns in the statement 
> will be just ignored.-
>  
> Unlike ALTER TABLE ADD IF NOT EXISTS (see PHOENIX-1614 for details), ALTER 
> TABLE DROP COLUMN IF EXISTS works as expected. In this Jira, just add UT code 
> for "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns.





[jira] [Updated] (PHOENIX-5208) Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5208:
--
Attachment: (was: PHOENIX-5208.master.v1.patch)

> Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns
> 
>
> Key: PHOENIX-5208
> URL: https://issues.apache.org/jira/browse/PHOENIX-5208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> -Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
> expected.-
> -Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
> colDoesNotExist", then nothing will be changed in the table because 
> colDoesNotExist doesn't exists.-
> -The general expectation would be all non-existing columns in the statement 
> will be just ignored.-
>  
> Unlike ALTER TABLE ADD IF NOT EXISTS (see PHOENIX-1614 for details), ALTER 
> TABLE DROP COLUMN IF EXISTS works as expected. In this Jira, just add UT code 
> for "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns.





[jira] [Updated] (PHOENIX-5208) Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5208:
--
Description: 
-Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
expected.-

-Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
colDoesNotExist", then nothing will be changed in the table because 
colDoesNotExist doesn't exists.-

-The general expectation would be all non-existing columns in the statement 
will be just ignored.-

 

Unlike ALTER TABLE ADD IF NOT EXISTS (see PHOENIX-1614 for details), ALTER 
TABLE DROP COLUMN IF EXISTS works as expected. In this Jira, just add UT code 
for "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns.

  was:
-Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
expected.-

-Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
colDoesNotExist", then nothing will be changed in the table because 
colDoesNotExist doesn't exists.-

-The general expectation would be all non-existing columns in the statement 
will be just ignored.-

 

Unlike ALTER TABLE ADD IF NOT EXISTS, ALTER TABLE DROP COLUMN IF EXISTS works 
as expected. In this Jira, just add UT code for "ALTER TABLE DROP COLUMN IF 
EXISTS" with multiple columns.


> Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns
> 
>
> Key: PHOENIX-5208
> URL: https://issues.apache.org/jira/browse/PHOENIX-5208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5208.master.v1.patch
>
>
> -Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
> expected.-
> -Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
> colDoesNotExist", then nothing will be changed in the table because 
> colDoesNotExist doesn't exists.-
> -The general expectation would be all non-existing columns in the statement 
> will be just ignored.-
>  
> Unlike ALTER TABLE ADD IF NOT EXISTS (see PHOENIX-1614 for details), ALTER 
> TABLE DROP COLUMN IF EXISTS works as expected. In this Jira, just add UT code 
> for "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns.





[jira] [Updated] (PHOENIX-5208) Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5208:
--
Description: 
-Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
expected.-

-Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
colDoesNotExist", then nothing will be changed in the table because 
colDoesNotExist doesn't exists.-

-The general expectation would be all non-existing columns in the statement 
will be just ignored.-

 

Unlike ALTER TABLE ADD IF NOT EXISTS, ALTER TABLE DROP COLUMN IF EXISTS works 
as expected. Just Add UT code for 

  was:
-Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
expected.-

-Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
colDoesNotExist", then nothing will be changed in the table because 
colDoesNotExist doesn't exists.-

-The general expectation would be all non-existing columns in the statement 
will be just ignored.-

 

Unlike ALTER TABLE ADD IF NOT EXISTS, 


> Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns
> 
>
> Key: PHOENIX-5208
> URL: https://issues.apache.org/jira/browse/PHOENIX-5208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5208.master.v1.patch
>
>
> -Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
> expected.-
> -Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
> colDoesNotExist", then nothing will be changed in the table because 
> colDoesNotExist doesn't exists.-
> -The general expectation would be all non-existing columns in the statement 
> will be just ignored.-
>  
> Unlike ALTER TABLE ADD IF NOT EXISTS, ALTER TABLE DROP COLUMN IF EXISTS works 
> as expected. Just Add UT code for 





[jira] [Updated] (PHOENIX-5208) Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5208:
--
Description: 
-Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
expected.-

-Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
colDoesNotExist", then nothing will be changed in the table because 
colDoesNotExist doesn't exists.-

-The general expectation would be all non-existing columns in the statement 
will be just ignored.-

 

Unlike ALTER TABLE ADD IF NOT EXISTS, ALTER TABLE DROP COLUMN IF EXISTS works 
as expected. In this Jira, just add UT code for "ALTER TABLE DROP COLUMN IF 
EXISTS" with multiple columns.

  was:
-Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
expected.-

-Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
colDoesNotExist", then nothing will be changed in the table because 
colDoesNotExist doesn't exists.-

-The general expectation would be all non-existing columns in the statement 
will be just ignored.-

 

Unlike ALTER TABLE ADD IF NOT EXISTS, ALTER TABLE DROP COLUMN IF EXISTS works 
as expected. Just Add UT code for 


> Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns
> 
>
> Key: PHOENIX-5208
> URL: https://issues.apache.org/jira/browse/PHOENIX-5208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5208.master.v1.patch
>
>
> -Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
> expected.-
> -Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
> colDoesNotExist", then nothing will be changed in the table because 
> colDoesNotExist doesn't exists.-
> -The general expectation would be all non-existing columns in the statement 
> will be just ignored.-
>  
> Unlike ALTER TABLE ADD IF NOT EXISTS, ALTER TABLE DROP COLUMN IF EXISTS works 
> as expected. In this Jira, just add UT code for "ALTER TABLE DROP COLUMN IF 
> EXISTS" with multiple columns.





[jira] [Updated] (PHOENIX-5208) Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5208:
--
Description: 
-Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
expected.-

-Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
colDoesNotExist", then nothing will be changed in the table because 
colDoesNotExist doesn't exists.-

-The general expectation would be all non-existing columns in the statement 
will be just ignored.-

 

Unlike ALTER TABLE ADD IF NOT EXISTS, 

  was:
Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
expected.

Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
colDoesNotExist", nothing will be changed in the table because 
colDoesNotExist doesn't exist.

The general expectation would be that all non-existing columns in the 
statement are just ignored.


> Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns
> 
>
> Key: PHOENIX-5208
> URL: https://issues.apache.org/jira/browse/PHOENIX-5208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5208.master.v1.patch
>
>
> -Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
> expected.-
> -Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
> colDoesNotExist", then nothing will be changed in the table because 
> colDoesNotExist doesn't exists.-
> -The general expectation would be all non-existing columns in the statement 
> will be just ignored.-
>  
> Unlike ALTER TABLE ADD IF NOT EXISTS, 





[jira] [Updated] (PHOENIX-5208) Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5208:
--
Summary: Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with 
multiple columns  (was: ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
expected)

> Add a test case of "ALTER TABLE DROP COLUMN IF EXISTS" with multiple columns
> 
>
> Key: PHOENIX-5208
> URL: https://issues.apache.org/jira/browse/PHOENIX-5208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5208.master.v1.patch
>
>
> Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
> expected.
> Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
> colDoesNotExist", nothing will be changed in the table because 
> colDoesNotExist doesn't exist.
> The general expectation would be that all non-existing columns in the 
> statement are just ignored.





[jira] [Updated] (PHOENIX-5208) ALTER TABLE DROP COLUMN IF EXISTS doesn't work as expected

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5208:
--
Description: 
Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
expected.

Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
colDoesNotExist", nothing will be changed in the table because 
colDoesNotExist doesn't exist.

The general expectation would be that all non-existing columns in the 
statement are just ignored.

  was:
Similar to PHOENIX-1614, DROP COLUMN IF EXISTS doesn't work as expected.

Executing "DROP COLUMN IF EXISTS colAlreadyExists, colDoesNotExist", 
nothing will be changed in the table because colDoesNotExist doesn't exist.

The general expectation would be that all non-existing columns in the 
statement are just ignored.


> ALTER TABLE DROP COLUMN IF EXISTS doesn't work as expected
> --
>
> Key: PHOENIX-5208
> URL: https://issues.apache.org/jira/browse/PHOENIX-5208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5208.master.v1.patch
>
>
> Similar to PHOENIX-1614, ALTER TABLE DROP COLUMN IF EXISTS doesn't work as 
> expected.
> Executing "ALTER TABLE DROP COLUMN IF EXISTS colAlreadyExists, 
> colDoesNotExist", nothing will be changed in the table because 
> colDoesNotExist doesn't exist.
> The general expectation would be that all non-existing columns in the 
> statement are just ignored.





[jira] [Updated] (PHOENIX-5208) ALTER TABLE DROP COLUMN IF EXISTS doesn't work as expected

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5208:
--
Summary: ALTER TABLE DROP COLUMN IF EXISTS doesn't work as expected  (was: 
DROP COLUMN IF EXISTS doesn't work as expected)

> ALTER TABLE DROP COLUMN IF EXISTS doesn't work as expected
> --
>
> Key: PHOENIX-5208
> URL: https://issues.apache.org/jira/browse/PHOENIX-5208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5208.master.v1.patch
>
>
> Similar to PHOENIX-1614, DROP COLUMN IF EXISTS doesn't work as expected.
> Executing "DROP COLUMN IF EXISTS colAlreadyExists, colDoesNotExist", 
> nothing will be changed in the table because colDoesNotExist doesn't exist.
> The general expectation would be that all non-existing columns in the 
> statement are just ignored.





[jira] [Updated] (PHOENIX-5208) DROP COLUMN IF EXISTS doesn't work as expected

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5208:
--
Attachment: PHOENIX-5208.master.v1.patch

> DROP COLUMN IF EXISTS doesn't work as expected
> --
>
> Key: PHOENIX-5208
> URL: https://issues.apache.org/jira/browse/PHOENIX-5208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5208.master.v1.patch
>
>
> Similar to PHOENIX-1614, DROP COLUMN IF EXISTS doesn't work as expected.
> Executing "DROP COLUMN IF EXISTS colAlreadyExists, colDoesNotExist", 
> nothing will be changed in the table because colDoesNotExist doesn't exist.
> The general expectation would be that all non-existing columns in the 
> statement are just ignored.





[jira] [Updated] (PHOENIX-5210) NullPointerException when alter options of a table that is appendOnlySchema

2019-10-20 Thread Toshihiro Suzuki (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5210:
--
Attachment: PHOENIX-5210.master.v2.patch

> NullPointerException when alter options of a table that is appendOnlySchema
> ---
>
> Key: PHOENIX-5210
> URL: https://issues.apache.org/jira/browse/PHOENIX-5210
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5210.master.v1.patch, 
> PHOENIX-5210.master.v2.patch
>
>
> I'm facing the following NullPointerException when altering options of a table 
> that is appendOnlySchema.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3545)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3517)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1440)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1833)
>   at 
> org.apache.phoenix.end2end.AppendOnlySchemaIT.testAlterTableOptions(AppendOnlySchemaIT.java:368)
> {code}
> Steps to reproduce are as follows:
> 1. Create a table that is appendOnlySchema:
> {code}
> CREATE TABLE tbl (id INTEGER PRIMARY KEY, col INTEGER) APPEND_ONLY_SCHEMA = 
> true, UPDATE_CACHE_FREQUENCY = 1;
> {code}
> 2. Alter an option of the table:
> {code}
> ALTER TABLE tbl SET STORE_NULLS = true;
> {code}
> After step 2, we will face the NullPointerException.





[jira] [Updated] (PHOENIX-5210) NullPointerException when alter options of a table that is appendOnlySchema

2019-08-14 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5210:
--
Attachment: PHOENIX-5210.master.v1.patch

> NullPointerException when alter options of a table that is appendOnlySchema
> ---
>
> Key: PHOENIX-5210
> URL: https://issues.apache.org/jira/browse/PHOENIX-5210
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5210.master.v1.patch
>
>
> I'm facing the following NullPointerException when altering options of a table 
> that is appendOnlySchema.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3545)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3517)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1440)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1833)
>   at 
> org.apache.phoenix.end2end.AppendOnlySchemaIT.testAlterTableOptions(AppendOnlySchemaIT.java:368)
> {code}
> Steps to reproduce are as follows:
> 1. Create a table that is appendOnlySchema:
> {code}
> CREATE TABLE tbl (id INTEGER PRIMARY KEY, col INTEGER) APPEND_ONLY_SCHEMA = 
> true, UPDATE_CACHE_FREQUENCY = 1;
> {code}
> 2. Alter an option of the table:
> {code}
> ALTER TABLE tbl SET STORE_NULLS = true;
> {code}
> After step 2, we will face the NullPointerException.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-5210) NullPointerException when alter options of a table that is appendOnlySchema

2019-08-14 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5210:
--
Attachment: (was: PHOENIX-5210.master.v1.patch)

> NullPointerException when alter options of a table that is appendOnlySchema
> ---
>
> Key: PHOENIX-5210
> URL: https://issues.apache.org/jira/browse/PHOENIX-5210
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5210.master.v1.patch
>
>
> I'm facing the following NullPointerException when altering options of a table 
> that is appendOnlySchema.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3545)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3517)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1440)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1833)
>   at 
> org.apache.phoenix.end2end.AppendOnlySchemaIT.testAlterTableOptions(AppendOnlySchemaIT.java:368)
> {code}
> Steps to reproduce are as follows:
> 1. Create a table that is appendOnlySchema:
> {code}
> CREATE TABLE tbl (id INTEGER PRIMARY KEY, col INTEGER) APPEND_ONLY_SCHEMA = 
> true, UPDATE_CACHE_FREQUENCY = 1;
> {code}
> 2. Alter an option of the table:
> {code}
> ALTER TABLE tbl SET STORE_NULLS = true;
> {code}
> After step 2, we will face the NullPointerException.





[jira] [Updated] (PHOENIX-5210) NullPointerException when alter options of a table that is appendOnlySchema

2019-08-14 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5210:
--
Attachment: PHOENIX-5210.master.v1.patch

> NullPointerException when alter options of a table that is appendOnlySchema
> ---
>
> Key: PHOENIX-5210
> URL: https://issues.apache.org/jira/browse/PHOENIX-5210
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5210.master.v1.patch
>
>
> I'm facing the following NullPointerException when altering options of a table 
> that is appendOnlySchema.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3545)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3517)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1440)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1833)
>   at 
> org.apache.phoenix.end2end.AppendOnlySchemaIT.testAlterTableOptions(AppendOnlySchemaIT.java:368)
> {code}
> Steps to reproduce are as follows:
> 1. Create a table that is appendOnlySchema:
> {code}
> CREATE TABLE tbl (id INTEGER PRIMARY KEY, col INTEGER) APPEND_ONLY_SCHEMA = 
> true, UPDATE_CACHE_FREQUENCY = 1;
> {code}
> 2. Alter an option of the table:
> {code}
> ALTER TABLE tbl SET STORE_NULLS = true;
> {code}
> After step 2, we will face the NullPointerException.





[jira] [Updated] (PHOENIX-5411) Incorrect result is returned when using sum function with case when statement

2019-07-25 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5411:
--
Attachment: PHOENIX-5411.master.v2.patch

> Incorrect result is returned when using sum function with case when statement
> -
>
> Key: PHOENIX-5411
> URL: https://issues.apache.org/jira/browse/PHOENIX-5411
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5411.master.v1.patch, 
> PHOENIX-5411.master.v2.patch
>
>
> In the following case, incorrect result is returned:
> {code}
> 0: jdbc:phoenix:> create table tbl (id varchar primary key, col1 varchar, 
> col2 integer);
> No rows affected (0.86 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id1', 'aaa', 2);
> 1 row affected (0.078 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id2', null, 1);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select sum(case when col1 is not null then col2 else 0 
> end), sum(case when col1 is null then col2 else 0 end) from tbl;
> +---+---+
> | SUM(CASE WHEN COL1 IS NOT NULL THEN COL2 ELSE 0 END)  | SUM(CASE WHEN COL1 
> IS NOT NULL THEN COL2 ELSE 0 END)  |
> +---+---+
> | 2 | 2   
>   |
> +---+---+
> 1 row selected (0.03 seconds)
> {code}
> The correct result is (2, 1), but (2, 2) is returned.





[jira] [Updated] (PHOENIX-5411) Incorrect result is returned when using sum function with case when statement

2019-07-24 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5411:
--
Attachment: PHOENIX-5411.master.v1.patch

> Incorrect result is returned when using sum function with case when statement
> -
>
> Key: PHOENIX-5411
> URL: https://issues.apache.org/jira/browse/PHOENIX-5411
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5411.master.v1.patch
>
>
> In the following case, incorrect result is returned:
> {code}
> 0: jdbc:phoenix:> create table tbl (id varchar primary key, col1 varchar, 
> col2 integer);
> No rows affected (0.86 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id1', 'aaa', 2);
> 1 row affected (0.078 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id2', null, 1);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select sum(case when col1 is not null then col2 else 0 
> end), sum(case when col1 is null then col2 else 0 end) from tbl;
> +---+---+
> | SUM(CASE WHEN COL1 IS NOT NULL THEN COL2 ELSE 0 END)  | SUM(CASE WHEN COL1 
> IS NOT NULL THEN COL2 ELSE 0 END)  |
> +---+---+
> | 2 | 2   
>   |
> +---+---+
> 1 row selected (0.03 seconds)
> {code}
> The correct result is (2, 1), but (2, 2) is returned.





[jira] [Updated] (PHOENIX-5411) Incorrect result is returned when using sum function with case when statement

2019-07-24 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5411:
--
Attachment: (was: PHOENIX-5209.4.14-HBase-1.3.v1.patch)

> Incorrect result is returned when using sum function with case when statement
> -
>
> Key: PHOENIX-5411
> URL: https://issues.apache.org/jira/browse/PHOENIX-5411
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> In the following case, incorrect result is returned:
> {code}
> 0: jdbc:phoenix:> create table tbl (id varchar primary key, col1 varchar, 
> col2 integer);
> No rows affected (0.86 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id1', 'aaa', 2);
> 1 row affected (0.078 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id2', null, 1);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select sum(case when col1 is not null then col2 else 0 
> end), sum(case when col1 is null then col2 else 0 end) from tbl;
> +---+---+
> | SUM(CASE WHEN COL1 IS NOT NULL THEN COL2 ELSE 0 END)  | SUM(CASE WHEN COL1 
> IS NOT NULL THEN COL2 ELSE 0 END)  |
> +---+---+
> | 2 | 2   
>   |
> +---+---+
> 1 row selected (0.03 seconds)
> {code}
> The correct result is (2, 1), but (2, 2) is returned.





[jira] [Updated] (PHOENIX-5411) Incorrect result is returned when using sum function with case when statement

2019-07-24 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5411:
--
Attachment: PHOENIX-5209.4.14-HBase-1.3.v1.patch

> Incorrect result is returned when using sum function with case when statement
> -
>
> Key: PHOENIX-5411
> URL: https://issues.apache.org/jira/browse/PHOENIX-5411
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> In the following case, incorrect result is returned:
> {code}
> 0: jdbc:phoenix:> create table tbl (id varchar primary key, col1 varchar, 
> col2 integer);
> No rows affected (0.86 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id1', 'aaa', 2);
> 1 row affected (0.078 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id2', null, 1);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select sum(case when col1 is not null then col2 else 0 
> end), sum(case when col1 is null then col2 else 0 end) from tbl;
> +---+---+
> | SUM(CASE WHEN COL1 IS NOT NULL THEN COL2 ELSE 0 END)  | SUM(CASE WHEN COL1 
> IS NOT NULL THEN COL2 ELSE 0 END)  |
> +---+---+
> | 2 | 2   
>   |
> +---+---+
> 1 row selected (0.03 seconds)
> {code}
> The correct result is (2, 1), but (2, 2) is returned.





[jira] [Updated] (PHOENIX-5411) Incorrect result is returned when using sum function with case when statement

2019-07-24 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5411:
--
Summary: Incorrect result is returned when using sum function with case 
when statement  (was: Incorrect result when using sum function with case when 
statement )

> Incorrect result is returned when using sum function with case when statement
> -
>
> Key: PHOENIX-5411
> URL: https://issues.apache.org/jira/browse/PHOENIX-5411
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> In the following case, incorrect result is returned:
> {code}
> 0: jdbc:phoenix:> create table tbl (id varchar primary key, col1 varchar, 
> col2 integer);
> No rows affected (0.86 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id1', 'aaa', 2);
> 1 row affected (0.078 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id2', null, 1);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select sum(case when col1 is not null then col2 else 0 
> end), sum(case when col1 is null then col2 else 0 end) from tbl;
> +---+---+
> | SUM(CASE WHEN COL1 IS NOT NULL THEN COL2 ELSE 0 END)  | SUM(CASE WHEN COL1 
> IS NOT NULL THEN COL2 ELSE 0 END)  |
> +---+---+
> | 2 | 2   
>   |
> +---+---+
> 1 row selected (0.03 seconds)
> {code}
> The correct result is (2, 1), but (2, 2) is returned.





[jira] [Updated] (PHOENIX-5411) Incorrect result when using sum function with case when statement

2019-07-24 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5411:
--
Description: 
In the following case, an incorrect result is returned:
{code}
0: jdbc:phoenix:> create table tbl (id varchar primary key, col1 varchar, col2 
integer);
No rows affected (0.86 seconds)
0: jdbc:phoenix:> upsert into tbl values('id1', 'aaa', 2);
1 row affected (0.078 seconds)
0: jdbc:phoenix:> upsert into tbl values('id2', null, 1);
1 row affected (0.008 seconds)
0: jdbc:phoenix:> select sum(case when col1 is not null then col2 else 0 end), 
sum(case when col1 is null then col2 else 0 end) from tbl;
+---+---+
| SUM(CASE WHEN COL1 IS NOT NULL THEN COL2 ELSE 0 END)  | SUM(CASE WHEN COL1 IS 
NOT NULL THEN COL2 ELSE 0 END)  |
+---+---+
| 2 | 2 
|
+---+---+
1 row selected (0.03 seconds)
{code}

The correct result is (2, 1), but (2, 2) is returned.
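
As a sanity check (a sketch against the same table and data as above, not 
verified here), the two aggregates can be evaluated in separate statements. 
The values noted in the comments are the arithmetically correct results for 
the two upserted rows, per the expectation stated above:

{code:sql}
-- Evaluate each aggregate in its own statement against the same two rows.
select sum(case when col1 is not null then col2 else 0 end) from tbl; -- correct value: 2
select sum(case when col1 is null then col2 else 0 end) from tbl;     -- correct value: 1
{code}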



  was:
In the following case, an incorrect result is returned:
{code}
0: jdbc:phoenix:> create table tbl (id varchar primary key, col1 varchar, col2 
integer);
No rows affected (0.86 seconds)
0: jdbc:phoenix:> upsert into tbl values('id1', 'aaa', 2);
1 row affected (0.078 seconds)
0: jdbc:phoenix:> upsert into tbl values('id2', null, 1);
1 row affected (0.008 seconds)
0: jdbc:phoenix:> select sum(case when col1 is not null then col2 else 0 end), 
sum(case when col1 is null then col2 else 0 end) from tbl;
+---+---+
| SUM(CASE WHEN COL1 IS NOT NULL THEN COL2 ELSE 0 END)  | SUM(CASE WHEN COL1 IS 
NOT NULL THEN COL2 ELSE 0 END)  |
+---+---+
| 2 | 2 
|
+---+---+
1 row selected (0.03 seconds)
{code}

The correct result is 2 and 1.




> Incorrect result when using sum function with case when statement 
> --
>
> Key: PHOENIX-5411
> URL: https://issues.apache.org/jira/browse/PHOENIX-5411
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
>
> In the following case, incorrect result is returned:
> {code}
> 0: jdbc:phoenix:> create table tbl (id varchar primary key, col1 varchar, 
> col2 integer);
> No rows affected (0.86 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id1', 'aaa', 2);
> 1 row affected (0.078 seconds)
> 0: jdbc:phoenix:> upsert into tbl values('id2', null, 1);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:> select sum(case when col1 is not null then col2 else 0 
> end), sum(case when col1 is null then col2 else 0 end) from tbl;
> +---+---+
> | SUM(CASE WHEN COL1 IS NOT NULL THEN COL2 ELSE 0 END)  | SUM(CASE WHEN COL1 
> IS NOT NULL THEN COL2 ELSE 0 END)  |
> +---+---+
> | 2 | 2   
>   |
> +---+---+
> 1 row selected (0.03 seconds)
> {code}
> The correct result is (2, 1), but (2, 2) is returned.





[jira] [Created] (PHOENIX-5411) Incorrect result when using sum function with case when statement

2019-07-24 Thread Toshihiro Suzuki (JIRA)
Toshihiro Suzuki created PHOENIX-5411:
-

 Summary: Incorrect result when using sum function with case when 
statement 
 Key: PHOENIX-5411
 URL: https://issues.apache.org/jira/browse/PHOENIX-5411
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


In the following case, an incorrect result is returned:
{code}
0: jdbc:phoenix:> create table tbl (id varchar primary key, col1 varchar, col2 
integer);
No rows affected (0.86 seconds)
0: jdbc:phoenix:> upsert into tbl values('id1', 'aaa', 2);
1 row affected (0.078 seconds)
0: jdbc:phoenix:> upsert into tbl values('id2', null, 1);
1 row affected (0.008 seconds)
0: jdbc:phoenix:> select sum(case when col1 is not null then col2 else 0 end), 
sum(case when col1 is null then col2 else 0 end) from tbl;
+---+---+
| SUM(CASE WHEN COL1 IS NOT NULL THEN COL2 ELSE 0 END)  | SUM(CASE WHEN COL1 IS 
NOT NULL THEN COL2 ELSE 0 END)  |
+---+---+
| 2 | 2 
|
+---+---+
1 row selected (0.03 seconds)
{code}

The correct result is 2 and 1.







[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-07-13 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.4.14-HBase-1.4.v1.patch

> Cannot add non-PK column to table when the last PK column is of type 
> VARBINARY or ARRAY
> ---
>
> Key: PHOENIX-5209
> URL: https://issues.apache.org/jira/browse/PHOENIX-5209
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5209.4.14-HBase-1.3.v1.patch, 
> PHOENIX-5209.4.14-HBase-1.4.v1.patch, PHOENIX-5209.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5209.4.x-HBase-1.3.v2.patch, PHOENIX-5209.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5209.4.x-HBase-1.4.v2.patch, PHOENIX-5209.4.x-HBase-1.5.v1.patch, 
> PHOENIX-5209.4.x-HBase-1.5.v2.patch, PHOENIX-5209.master.v1.patch, 
> PHOENIX-5209.master.v2.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Let's say we have the following table:
> {code}
> CREATE TABLE tbl (id VARBINARY PRIMARY KEY, col1 INTEGER)
> {code}
> The type of the primary key of this table is VARBINARY.
> And when we alter this table to add a new column:
> {code}
> ALTER TABLE tbl ADD col2 INTEGER
> {code}
> we are facing the following error:
> {code}
> java.sql.SQLException: ERROR 1015 (42J04): Cannot add column to table when 
> the last PK column is of type VARBINARY or ARRAY. columnName=ID
> {code}
> I think we should be able to do it without the above error because we don't 
> try to add a PK column.





[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-07-13 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.4.14-HBase-1.3.v1.patch

> Cannot add non-PK column to table when the last PK column is of type 
> VARBINARY or ARRAY
> ---
>
> Key: PHOENIX-5209
> URL: https://issues.apache.org/jira/browse/PHOENIX-5209
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5209.4.14-HBase-1.3.v1.patch, 
> PHOENIX-5209.4.x-HBase-1.3.v1.patch, PHOENIX-5209.4.x-HBase-1.3.v2.patch, 
> PHOENIX-5209.4.x-HBase-1.4.v1.patch, PHOENIX-5209.4.x-HBase-1.4.v2.patch, 
> PHOENIX-5209.4.x-HBase-1.5.v1.patch, PHOENIX-5209.4.x-HBase-1.5.v2.patch, 
> PHOENIX-5209.master.v1.patch, PHOENIX-5209.master.v2.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Let's say we have the following table:
> {code}
> CREATE TABLE tbl (id VARBINARY PRIMARY KEY, col1 INTEGER)
> {code}
> The type of the primary key of this table is VARBINARY.
> And when we alter this table to add a new column:
> {code}
> ALTER TABLE tbl ADD col2 INTEGER
> {code}
> we are facing the following error:
> {code}
> java.sql.SQLException: ERROR 1015 (42J04): Cannot add column to table when 
> the last PK column is of type VARBINARY or ARRAY. columnName=ID
> {code}
> I think we should be able to do it without the above error because we don't 
> try to add a PK column.





[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-07-13 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: (was: PHOENIX-5209.4.14-HBase-1.3.v1.patch)

> Cannot add non-PK column to table when the last PK column is of type 
> VARBINARY or ARRAY
> ---
>
> Key: PHOENIX-5209
> URL: https://issues.apache.org/jira/browse/PHOENIX-5209
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-5209.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5209.4.x-HBase-1.3.v2.patch, PHOENIX-5209.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5209.4.x-HBase-1.4.v2.patch, PHOENIX-5209.4.x-HBase-1.5.v1.patch, 
> PHOENIX-5209.4.x-HBase-1.5.v2.patch, PHOENIX-5209.master.v1.patch, 
> PHOENIX-5209.master.v2.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Let's say we have the following table:
> {code}
> CREATE TABLE tbl (id VARBINARY PRIMARY KEY, col1 INTEGER)
> {code}
> The type of the primary key of this table is VARBINARY.
> And when we alter this table to add a new column:
> {code}
> ALTER TABLE tbl ADD col2 INTEGER
> {code}
> we are facing the following error:
> {code}
> java.sql.SQLException: ERROR 1015 (42J04): Cannot add column to table when 
> the last PK column is of type VARBINARY or ARRAY. columnName=ID
> {code}
> I think we should be able to do it without the above error because we don't 
> try to add a PK column.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
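
The fix the issue argues for can be sketched as a small validation rule (a minimal sketch of the intended behavior, not Phoenix's actual `MetaDataClient` code; `check_add_column` and its parameters are hypothetical): the "last PK column is VARBINARY or ARRAY" restriction should only fire when the new column being added is itself a PK column.

```python
# Minimal sketch (hypothetical helper, not Phoenix code) of the validation
# PHOENIX-5209 argues for: reject appending a PK column after a
# variable-width (VARBINARY/ARRAY) PK, but allow non-PK columns regardless.

VARIABLE_WIDTH_TYPES = {"VARBINARY", "ARRAY"}

def check_add_column(last_pk_type, new_column_is_pk):
    """Return None if the ADD is allowed, else an error message."""
    if new_column_is_pk and last_pk_type in VARIABLE_WIDTH_TYPES:
        return ("ERROR 1015 (42J04): Cannot add column to table when "
                "the last PK column is of type VARBINARY or ARRAY.")
    return None  # non-PK columns are always fine

# ALTER TABLE tbl ADD col2 INTEGER on a VARBINARY-PK table should succeed:
assert check_add_column("VARBINARY", new_column_is_pk=False) is None
# ...while appending another PK column must still be rejected:
assert check_add_column("VARBINARY", new_column_is_pk=True) is not None
```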


[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-07-13 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.4.14-HBase-1.3.v1.patch



[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-06-14 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.4.x-HBase-1.3.v2.patch



[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-06-14 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.4.x-HBase-1.4.v2.patch



[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-06-13 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: (was: PHOENIX-5209.4.x-HBase-1.5.v2.patch)



[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-06-13 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.4.x-HBase-1.5.v2.patch



[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-06-12 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: (was: PHOENIX-5209.4.x-HBase-1.5.v2.patch)



[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-06-12 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.4.x-HBase-1.5.v2.patch



[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-06-11 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.4.x-HBase-1.5.v2.patch



[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-06-11 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.4.x-HBase-1.4.v1.patch



[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-06-11 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.4.x-HBase-1.3.v1.patch



[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-06-10 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.master.v2.patch



[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-06-10 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Attachment: PHOENIX-5209.master.v1.patch



[jira] [Created] (PHOENIX-5210) NullPointerException when alter options of a table that is appendOnlySchema

2019-03-24 Thread Toshihiro Suzuki (JIRA)
Toshihiro Suzuki created PHOENIX-5210:
-

 Summary: NullPointerException when alter options of a table that 
is appendOnlySchema
 Key: PHOENIX-5210
 URL: https://issues.apache.org/jira/browse/PHOENIX-5210
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


I'm facing the following NullPointerException when altering options of a table 
that is appendOnlySchema.

{code}
java.lang.NullPointerException
at 
org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3545)
at 
org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3517)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1440)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1833)
at 
org.apache.phoenix.end2end.AppendOnlySchemaIT.testAlterTableOptions(AppendOnlySchemaIT.java:368)
{code}

Steps to reproduce are as follows:
1. Create a table that is appendOnlySchema:
{code}
CREATE TABLE tbl (id INTEGER PRIMARY KEY, col INTEGER) APPEND_ONLY_SCHEMA = 
true, UPDATE_CACHE_FREQUENCY = 1;
{code}

2. Alter an option of the table:
{code}
ALTER TABLE tbl SET STORE_NULLS = true;
{code}

After step 2, we will face the NullPointerException.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-03-22 Thread Toshihiro Suzuki (JIRA)
Toshihiro Suzuki created PHOENIX-5209:
-

 Summary: Cannot add non-PK column to table when the last PK column 
is of type VARBINARY or ARRAY
 Key: PHOENIX-5209
 URL: https://issues.apache.org/jira/browse/PHOENIX-5209
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


Let's say we have the following table:
{code}
CREATE TABLE tbl (id VARBINARY PRIMARY KEY, col1 INTEGER)
{code}

The type of the primary key of this table is VARBINARY.

And when we alter this table to add a new column:
{code}
ALTER TABLE tbl ADD col2 INTEGER
{code}
we are facing the following error:
{code}
java.sql.SQLException: ERROR 1015 (42J04): Cannot add column to table when the 
last PK column is of type VARBINARY or ARRAY. columnName=ID
{code}

I think we should be able to do it without the above error because we don't try 
to add a PK column.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5209) Cannot add non-PK column to table when the last PK column is of type VARBINARY or ARRAY

2019-03-22 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-5209:
--
Description: 
Let's see we have the following table:
{code}
CREATE TABLE tbl (id VARBINARY PRIMARY KEY, col1 INTEGER)
{code}

The type of the primary key of this table is VARBINARY.

And when we alter this table to add a new column:
{code}
ALTER TABLE tbl ADD col2 INTEGER
{code}
we are facing the following error:
{code}
java.sql.SQLException: ERROR 1015 (42J04): Cannot add column to table when the 
last PK column is of type VARBINARY or ARRAY. columnName=ID
{code}

I think we should be able to do it without the above error because we don't try 
to add a PK column.


  was:
Let's see we have the following table:
{code}
CREATE TABLE tbl (id VARBINARY PRIMARY KEY, col1 INTEGER)
{code}

The type of the primary kay of this table is VARBINARY.

And when we alter this table to add a new column:
{code}
ALTER TABLE tbl ADD col2 INTEGER
{code}
we are facing the following error:
{code}
java.sql.SQLException: ERROR 1015 (42J04): Cannot add column to table when the 
last PK column is of type VARBINARY or ARRAY. columnName=ID
{code}

I think we should be able to do it without the above error because we don't try 
to add a PK column.





[jira] [Created] (PHOENIX-5208) DROP COLUMN IF EXISTS doesn't work as expected

2019-03-22 Thread Toshihiro Suzuki (JIRA)
Toshihiro Suzuki created PHOENIX-5208:
-

 Summary: DROP COLUMN IF EXISTS doesn't work as expected
 Key: PHOENIX-5208
 URL: https://issues.apache.org/jira/browse/PHOENIX-5208
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


Similar to PHOENIX-1614, DROP COLUMN IF EXISTS doesn't work as expected.

Executing "DROP COLUMN IF EXISTS colAlreadyExists, colDoesNotExist" changes 
nothing in the table because colDoesNotExist doesn't exist.

The general expectation would be that all non-existing columns in the statement 
are simply ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
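
The expected lenient semantics can be sketched as follows (a hypothetical helper illustrating the desired behavior, not Phoenix code): every column in the DROP list that exists is dropped, and the ones that don't exist are silently skipped instead of aborting the whole statement.

```python
# Sketch of the DROP COLUMN IF EXISTS semantics PHOENIX-5208 asks for
# (hypothetical helper, not Phoenix code).

def drop_columns_if_exists(table_columns, columns_to_drop):
    """Return the remaining columns after a lenient DROP COLUMN IF EXISTS."""
    to_drop = set(columns_to_drop)
    # Non-existent names in to_drop simply have no effect.
    return [c for c in table_columns if c not in to_drop]

cols = ["ID", "COL_ALREADY_EXISTS", "OTHER"]
# "DROP COLUMN IF EXISTS colAlreadyExists, colDoesNotExist" should leave the
# table without COL_ALREADY_EXISTS instead of changing nothing:
assert drop_columns_if_exists(
    cols, ["COL_ALREADY_EXISTS", "COL_DOES_NOT_EXIST"]) == ["ID", "OTHER"]
```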


[jira] [Updated] (PHOENIX-1614) ALTER TABLE ADD IF NOT EXISTS doesn't work as expected

2019-03-21 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-1614:
--
Attachment: PHOENIX-1614-v5.patch

> ALTER TABLE ADD IF NOT EXISTS doesn't work as expected
> --
>
> Key: PHOENIX-1614
> URL: https://issues.apache.org/jira/browse/PHOENIX-1614
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gabriel Reid
>Assignee: Toshihiro Suzuki
>Priority: Major
>  Labels: argus
> Fix For: 4.15.0
>
> Attachments: PHOENIX-1614-v2.patch, PHOENIX-1614-v3.patch, 
> PHOENIX-1614-v4.patch, PHOENIX-1614-v5.patch, PHOENIX-1614.patch
>
>
> On an existing table, executing "ALTER TABLE ADD IF NOT EXISTS
> thisColAlreadyExists varchar, thisColDoesNotExist varchar", then
> nothing will be changed in the table because thisColAlreadyExists
> already exists.
> Omitting the already-existing column from the statement, all new columns
> do get created.
> The general expectation would be that when you use ADD IF NOT EXISTS, all
> non-existent columns will be added, and all existing columns in the
> statement will just be ignored. There is already an integration test
> (AlterTableIT#testAddVarCols) that actually demonstrates the current
> behavior, although this is probably not correct.
> As pointed out in the related mailing list thread [1], ALTER TABLE DROP 
> COLUMN likely suffers from the same issue.
> 1. http://s.apache.org/LMT 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
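
The expectation stated above can be sketched the same way (a hypothetical helper illustrating the desired behavior, not Phoenix code): ADD IF NOT EXISTS adds every column not yet on the table and ignores the ones that already exist, rather than making the whole statement a no-op.

```python
# Sketch of the lenient ADD IF NOT EXISTS semantics described in
# PHOENIX-1614 (hypothetical helper, not Phoenix code).

def add_columns_if_not_exists(table_columns, new_columns):
    """Return the column list after a lenient ADD IF NOT EXISTS."""
    result = list(table_columns)
    for col in new_columns:
        if col not in result:  # existing columns are skipped, not fatal
            result.append(col)
    return result

cols = ["ID", "THIS_COL_ALREADY_EXISTS"]
out = add_columns_if_not_exists(
    cols, ["THIS_COL_ALREADY_EXISTS", "THIS_COL_DOES_NOT_EXIST"])
# The new column is added even though another column in the statement existed:
assert out == ["ID", "THIS_COL_ALREADY_EXISTS", "THIS_COL_DOES_NOT_EXIST"]
```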


[jira] [Updated] (PHOENIX-1614) ALTER TABLE ADD IF NOT EXISTS doesn't work as expected

2019-03-21 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-1614:
--
Attachment: PHOENIX-1614-v4.patch



[jira] [Updated] (PHOENIX-1614) ALTER TABLE ADD IF NOT EXISTS doesn't work as expected

2019-02-27 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-1614:
--
Attachment: PHOENIX-1614-v3.patch



[jira] [Updated] (PHOENIX-1614) ALTER TABLE ADD IF NOT EXISTS doesn't work as expected

2019-02-27 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-1614:
--
Attachment: PHOENIX-1614-v2.patch



[jira] [Updated] (PHOENIX-1614) ALTER TABLE ADD IF NOT EXISTS doesn't work as expected

2019-02-26 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-1614:
--
Attachment: PHOENIX-1614.patch

> ALTER TABLE ADD IF NOT EXISTS doesn't work as expected
> --
>
> Key: PHOENIX-1614
> URL: https://issues.apache.org/jira/browse/PHOENIX-1614
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gabriel Reid
>Assignee: Toshihiro Suzuki
>Priority: Major
>  Labels: argus
> Fix For: 4.15.0
>
> Attachments: PHOENIX-1614.patch
>
>
> On an existing table, executing "ALTER TABLE ADD IF NOT EXISTS
> thisColAlreadyExists varchar, thisColDoesNotExist varchar" changes
> nothing in the table, because thisColAlreadyExists already exists.
> If the already-existing column is omitted from the statement, all new
> columns do get created.
> The general expectation would be that when you use ADD IF NOT EXISTS, all
> non-existent columns will be added, and all existing columns in the
> statement will just be ignored. There is already an integration test
> (AlterTableIT#testAddVarCols) that actually demonstrates the current
> behavior, although this is probably not correct.
> As pointed out in the related mailing list thread [1], ALTER TABLE DROP 
> COLUMN likely suffers from the same issue.
> 1. http://s.apache.org/LMT 





[jira] [Assigned] (PHOENIX-1614) ALTER TABLE ADD IF NOT EXISTS doesn't work as expected

2018-12-14 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki reassigned PHOENIX-1614:
-

Assignee: Toshihiro Suzuki

> ALTER TABLE ADD IF NOT EXISTS doesn't work as expected
> --
>
> Key: PHOENIX-1614
> URL: https://issues.apache.org/jira/browse/PHOENIX-1614
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gabriel Reid
>Assignee: Toshihiro Suzuki
>Priority: Major
>  Labels: argus
> Fix For: 4.15.0
>
>
> On an existing table, executing "ALTER TABLE ADD IF NOT EXISTS
> thisColAlreadyExists varchar, thisColDoesNotExist varchar" changes
> nothing in the table, because thisColAlreadyExists already exists.
> If the already-existing column is omitted from the statement, all new
> columns do get created.
> The general expectation would be that when you use ADD IF NOT EXISTS, all
> non-existent columns will be added, and all existing columns in the
> statement will just be ignored. There is already an integration test
> (AlterTableIT#testAddVarCols) that actually demonstrates the current
> behavior, although this is probably not correct.
> As pointed out in the related mailing list thread [1], ALTER TABLE DROP 
> COLUMN likely suffers from the same issue.
> 1. http://s.apache.org/LMT 





[jira] [Commented] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-05-01 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16460484#comment-16460484
 ] 

Toshihiro Suzuki commented on PHOENIX-4712:
---

Thank you [~jamestaylor] [~tdsilva]. +1 to the latest patch.

> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4712-v2.patch, PHOENIX-4712.patch, 
> PHOENIX-4712.patch, PHOENIX-4712_v3.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create table tbl (col1 varchar primary key, col2 varchar);
> {code}
> 2. Create a view on the table
> {code}
> create view vw (col3 varchar) as select * from tbl;
> {code}
> 3. Create an index on the table
> {code}
> create index idx ON tbl (col2);
> {code}
> After these steps, an explain query like the following shows that the 
> query doesn't use the index, although the index should be used: 
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +---+
> | PLAN  |
> +---+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
> | SERVER FILTER BY COL2 = 'aaa' |
> +---+
> {code}
> However, after restarting sqlline, the explain output is changed, and the 
> index is used.
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> ++
> |  PLAN   
>|
> ++
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL
>|
> | SKIP-SCAN-JOIN TABLE 0  
>|
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX 
> ['aaa']  |
> | SERVER FILTER BY FIRST KEY ONLY 
>|
> | DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)   
>|
> ++
> {code}
> I think when creating an index on a table, meta data cache of views related 
> to the table isn't updated, so the index isn't used for that query. However 
> after restarting sqlline, the meta data cache is refreshed, so the index is 
> used.
> When creating an index on a table, we should update meta data cache of views 
> related to the table.
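The proposed fix can be sketched as a small cache-invalidation model (illustrative Python only; `MetaDataCache`, `views_of`, and the parent/view bookkeeping are hypothetical names, not the actual Phoenix client cache): when an index is created on a table, the cached entries for that table's views are dropped as well, so the next resolution of a view sees the new index.

```python
# Sketch: a client-side metadata cache where creating an index on a table
# also invalidates cached views of that table, so they pick up the index.

class MetaDataCache:
    def __init__(self):
        # name -> {"indexes": [...], "parent": parent table name or None}
        self.tables = {}

    def put(self, name, parent=None):
        self.tables[name] = {"indexes": [], "parent": parent}

    def views_of(self, table):
        return [n for n, t in self.tables.items() if t["parent"] == table]

    def create_index(self, table, index):
        self.tables[table]["indexes"].append(index)
        # The fix: drop cached views so they are re-resolved with the index.
        for view in self.views_of(table):
            del self.tables[view]

cache = MetaDataCache()
cache.put("TBL")
cache.put("VW", parent="TBL")
cache.create_index("TBL", "IDX")
print("VW" in cache.tables)  # -> False: the stale view entry was invalidated
```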





[jira] [Commented] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-05-01 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459594#comment-16459594
 ] 

Toshihiro Suzuki commented on PHOENIX-4712:
---

Thank you for your comment [~tdsilva]. I just attached the v2 patch. In this 
patch, I changed it to remove the views of the table from the connection cache, 
as you mentioned in your last comment. Could you please take a look at the 
patch? Thanks.


> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4712-v2.patch, PHOENIX-4712.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create table tbl (col1 varchar primary key, col2 varchar);
> {code}
> 2. Create a view on the table
> {code}
> create view vw (col3 varchar) as select * from tbl;
> {code}
> 3. Create an index on the table
> {code}
> create index idx ON tbl (col2);
> {code}
> After these steps, an explain query like the following shows that the 
> query doesn't use the index, although the index should be used: 
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +---+
> | PLAN  |
> +---+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
> | SERVER FILTER BY COL2 = 'aaa' |
> +---+
> {code}
> However, after restarting sqlline, the explain output is changed, and the 
> index is used.
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> ++
> |  PLAN   
>|
> ++
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL
>|
> | SKIP-SCAN-JOIN TABLE 0  
>|
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX 
> ['aaa']  |
> | SERVER FILTER BY FIRST KEY ONLY 
>|
> | DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)   
>|
> ++
> {code}
> I think when creating an index on a table, meta data cache of views related 
> to the table isn't updated, so the index isn't used for that query. However 
> after restarting sqlline, the meta data cache is refreshed, so the index is 
> used.
> When creating an index on a table, we should update meta data cache of views 
> related to the table.





[jira] [Updated] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-05-01 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4712:
--
Attachment: PHOENIX-4712-v2.patch

> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4712-v2.patch, PHOENIX-4712.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create table tbl (col1 varchar primary key, col2 varchar);
> {code}
> 2. Create a view on the table
> {code}
> create view vw (col3 varchar) as select * from tbl;
> {code}
> 3. Create an index on the table
> {code}
> create index idx ON tbl (col2);
> {code}
> After these steps, an explain query like the following shows that the 
> query doesn't use the index, although the index should be used: 
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +---+
> | PLAN  |
> +---+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
> | SERVER FILTER BY COL2 = 'aaa' |
> +---+
> {code}
> However, after restarting sqlline, the explain output is changed, and the 
> index is used.
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> ++
> |  PLAN   
>|
> ++
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL
>|
> | SKIP-SCAN-JOIN TABLE 0  
>|
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX 
> ['aaa']  |
> | SERVER FILTER BY FIRST KEY ONLY 
>|
> | DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)   
>|
> ++
> {code}
> I think when creating an index on a table, meta data cache of views related 
> to the table isn't updated, so the index isn't used for that query. However 
> after restarting sqlline, the meta data cache is refreshed, so the index is 
> used.
> When creating an index on a table, we should update meta data cache of views 
> related to the table.





[jira] [Commented] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456674#comment-16456674
 ] 

Toshihiro Suzuki commented on PHOENIX-4712:
---

[~jamestaylor] No, I didn't configure "phoenix.default.update.cache.frequency". 
I will take a look at the addIndexesFromParent() method. Thanks.



> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4712.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create table tbl (col1 varchar primary key, col2 varchar);
> {code}
> 2. Create a view on the table
> {code}
> create view vw (col3 varchar) as select * from tbl;
> {code}
> 3. Create an index on the table
> {code}
> create index idx ON tbl (col2);
> {code}
> After these steps, an explain query like the following shows that the 
> query doesn't use the index, although the index should be used: 
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +---+
> | PLAN  |
> +---+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
> | SERVER FILTER BY COL2 = 'aaa' |
> +---+
> {code}
> However, after restarting sqlline, the explain output is changed, and the 
> index is used.
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> ++
> |  PLAN   
>|
> ++
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL
>|
> | SKIP-SCAN-JOIN TABLE 0  
>|
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX 
> ['aaa']  |
> | SERVER FILTER BY FIRST KEY ONLY 
>|
> | DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)   
>|
> ++
> {code}
> I think when creating an index on a table, meta data cache of views related 
> to the table isn't updated, so the index isn't used for that query. However 
> after restarting sqlline, the meta data cache is refreshed, so the index is 
> used.
> When creating an index on a table, we should update meta data cache of views 
> related to the table.





[jira] [Commented] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456623#comment-16456623
 ] 

Toshihiro Suzuki commented on PHOENIX-4712:
---

[~jamestaylor] No, I didn't set UPDATE_CACHE_FREQUENCY on any tables or views. 
The DDLs are in the Description.

> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4712.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create table tbl (col1 varchar primary key, col2 varchar);
> {code}
> 2. Create a view on the table
> {code}
> create view vw (col3 varchar) as select * from tbl;
> {code}
> 3. Create an index on the table
> {code}
> create index idx ON tbl (col2);
> {code}
> After these steps, an explain query like the following shows that the 
> query doesn't use the index, although the index should be used: 
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +---+
> | PLAN  |
> +---+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
> | SERVER FILTER BY COL2 = 'aaa' |
> +---+
> {code}
> However, after restarting sqlline, the explain output is changed, and the 
> index is used.
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> ++
> |  PLAN   
>|
> ++
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL
>|
> | SKIP-SCAN-JOIN TABLE 0  
>|
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX 
> ['aaa']  |
> | SERVER FILTER BY FIRST KEY ONLY 
>|
> | DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)   
>|
> ++
> {code}
> I think when creating an index on a table, meta data cache of views related 
> to the table isn't updated, so the index isn't used for that query. However 
> after restarting sqlline, the meta data cache is refreshed, so the index is 
> used.
> When creating an index on a table, we should update meta data cache of views 
> related to the table.





[jira] [Commented] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16456582#comment-16456582
 ] 

Toshihiro Suzuki commented on PHOENIX-4712:
---

I just attached a v1 patch. 

> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4712.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create table tbl (col1 varchar primary key, col2 varchar);
> {code}
> 2. Create a view on the table
> {code}
> create view vw (col3 varchar) as select * from tbl;
> {code}
> 3. Create an index on the table
> {code}
> create index idx ON tbl (col2);
> {code}
> After these steps, an explain query like the following shows that the 
> query doesn't use the index, although the index should be used: 
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +---+
> | PLAN  |
> +---+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
> | SERVER FILTER BY COL2 = 'aaa' |
> +---+
> {code}
> However, after restarting sqlline, the explain output is changed, and the 
> index is used.
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> ++
> |  PLAN   
>|
> ++
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL
>|
> | SKIP-SCAN-JOIN TABLE 0  
>|
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX 
> ['aaa']  |
> | SERVER FILTER BY FIRST KEY ONLY 
>|
> | DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)   
>|
> ++
> {code}
> I think when creating an index on a table, meta data cache of views related 
> to the table isn't updated, so the index isn't used for that query. However 
> after restarting sqlline, the meta data cache is refreshed, so the index is 
> used.
> When creating an index on a table, we should update meta data cache of views 
> related to the table.





[jira] [Updated] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4712:
--
Attachment: PHOENIX-4712.patch

> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4712.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create table tbl (col1 varchar primary key, col2 varchar);
> {code}
> 2. Create a view on the table
> {code}
> create view vw (col3 varchar) as select * from tbl;
> {code}
> 3. Create an index on the table
> {code}
> create index idx ON tbl (col2);
> {code}
> After these steps, an explain query like the following shows that the 
> query doesn't use the index, although the index should be used: 
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +---+
> | PLAN  |
> +---+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
> | SERVER FILTER BY COL2 = 'aaa' |
> +---+
> {code}
> However, after restarting sqlline, the explain output is changed, and the 
> index is used.
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> ++
> |  PLAN   
>|
> ++
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL
>|
> | SKIP-SCAN-JOIN TABLE 0  
>|
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX 
> ['aaa']  |
> | SERVER FILTER BY FIRST KEY ONLY 
>|
> | DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)   
>|
> ++
> {code}
> I think when creating an index on a table, meta data cache of views related 
> to the table isn't updated, so the index isn't used for that query. However 
> after restarting sqlline, the meta data cache is refreshed, so the index is 
> used.
> When creating an index on a table, we should update meta data cache of views 
> related to the table.





[jira] [Updated] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4712:
--
Description: 
Steps to reproduce are as follows:
1. Create a table
{code}
create table tbl (col1 varchar primary key, col2 varchar);
{code}

2. Create a view on the table
{code}
create view vw (col3 varchar) as select * from tbl;
{code}

3. Create an index on the table
{code}
create index idx ON tbl (col2);
{code}

After these steps, an explain query like the following shows that the 
query doesn't use the index, although the index should be used: 
{code}
0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
'aaa';
+---+
| PLAN  |
+---+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
| SERVER FILTER BY COL2 = 'aaa' |
+---+
{code}

However, after restarting sqlline, the explain output is changed, and the index 
is used.
{code}
0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
'aaa';
++
|  PLAN 
 |
++
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  
 |
| SKIP-SCAN-JOIN TABLE 0
 |
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX ['aaa'] 
 |
| SERVER FILTER BY FIRST KEY ONLY   
 |
| DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5) 
 |
++
{code}

I think when creating an index on a table, meta data cache of views related to 
the table isn't updated, so the index isn't used for that query. However after 
restarting sqlline, the meta data cache is refreshed, so the index is used.

When creating an index on a table, we should update meta data cache of views 
related to the table.

  was:
Steps to reproduce are as follows:
1. Create a table
{code}
create table tbl (aaa varchar primary key, bbb varchar);
{code}

2. Create a view on the table
{code}
create view vw (ccc varchar) as select * from tbl;
{code}

3. Create an index on the table
{code}
create index idx ON tbl (bbb);
{code}

After these steps, an explain query like the following shows that the 
query doesn't use the index, although the index should be used: 
{code}
0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where bbb = 
'aaa';
+---+
| PLAN  |
+---+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
| SERVER FILTER BY BBB = 'aaa'  |
+---+
{code}

However, after restarting sqlline, the explain output is changed, and the index 
is used.
{code}
0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where bbb = 
'aaa';
++
|  PLAN 
 |
++
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  
 |
| SKIP-SCAN-JOIN TABLE 0
 |
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX ['aaa'] 
 |
| SERVER FILTER BY FIRST KEY ONLY   
 |
| DYNAMIC SERVER FILTER BY "VW.AAA" IN ($3.$5)  
 |
++
{code}

I think when creating an index on a table, meta data cache of views related to 
the table isn't updated, so the index isn't used for that query. However after 
restarting sqlline, the meta data cache is refreshed, so the index is used.

When creating an index on a table, we should update meta data cache of views 
related to the table.


> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
>   

[jira] [Created] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)
Toshihiro Suzuki created PHOENIX-4712:
-

 Summary: When creating an index on a table, meta data cache of 
views related to the table isn't updated
 Key: PHOENIX-4712
 URL: https://issues.apache.org/jira/browse/PHOENIX-4712
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki


Steps to reproduce are as follows:
1. Create a table
{code}
create table tbl (aaa varchar primary key, bbb varchar);
{code}

2. Create a view on the table
{code}
create view vw (ccc varchar) as select * from tbl;
{code}

3. Create an index on the table
{code}
create index idx ON tbl (bbb);
{code}

After these steps, an explain query like the following shows that the 
query doesn't use the index, although the index should be used: 
{code}
0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where bbb = 
'aaa';
+---+
| PLAN  |
+---+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
| SERVER FILTER BY BBB = 'aaa'  |
+---+
{code}

However, after restarting sqlline, the explain output is changed, and the index 
is used.
{code}
0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where bbb = 
'aaa';
++
|  PLAN 
 |
++
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  
 |
| SKIP-SCAN-JOIN TABLE 0
 |
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX ['aaa'] 
 |
| SERVER FILTER BY FIRST KEY ONLY   
 |
| DYNAMIC SERVER FILTER BY "VW.AAA" IN ($3.$5)  
 |
++
{code}

I think when creating an index on a table, meta data cache of views related to 
the table isn't updated, so the index isn't used for that query. However after 
restarting sqlline, the meta data cache is refreshed, so the index is used.

When creating an index on a table, we should update meta data cache of views 
related to the table.





[jira] [Commented] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-04-10 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431779#comment-16431779
 ] 

Toshihiro Suzuki commented on PHOENIX-4658:
---

Thank you very much for reviewing [~jamestaylor].

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658-v2.patch, PHOENIX-4658.patch, 
> PHOENIX-4658.patch, PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query the table with a DESC ordering
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> By following the above steps, we face the following exception.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}
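The failure mode in the stack trace above can be modeled in a few lines (illustrative Python, not HBase code; the class and function names merely mirror the ones in the trace): the forward key-value heap supports a seek, the reversed heap rejects it, so any code path that seeks unconditionally, as joinedHeapMayHaveData does here, must first check the scan direction and fall back to plain iteration for reversed scans.

```python
# Sketch of the failure mode: the reversed heap rejects requestSeek, so a
# caller must check the scan direction before attempting to seek.

class KeyValueHeap:
    reversed = False
    def request_seek(self, key):
        return f"seeked to {key}"

class ReversedKeyValueHeap(KeyValueHeap):
    reversed = True
    def request_seek(self, key):
        # Mirrors the IllegalStateException in the stack trace above.
        raise RuntimeError(
            "requestSeek cannot be called on ReversedKeyValueHeap")

def joined_heap_may_have_data(heap, key):
    # Guarded shape: only forward heaps may seek; reversed scans fall back
    # to plain next()-style iteration instead of blowing up.
    if heap.reversed:
        return None
    return heap.request_seek(key)

print(joined_heap_may_have_data(ReversedKeyValueHeap(), "AAA"))  # -> None
```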





[jira] [Updated] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-04-10 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4658:
--
Description: 
Steps to reproduce are as follows:

1. Create a table with multiple column families
{code}
CREATE TABLE TBL (
  COL1 VARCHAR NOT NULL,
  COL2 VARCHAR NOT NULL,
  COL3 VARCHAR,
  FAM.COL4 VARCHAR,
  CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
)
{code}

2. Upsert a row
{code}
UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
{code}

3. Query the table with a DESC ordering
{code}
SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
{code}

By following the above steps, we face the following exception.
{code}
java.util.concurrent.ExecutionException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
called on ReversedKeyValueHeap
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
at 
org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
at 
org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
at 
org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
ReversedKeyValueHeap
at 
org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
... 10 more
{code}


  was:
Steps to reproduce are as follows:

1. Create a table with multiple column families (default column family and 
"FAM")
{code}
CREATE TABLE TBL (
  COL1 VARCHAR NOT NULL,
  COL2 VARCHAR NOT NULL,
  COL3 VARCHAR,
  FAM.COL4 VARCHAR,
  CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
)
{code}

2. Upsert a row
{code}
UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
{code}

3. Query the table with a descending sort order
{code}
SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
{code}
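As a cross-check of the expected result (this sketch is not part of the original report), the three steps above can be run end-to-end in Python against SQLite. SQLite is only a stand-in for Phoenix here: it has no column families, so the {{FAM.}} prefix is dropped, and {{UPSERT}} becomes a plain {{INSERT}}. The point is the expected semantics: the DESC query should return the single upserted row rather than throw.

```python
# Minimal sketch of the reproduction steps, with SQLite standing in for
# Phoenix (no column families; UPSERT simplified to INSERT).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Step 1: create the table (the FAM. column-family prefix is dropped)
cur.execute("""
    CREATE TABLE TBL (
      COL1 TEXT NOT NULL,
      COL2 TEXT NOT NULL,
      COL3 TEXT,
      COL4 TEXT,
      PRIMARY KEY (COL1, COL2)
    )
""")

# Step 2: upsert a row (plain INSERT in SQLite)
cur.execute("INSERT INTO TBL (COL1, COL2) VALUES ('AAA', 'BBB')")

# Step 3: the reversed query; Phoenix raised IllegalStateException here,
# but the correct result is the single row inserted above.
rows = cur.execute(
    "SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC"
).fetchall()
print(rows)  # [('AAA', 'BBB', None, None)]
```

Running this confirms what the bug report implies: a correct engine returns the row, so the {{ReversedKeyValueHeap}} failure is purely a server-side scan-path problem, not a query-semantics one.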

Following the above steps results in the exception below.
{code}
java.util.concurrent.ExecutionException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: 
TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
called on ReversedKeyValueHeap
at 
org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
at 
org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
at 
org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
at 
org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
at 

[jira] [Commented] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-04-10 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431773#comment-16431773
 ] 

Toshihiro Suzuki commented on PHOENIX-4669:
---

Thank you very much for reviewing and committing the patch. [~sergey.soldatov]

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669-v3.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with a specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> 

[jira] [Commented] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-04-09 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431664#comment-16431664
 ] 

Toshihiro Suzuki commented on PHOENIX-4658:
---

Ping [~jamestaylor] [~an...@apache.org]. Could you please review the patch?

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658-v2.patch, PHOENIX-4658.patch, 
> PHOENIX-4658.patch, PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query with DESC for the table
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> By following the above steps, we face the following exception.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-04-09 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431662#comment-16431662
 ] 

Toshihiro Suzuki commented on PHOENIX-4669:
---

Ping [~sergey.soldatov] [~an...@apache.org]. Could you please review the patch?

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669-v3.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with a specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 

[jira] [Comment Edited] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-04-06 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428061#comment-16428061
 ] 

Toshihiro Suzuki edited comment on PHOENIX-4669 at 4/6/18 7:31 AM:
---

Could you please review the patch when you have time? [~sergey.soldatov]


was (Author: brfrn169):
Could you review the patch when you have time? [~sergey.soldatov]

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669-v3.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with a specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at 

[jira] [Commented] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-04-06 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428061#comment-16428061
 ] 

Toshihiro Suzuki commented on PHOENIX-4669:
---

Could you review the patch when you have time? [~sergey.soldatov]

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669-v3.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with a specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> 

[jira] [Comment Edited] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-04-06 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428063#comment-16428063
 ] 

Toshihiro Suzuki edited comment on PHOENIX-4658 at 4/6/18 7:31 AM:
---

Could you please review the patch when you have time? [~jamestaylor]


was (Author: brfrn169):
Could you review the patch when you have time? [~jamestaylor]

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658-v2.patch, PHOENIX-4658.patch, 
> PHOENIX-4658.patch, PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query with DESC for the table
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> By following the above steps, we face the following exception.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}





[jira] [Commented] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-04-06 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16428063#comment-16428063
 ] 

Toshihiro Suzuki commented on PHOENIX-4658:
---

Could you review the patch when you have time? [~jamestaylor]

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658-v2.patch, PHOENIX-4658.patch, 
> PHOENIX-4658.patch, PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query with DESC for the table
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> By following the above steps, we face the following exception.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-04-03 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424733#comment-16424733
 ] 

Toshihiro Suzuki edited comment on PHOENIX-4669 at 4/3/18 11:20 PM:


Thank you [~sergey.soldatov]. I just attached a new patch for your review. 
Thanks.


was (Author: brfrn169):
Thank you [~sergey.soldatov]. I just attach a new patch for your review. Thanks.

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669-v3.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at 

[jira] [Commented] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-04-03 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16424733#comment-16424733
 ] 

Toshihiro Suzuki commented on PHOENIX-4669:
---

Thank you [~sergey.soldatov]. I just attached a new patch for your review. Thanks.

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669-v3.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   

[jira] [Updated] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-04-03 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4669:
--
Attachment: PHOENIX-4669-v3.patch

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669-v3.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> 

[jira] [Commented] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-04-03 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423578#comment-16423578
 ] 

Toshihiro Suzuki commented on PHOENIX-4658:
---

I just attached the v2 patch. Could you please review this patch? [~jamestaylor]

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658-v2.patch, PHOENIX-4658.patch, 
> PHOENIX-4658.patch, PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query with DESC for the table
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> By following the above steps, we face the following exception.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}





[jira] [Updated] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-04-03 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4658:
--
Attachment: PHOENIX-4658-v2.patch

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658-v2.patch, PHOENIX-4658.patch, 
> PHOENIX-4658.patch, PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query with DESC for the table
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> By following the above steps, we face the following exception.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}





[jira] [Commented] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-04-02 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16423350#comment-16423350
 ] 

Toshihiro Suzuki commented on PHOENIX-4658:
---

Thank you for your comment, [~jamestaylor]. I agree with you. I will try to add a 
FORWARD_SCAN hint in this Jira.
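
For context, Phoenix query hints are written in a comment immediately after the 
SELECT keyword. A sketch of how the proposed hint might look against the 
reproduction table (hypothetical until the follow-up patch lands, so the exact 
hint name and semantics may differ):

{code}
-- Hypothetical usage of the proposed FORWARD_SCAN hint (sketch only):
-- it would ask the optimizer to keep a forward region scan and sort the
-- results on the client, instead of issuing a reversed scan, which is the
-- code path that fails with "requestSeek cannot be called on
-- ReversedKeyValueHeap" when the table has multiple column families.
SELECT /*+ FORWARD_SCAN */ * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC;
{code}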

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658.patch, PHOENIX-4658.patch, 
> PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query with DESC for the table
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> By following the above steps, we face the following exception.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}





[jira] [Commented] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-04-02 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422614#comment-16422614
 ] 

Toshihiro Suzuki commented on PHOENIX-4669:
---

[~sergey.soldatov] Could you please review the v2 patch?

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> 

[jira] [Commented] (PHOENIX-4658) IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap

2018-04-02 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16422606#comment-16422606
 ] 

Toshihiro Suzuki commented on PHOENIX-4658:
---

According to stack's comment in HBASE-20219, it seems this issue needs to be 
fixed on the Phoenix side.
https://issues.apache.org/jira/browse/HBASE-20219?focusedCommentId=16421580=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16421580

I think we can commit the patch attached to this Jira and then create a new 
Jira to introduce a FORWARD_SCAN hint. What do you think? [~jamestaylor]
If you agree, I will create the new Jira; could you please commit the patch?

Thanks.

> IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap
> ---
>
> Key: PHOENIX-4658
> URL: https://issues.apache.org/jira/browse/PHOENIX-4658
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4658.patch, PHOENIX-4658.patch, 
> PHOENIX-4658.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with multiple column families (default column family and 
> "FAM")
> {code}
> CREATE TABLE TBL (
>   COL1 VARCHAR NOT NULL,
>   COL2 VARCHAR NOT NULL,
>   COL3 VARCHAR,
>   FAM.COL4 VARCHAR,
>   CONSTRAINT TRADE_EVENT_PK PRIMARY KEY (COL1, COL2)
> )
> {code}
> 2. Upsert a row
> {code}
> UPSERT INTO TBL (COL1, COL2) values ('AAA', 'BBB')
> {code}
> 3. Query the table with a DESC ordering
> {code}
> SELECT * FROM TBL WHERE COL2 = 'BBB' ORDER BY COL1 DESC
> {code}
> By following the above steps, we face the following exception.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TBL,,1521251842845.153781990c0fb4bc34e3f2c721a6f415.: requestSeek cannot be 
> called on ReversedKeyValueHeap
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2808)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3045)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.lang.IllegalStateException: requestSeek cannot be called on 
> ReversedKeyValueHeap
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedKeyValueHeap.requestSeek(ReversedKeyValueHeap.java:65)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:6485)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6412)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6126)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6112)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 10 more
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-29 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16418598#comment-16418598
 ] 

Toshihiro Suzuki commented on PHOENIX-4669:
---

Thank you for your review [~sergey.soldatov]!
I just attached the v2 patch for the review. Thanks to the review, I realized 
that we don't need to pass the "familyPropList" to 
ensureViewIndexTableCreated(), so I reverted that in the v2 patch.
Thanks.

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at 

[jira] [Updated] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-29 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4669:
--
Attachment: (was: PHOENIX-PHOENIX-4669-v2.patch)

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)

[jira] [Updated] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-29 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4669:
--
Attachment: PHOENIX-PHOENIX-4669-v2.patch

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>   at 

[jira] [Updated] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-29 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4669:
--
Attachment: PHOENIX-4669-v2.patch

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669-v2.patch, 
> PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>   at 
> 

[jira] [Assigned] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-29 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki reassigned PHOENIX-4669:
-

Assignee: Toshihiro Suzuki

> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> : 1 time, servers with issues: 10.0.1.3,57208,1521731670016
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1107)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1273)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:663)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:659)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:659)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3513)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1362)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1674)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:398)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:381)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:368)
>   at 
> 

[jira] [Comment Edited] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-27 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16413446#comment-16413446
 ] 

Toshihiro Suzuki edited comment on PHOENIX-4669 at 3/28/18 3:12 AM:


I attached the patch for this issue.

I think the problem is that, when creating an index on views, Phoenix creates 
the column family obtained from SchemaUtil.getEmptyColumnFamily(table) (where 
table is the parent table):
https://github.com/apache/phoenix/blob/96f2c8b3b88c57018f8c8fe1ba2bb9846e865eb2/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L1694-L1698

In the above code, the local variable "families" is always empty, so Phoenix 
always creates only the family obtained from 
SchemaUtil.getEmptyColumnFamily(table).

I think this is wrong; the necessary column families are determined in 
MetaDataClient.createTableInternal() as the local variable "familyPropList":
https://github.com/apache/phoenix/blob/96f2c8b3b88c57018f8c8fe1ba2bb9846e865eb2/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L2427

So I think we can use this when creating a view index. To do this, I added a 
families argument to ensureViewIndexTableCreated() and passed the 
"familyPropList" to this method.

[~sergey.soldatov] Could you please review the patch?

Thanks
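The failure mode can be sketched with a minimal, self-contained Java toy model. This is NOT the HBase or Phoenix API: ToyRegion and its methods are illustrative names, standing in for the view-index table (_IDX_TBL) that was created with only the parent table's "empty" column family ("0"), so a write to the indexed family "CF" fails, analogous to the NoSuchColumnFamilyException above.

```java
import java.util.*;

// A toy model (NOT the HBase API) of the reported failure: the view-index
// table ends up with only the parent table's "empty" column family ("0"),
// so writing the indexed column in family "CF" fails, analogous to
// NoSuchColumnFamilyException. All names here are illustrative.
class ToyRegion {
    private final Set<String> families = new HashSet<>();

    ToyRegion(Collection<String> familyNames) {
        families.addAll(familyNames);
    }

    void put(String family, String qualifier, String value) {
        if (!families.contains(family)) {
            throw new IllegalArgumentException(
                    "Column family " + family + " does not exist in region");
        }
        // A real region would persist the cell here.
    }
}

public class Demo {
    public static void main(String[] args) {
        // Buggy path: only the family from
        // SchemaUtil.getEmptyColumnFamily(parentTable), i.e. "0".
        ToyRegion buggy = new ToyRegion(Collections.singletonList("0"));
        try {
            buggy.put("CF", "COL2", "BBB");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
        // Fixed path: create all families from familyPropList, including "CF".
        ToyRegion fixed = new ToyRegion(Arrays.asList("0", "CF"));
        fixed.put("CF", "COL2", "BBB"); // succeeds
        System.out.println("ok");
    }
}
```

Passing the already-computed family list through, rather than recomputing a single default family, is the essence of the attached patch.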



was (Author: brfrn169):
I attached the patch for this issue.

I think the problem is when creating an index on views, phoenix creates the 
family gotten by SchemaUtil.getEmptyColumnFamily(table) (table is the parent 
table):
https://github.com/apache/phoenix/blob/96f2c8b3b88c57018f8c8fe1ba2bb9846e865eb2/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L1694-L1698

In the above code, the local value families is always empty, so phoenix always 
creates the family gotten by SchemaUtil.getEmptyColumnFamily(table).

I think it is wrong, and necessary column families are decided in 
MetaDataClient.createTableInternal() as the local variable familyPropList:
https://github.com/apache/phoenix/blob/96f2c8b3b88c57018f8c8fe1ba2bb9846e865eb2/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L2427

So I think we can use this when creating a view index. To do this, I added the 
families argument to ensureViewIndexTableCreated() and pass the familyPropList 
to this method.

[~sergey.soldatov] Could you please review the patch?

Thanks


> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> 

[jira] [Comment Edited] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-27 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16413446#comment-16413446
 ] 

Toshihiro Suzuki edited comment on PHOENIX-4669 at 3/28/18 3:08 AM:


I attached the patch for this issue.

I think the problem is that, when creating an index on a view, Phoenix creates 
only the column family obtained from SchemaUtil.getEmptyColumnFamily(table) 
(where table is the parent table):
https://github.com/apache/phoenix/blob/96f2c8b3b88c57018f8c8fe1ba2bb9846e865eb2/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L1694-L1698

In the above code, the local variable families is always empty, so Phoenix 
always ends up creating only the family obtained from 
SchemaUtil.getEmptyColumnFamily(table).

I think this is wrong: the necessary column families are already determined in 
MetaDataClient.createTableInternal() as the local variable familyPropList:
https://github.com/apache/phoenix/blob/96f2c8b3b88c57018f8c8fe1ba2bb9846e865eb2/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java#L2427

So I think we can use familyPropList when creating a view index. To do this, I 
added a families argument to ensureViewIndexTableCreated() and passed 
familyPropList to this method.
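To make the intended change concrete, here is a minimal, self-contained sketch of the before/after logic. The names (familiesBefore, familiesAfter) are illustrative only; this is not actual Phoenix code, just a model of how an always-empty families list collapses everything to the empty column family, and how passing familyPropList through preserves named families like CF:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ViewIndexFamilySketch {

    // Before the patch: the local families list is always empty, so only the
    // parent table's empty column family ("0") ends up in the index table,
    // and writes to CF fail with NoSuchColumnFamilyException.
    static List<String> familiesBefore(String emptyColumnFamily) {
        List<String> families = new ArrayList<>(); // always empty in the buggy path
        if (families.isEmpty()) {
            families.add(emptyColumnFamily);
        }
        return families;
    }

    // After the patch: the families determined in createTableInternal()
    // (familyPropList) are passed through, so named families survive.
    static List<String> familiesAfter(String emptyColumnFamily, List<String> familyPropList) {
        return familyPropList.isEmpty()
                ? Arrays.asList(emptyColumnFamily)
                : new ArrayList<>(familyPropList);
    }

    public static void main(String[] args) {
        System.out.println(familiesBefore("0"));                      // prints [0]
        System.out.println(familiesAfter("0", Arrays.asList("CF")));  // prints [CF]
    }
}
```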

[~sergey.soldatov] Could you please review the patch?

Thanks



was (Author: brfrn169):
I attached the patch for this issue.
I think the necessary column families are determined in 
MetaDataClient.createTableInternal() (as the local variable familyPropList), so 
we can use them when creating a view index (in ensureViewIndexTableCreated()). 
To do this, I added a families argument to ensureViewIndexTableCreated().

[~sergey.soldatov] Are you working on this Jira? If so, please go ahead and 
ignore my patch. If not, could you please review it?

Thanks


> NoSuchColumnFamilyException when creating index on views that are built on 
> tables which have named column family
> 
>
> Key: PHOENIX-4669
> URL: https://issues.apache.org/jira/browse/PHOENIX-4669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4669-UT.patch, PHOENIX-4669.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table with specified column family
> {code}
> CREATE TABLE TBL (COL1 VARCHAR PRIMARY KEY, CF.COL2 VARCHAR)
> {code}
> 2. Upsert data into the table
> {code}
> UPSERT INTO TBL VALUES ('AAA','BBB')
> {code}
> 3. Create a view on the table
> {code}
> CREATE VIEW VW AS SELECT * FROM TBL
> {code}
> 4. Create an index on the view
> {code}
> CREATE INDEX IDX ON VW (CF.COL2)
> {code}
> By following the above steps, I faced the following error.
> {code}
> Exception in thread "main" org.apache.phoenix.execute.CommitException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: 
> Column family 0 does not exist in region 
> _IDX_TBL,,1521731699609.99c9a48534aeca079c1ff614293fd13a. in table 
> '_IDX_TBL', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
>  METADATA => {'IS_VIEW_INDEX_TABLE' => '\x01'}}, {NAME => 'CF', BLOOMFILTER 
> => 'NONE', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 
> 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 
> 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
> REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:903)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:822)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2376)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2352)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
>   at 
> 

[jira] [Commented] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-26 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16413446#comment-16413446
 ] 

Toshihiro Suzuki commented on PHOENIX-4669:
---

I attached the patch for this issue.
I think the necessary column families are determined in 
MetaDataClient.createTableInternal() (as the local variable familyPropList), so 
we can use them when creating a view index (in ensureViewIndexTableCreated()). 
To do this, I added a families argument to ensureViewIndexTableCreated().

[~sergey.soldatov] Are you working on this Jira? If so, please go ahead and 
ignore my patch. If not, could you please review it?

Thanks



[jira] [Updated] (PHOENIX-4669) NoSuchColumnFamilyException when creating index on views that are built on tables which have named column family

2018-03-26 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4669:
--
Attachment: PHOENIX-4669.patch
