[jira] [Commented] (PHOENIX-4751) Support client-side hash aggregation with SORT_MERGE_JOIN

2018-05-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488452#comment-16488452
 ] 

James Taylor commented on PHOENIX-4751:
---------------------------------------

Yes, you're right [~sangudi] - since the aggregation is done after the join, 
the SORT-MERGE-JOIN will be completely done on the client (and hence would 
benefit from being able to do a hash aggregation instead). The 
SpillableGroupByCache is used for server-side hash aggregation. It would 
work on the client side as well (if you can write to the file system). It 
basically tries to do everything in memory and then, past a memory threshold, 
spills to disk.
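A minimal sketch of that spill-past-a-threshold pattern (illustrative only - 
this is not Phoenix's actual SpillableGroupByCache; the class name, entry-count 
threshold, and spill-file format here are invented for the example):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy spillable GROUP BY cache: keep per-key counts in a hash map, and once
// the map grows past a threshold, write the partial aggregates to a temp
// file and clear memory. Final results merge memory with all spill files.
public class SpillableGroupBy {
    private final int maxEntriesInMemory;
    private final Map<String, Long> counts = new HashMap<>();
    private final List<Path> spillFiles = new ArrayList<>();

    public SpillableGroupBy(int maxEntriesInMemory) {
        this.maxEntriesInMemory = maxEntriesInMemory;
    }

    public void add(String groupKey) {
        counts.merge(groupKey, 1L, Long::sum);
        if (counts.size() > maxEntriesInMemory) {
            spill();
        }
    }

    private void spill() {
        try {
            Path file = Files.createTempFile("groupby-spill", ".tmp");
            try (BufferedWriter w = Files.newBufferedWriter(file)) {
                for (Map.Entry<String, Long> e : counts.entrySet()) {
                    w.write(e.getKey() + "\t" + e.getValue());
                    w.newLine();
                }
            }
            spillFiles.add(file);
            counts.clear();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Merge the in-memory map with every spilled partial aggregate.
    public Map<String, Long> results() {
        try {
            Map<String, Long> merged = new HashMap<>(counts);
            for (Path file : spillFiles) {
                for (String line : Files.readAllLines(file)) {
                    String[] kv = line.split("\t");
                    merged.merge(kv[0], Long.parseLong(kv[1]), Long::sum);
                }
            }
            return merged;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A real implementation would track serialized byte sizes rather than entry 
counts and support arbitrary aggregate expressions, but the 
memory-first-then-disk shape is the same.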

> Support client-side hash aggregation with SORT_MERGE_JOIN
> ---------------------------------------------------------
>
> Key: PHOENIX-4751
> URL: https://issues.apache.org/jira/browse/PHOENIX-4751
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 4.13.1
>Reporter: Gerald Sangudi
>Priority: Major
>
> A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
> aggregation in some cases, for improved performance.
> When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
> aggregation. It instead performs a CLIENT SORT followed by a CLIENT 
> AGGREGATE. The performance can be improved if (a) the GROUP BY output does 
> not need to be sorted, and (b) the GROUP BY input is large enough and has low 
> cardinality.
> The hash aggregation can initially be a hint. Here is an example from Phoenix 
> 4.13.1 that would benefit from hash aggregation if the GROUP BY input is 
> large with low cardinality.
> CREATE TABLE unsalted (
>  keyA BIGINT NOT NULL,
>  keyB BIGINT NOT NULL,
>  val SMALLINT,
>  CONSTRAINT pk PRIMARY KEY (keyA, keyB)
>  );
> EXPLAIN
>  SELECT /*+ USE_SORT_MERGE_JOIN */ 
>  t1.val v1, t2.val v2, COUNT(*) c 
>  FROM unsalted t1 JOIN unsalted t2 
>  ON (t1.keyA = t2.keyA) 
>  GROUP BY t1.val, t2.val;
>  
> +------------------------------------------------------------+----------------+----------------+--+
> | PLAN                                                       | EST_BYTES_READ | EST_ROWS_READ  |  |
> +------------------------------------------------------------+----------------+----------------+--+
> | SORT-MERGE-JOIN (INNER) TABLES                             | null           | null           |  |
> |     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
> | AND                                                        | null           | null           |  |
> |     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
> | CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL]              | null           | null           |  |
> | CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL]    | null           | null           |  |
> +------------------------------------------------------------+----------------+----------------+--+



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4751) Support client-side hash aggregation with SORT_MERGE_JOIN

2018-05-23 Thread Gerald Sangudi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gerald Sangudi updated PHOENIX-4751:

Description: 
A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
aggregation in some cases, for improved performance.

When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
aggregation. It instead performs a CLIENT SORT followed by a CLIENT AGGREGATE. 
The performance can be improved if (a) the GROUP BY output does not need to be 
sorted, and (b) the GROUP BY input is large enough and has low cardinality.

The hash aggregation can initially be a hint. Here is an example from Phoenix 
4.13.1 that would benefit from hash aggregation if the GROUP BY input is large 
with low cardinality.

CREATE TABLE unsalted (
 keyA BIGINT NOT NULL,
 keyB BIGINT NOT NULL,
 val SMALLINT,
 CONSTRAINT pk PRIMARY KEY (keyA, keyB)
 );

EXPLAIN
 SELECT /*+ USE_SORT_MERGE_JOIN */ 
 t1.val v1, t2.val v2, COUNT(*) c 
 FROM unsalted t1 JOIN unsalted t2 
 ON (t1.keyA = t2.keyA) 
 GROUP BY t1.val, t2.val;
 
+------------------------------------------------------------+----------------+----------------+--+
| PLAN                                                       | EST_BYTES_READ | EST_ROWS_READ  |  |
+------------------------------------------------------------+----------------+----------------+--+
| SORT-MERGE-JOIN (INNER) TABLES                             | null           | null           |  |
|     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
| AND                                                        | null           | null           |  |
|     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
| CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL]              | null           | null           |  |
| CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL]    | null           | null           |  |
+------------------------------------------------------------+----------------+----------------+--+

  was:
A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
aggregation in some cases, for improved performance.

When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
aggregation. It instead performs a CLIENT SORT followed by a CLIENT AGGREGATE. 
The performance can be improved if (a) the GROUP BY output does not need to be 
sorted, and (b) the GROUP BY input is large enough and has low cardinality.

The hash aggregation can initially be a hint. Here is an example from Phoenix 
4.13.1 that would benefit from hash aggregation if the GROUP BY input is large 
with low cardinality.

CREATE TABLE unsalted (
 keyA BIGINT NOT NULL,
 keyB BIGINT NOT NULL,
 val SMALLINT,
 CONSTRAINT pk PRIMARY KEY (keyA, keyB)
 );

EXPLAIN
 SELECT /*+ USE_SORT_MERGE_JOIN */ 
 t1.val v1, t2.val v2, COUNT(*) c 
 FROM unsalted t1 JOIN unsalted t2 
 ON (t1.keyA = t2.keyA) 
 GROUP BY t1.val, t2.val;
 
+------------------------------------------------------------+----------------+----------------+--+
| PLAN                                                       | EST_BYTES_READ | EST_ROWS_READ  |  |
+------------------------------------------------------------+----------------+----------------+--+
| SORT-MERGE-JOIN (INNER) TABLES                             | null           | null           |  |
|     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
| AND                                                        | null           | null           |  |
|     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
| CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL]              | null           | null           |  |
| CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL]    | null           | null           |  |
+------------------------------------------------------------+----------------+----------------+--+


> Support client-side hash aggregation with SORT_MERGE_JOIN
> ---------------------------------------------------------
>
> Key: PHOENIX-4751
> URL: https://issues.apache.org/jira/browse/PHOENIX-4751
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 4.13.1
>Reporter: Gerald Sangudi
>Priority: Major
>
> A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
> aggregation in some cases, for improved performance.
> When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
> aggregation. It instead performs a CLIENT SORT followed by a CLIENT 
> AGGREGATE. The performance can be improved if (a) the GROUP BY output does 
> not need to be sorted, and (b) the GROUP BY input is large enough and has low 
> cardinality.
> The hash aggregation can initially be a hint. Here is an example from Phoenix 
> 4.13.1 that would benefit from hash aggregation if the GROUP BY input is 
> large with low cardinality.
> CREATE TABLE unsalted (
>  keyA BIGINT NOT NULL,
>  keyB BIGINT NOT NULL,
>  val SMALLINT,
>  CONSTRAINT pk PRIMARY KEY (keyA, keyB)
>  );
> EXPLAIN
>  SELECT /*+ USE_SORT_MERGE_JOIN */ 
>  t1.val v1, t2.val v2, COUNT(*) c 
>  FROM unsalted t1 JOIN unsalted t2 
>  ON (t1.keyA = t2.keyA) 
>  GROUP BY t1.val, t2.val;
>  
> +------------------------------------------------------------+----------------+----------------+--+
> | PLAN                                                       | EST_BYTES_READ | EST_ROWS_READ  |  |
> 

[jira] [Commented] (PHOENIX-4751) Support client-side hash aggregation with SORT_MERGE_JOIN

2018-05-23 Thread Gerald Sangudi (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488387#comment-16488387
 ] 

Gerald Sangudi commented on PHOENIX-4751:
-----------------------------------------

[~jamestaylor], (1) my understanding is that SORT-MERGE-JOIN takes place on the 
client. In the EXPLAIN plan, I see CLIENT SCANs as part of the SORT-MERGE-JOIN. 
If that is the case, the results of the SORT-MERGE-JOIN might be in client-side 
memory before performing the GROUP BY. Depending on the data size, would the 
client be able to aggregate these faster than writing them back to a temp table 
on the region servers for aggregation?

(2) Is SpillableGroupByCache currently used anywhere, e.g. in server-side hash 
aggregation?

> Support client-side hash aggregation with SORT_MERGE_JOIN
> ---------------------------------------------------------
>
> Key: PHOENIX-4751
> URL: https://issues.apache.org/jira/browse/PHOENIX-4751
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 4.13.1
>Reporter: Gerald Sangudi
>Priority: Major
>
> A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
> aggregation in some cases, for improved performance.
> When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
> aggregation. It instead performs a CLIENT SORT followed by a CLIENT 
> AGGREGATE. The performance can be improved if (a) the GROUP BY output does 
> not need to be sorted, and (b) the GROUP BY input is large enough and has low 
> cardinality.
> The hash aggregation can initially be a hint. Here is an example from Phoenix 
> 4.13.1 that would benefit from hash aggregation if the GROUP BY input is 
> large with low cardinality.
> CREATE TABLE unsalted (
>  keyA BIGINT NOT NULL,
>  keyB BIGINT NOT NULL,
>  val SMALLINT,
>  CONSTRAINT pk PRIMARY KEY (keyA, keyB)
>  );
> EXPLAIN
>  SELECT /*+ USE_SORT_MERGE_JOIN */ 
>  t1.val v1, t2.val v2, COUNT(*) c 
>  FROM unsalted t1 JOIN unsalted t2 
>  ON (t1.keyA = t2.keyA) 
>  GROUP BY t1.val, t2.val;
>  
> +------------------------------------------------------------+----------------+----------------+--+
> | PLAN                                                       | EST_BYTES_READ | EST_ROWS_READ  |  |
> +------------------------------------------------------------+----------------+----------------+--+
> | SORT-MERGE-JOIN (INNER) TABLES                             | null           | null           |  |
> |     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
> | AND                                                        | null           | null           |  |
> |     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
> | CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL]              | null           | null           |  |
> | CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL]    | null           | null           |  |
> +------------------------------------------------------------+----------------+----------------+--+





[jira] [Updated] (PHOENIX-4751) Support client-side hash aggregation with SORT_MERGE_JOIN

2018-05-23 Thread Gerald Sangudi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gerald Sangudi updated PHOENIX-4751:

Description: 
A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
aggregation in some cases, for improved performance.

When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
aggregation. It instead performs a CLIENT SORT followed by a CLIENT AGGREGATE. 
The performance can be improved if (a) the GROUP BY output does not need to be 
sorted, and (b) the GROUP BY input is large enough and has low cardinality.

The hash aggregation can initially be a hint. Here is an example from Phoenix 
4.13.1 that would benefit from hash aggregation if the GROUP BY input is large 
with low cardinality.

CREATE TABLE unsalted (
 keyA BIGINT NOT NULL,
 keyB BIGINT NOT NULL,
 val SMALLINT,
 CONSTRAINT pk PRIMARY KEY (keyA, keyB)
 );

EXPLAIN
 SELECT /*+ USE_SORT_MERGE_JOIN */ 
 t1.val v1, t2.val v2, COUNT(*) c 
 FROM unsalted t1 JOIN unsalted t2 
 ON (t1.keyA = t2.keyA) 
 GROUP BY t1.val, t2.val;
 
+------------------------------------------------------------+----------------+----------------+--+
| PLAN                                                       | EST_BYTES_READ | EST_ROWS_READ  |  |
+------------------------------------------------------------+----------------+----------------+--+
| SORT-MERGE-JOIN (INNER) TABLES                             | null           | null           |  |
|     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
| AND                                                        | null           | null           |  |
|     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
| CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL]              | null           | null           |  |
| CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL]    | null           | null           |  |
+------------------------------------------------------------+----------------+----------------+--+

  was:
A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
aggregation in some cases, for improved performance.

When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
aggregation. It instead performs a CLIENT SORT followed by a CLIENT AGGREGATE. 
The performance can be improved if (a) the GROUP BY output does not need to be 
sorted, and (b) the GROUP BY input is large enough and has low cardinality.

The hash aggregation can initially be a hint. Here is an example from Phoenix 
4.13.1 that would benefit from hash aggregation if the GROUP BY input is large 
with low cardinality.

CREATE TABLE unsalted (
   keyA BIGINT NOT NULL,
   keyB BIGINT NOT NULL,
   val SMALLINT,
   CONSTRAINT pk PRIMARY KEY (keyA, keyB)
);

EXPLAIN
SELECT /*+ USE_SORT_MERGE_JOIN */ 
t1.val v1, t2.val v2, COUNT(*) c 
FROM unsalted t1 JOIN unsalted t2 
ON (t1.keyA = t2.keyA) 
GROUP BY t1.val, t2.val;
+------------------------------------------------------------+----------------+----------------+--+
| PLAN                                                       | EST_BYTES_READ | EST_ROWS_READ  |  |
+------------------------------------------------------------+----------------+----------------+--+
| SORT-MERGE-JOIN (INNER) TABLES                             | null           | null           |  |
|     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
| AND                                                        | null           | null           |  |
|     CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED  | null           | null           |  |
| CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL]              | null           | null           |  |
| CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL]    | null           | null           |  |
+------------------------------------------------------------+----------------+----------------+--+


> Support client-side hash aggregation with SORT_MERGE_JOIN
> ---------------------------------------------------------
>
> Key: PHOENIX-4751
> URL: https://issues.apache.org/jira/browse/PHOENIX-4751
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 4.13.1
>Reporter: Gerald Sangudi
>Priority: Major
>
> A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
> aggregation in some cases, for improved performance.
> When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
> aggregation. It instead performs a CLIENT SORT followed by a CLIENT 
> AGGREGATE. The performance can be improved if (a) the GROUP BY output does 
> not need to be sorted, and (b) the GROUP BY input is large enough and has low 
> cardinality.
> The hash aggregation can initially be a hint. Here is an example from Phoenix 
> 4.13.1 that would benefit from hash aggregation if the GROUP BY input is 
> large with low cardinality.
> CREATE TABLE unsalted (
>  keyA BIGINT NOT NULL,
>  keyB BIGINT NOT NULL,
>  val SMALLINT,
>  CONSTRAINT pk PRIMARY KEY (keyA, keyB)
>  );
> EXPLAIN
>  SELECT /*+ 

[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-23 Thread tony kerz (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488339#comment-16488339
 ] 

tony kerz commented on PHOENIX-1567:


ah, thanks for the feedback james and josh. a little further analysis clarifies:

{{org.apache.phoenix:phoenix-core:4.13.1-HBase-1.3}} in maven-central *is-not* 
an uber-jar

{{phoenix-client}} *is* an uber-jar, but is obtained from distributions, and 
*is-not* published to maven

i was having an issue connecting to an hdp install where i was told that some 
hortonworks patches were required, so i grabbed the following from the hdp 
maven-repo {{http://nexus-private.hortonworks.com/nexus/content/groups/public}}

{{org.apache.phoenix:phoenix-core:4.7.0.2.6.1.40-4}}

inexplicably, the above hdp version of {{phoenix-core}} *is* an uber-jar 
similar to {{phoenix-client}} which gave me grief.

i apologize for the confusion on this thread, i will take up my specific issue 
with hortonworks.

 

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> ------------------------------------------------------------
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Priority: Major
>
> Phoenix doesn't publish Phoenix Client & Server jars into the Maven 
> repository. This makes things quite hard for downstream projects/applications 
> to use maven to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename current client & server jar to phoenix-assembly-client/server.jar 
> to match the jars published to maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.
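Option 3 above (a dedicated phoenix-client module whose artifactId matches the 
published jar name) is commonly done with the maven-shade-plugin. A hedged 
sketch of such a module's POM - the module name and contents here are 
assumptions for illustration, not the actual Phoenix build:

```xml
<!-- Hypothetical phoenix-client/pom.xml fragment: a dedicated module whose
     artifactId matches the published jar, with dependencies shaded in. -->
<project>
  <artifactId>phoenix-client</artifactId>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <executions>
          <execution>
            <phase>package</phase>
            <goals><goal>shade</goal></goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```

Because the shaded jar then carries the module's own artifactId, it can be 
deployed to a Maven repository under a name consumers expect.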





[jira] [Commented] (PHOENIX-2896) Support encoded column qualifiers per column family

2018-05-23 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488289#comment-16488289
 ] 

Thomas D'Silva commented on PHOENIX-2896:
-----------------------------------------

[~samarthjain]

I was looking through the code and it looks like we ended up storing the counter 
per column family. Do you know how EncodedColumnQualifierCellsList would handle 
cells from different column families with the same encoded column qualifiers?

> Support encoded column qualifiers per column family 
> ----------------------------------------------------
>
> Key: PHOENIX-2896
> URL: https://issues.apache.org/jira/browse/PHOENIX-2896
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Samarth Jain
>Priority: Major
> Fix For: 4.10.0
>
>
> This allows us to reduce the number of null values in the stored array that 
> contains all columns for a given column family for the 
> COLUMNS_STORED_IN_SINGLE_CELL storage scheme.





[jira] [Commented] (PHOENIX-4752) Phoenix with Procedure v2

2018-05-23 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488247#comment-16488247
 ] 

Josh Elser commented on PHOENIX-4752:
-------------------------------------

[~stack] fyi

> Phoenix with Procedure v2
> -------------------------
>
> Key: PHOENIX-4752
> URL: https://issues.apache.org/jira/browse/PHOENIX-4752
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Priority: Major
>
> The [Procedure Framework 
> (Pv2)|https://docs.google.com/document/d/1QLXlVERKt5EMbx_EL3Y2u0j64FN-_TrVoM5WWxIXh6o/edit#heading=h.df9krsl9k16]
>  in HBase (available from HBase 1.1 and up) allows users to build multi-step 
> procedures with built-in fault tolerance. The framework was originally 
> inspired by the fault-tolerant executor (FATE) framework from Apache 
> Accumulo. This is essentially achieved by declaring the procedure as a series 
> of idempotent execution functions and providing persistence for the result of 
> each execution. In case of failures, the new owner can first determine the 
> in-flight procedures and take over their execution. The framework also allows 
> building a tree (DAG) of procedures, i.e. a procedure can have one or more 
> child procedures. The next step of a procedure is only executed once all of 
> its child procedures have successfully completed. This can be leveraged by 
> Apache Phoenix for several of its functions.
> I created a doc [Phoenix with 
> Procedurev2|https://docs.google.com/document/d/1vmaK7Yz7TTbzuBHPR09d4LqtfAj6tkPT0-DjS6nQ6QA/edit]
>  to consolidate some of the use cases. Please provide comments / ideas to 
> improve upon it. Also comment on the feasibility vs usefulness of doing it.
> [~elserj] [~giacomotaylor] [~mbertozzi]
> FYI [~apurtell] [~tdsilva]





[jira] [Commented] (PHOENIX-4728) ARRAY_APPEND and ARRAY_REMOVE should work with null column value

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488202#comment-16488202
 ] 

ASF GitHub Bot commented on PHOENIX-4728:
-----------------------------------------

Github user maryannxue commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/301#discussion_r190428434
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java ---
@@ -549,7 +549,7 @@ public MutationPlan compile(UpsertStatement upsert) 
throws SQLException {
 select = SelectStatement.create(select, hint);
 // Pass scan through if same table in upsert and select so 
that projection is computed correctly
 // Use optimizer to choose the best plan
-QueryCompiler compiler = new QueryCompiler(statement, select, 
selectResolver, targetColumns, parallelIteratorFactoryToBe, new 
SequenceManager(statement), false, false, null);
+QueryCompiler compiler = new QueryCompiler(statement, select, 
selectResolver, targetColumns, parallelIteratorFactoryToBe, new 
SequenceManager(statement), true, false, null);
--- End diff --

"Tuple projection" was first used for join queries so that columns are 
accessed based on positions instead of names. We later applied this to 
single-table queries, but for some reason (which I can't recall right now) we 
wanted to avoid tuple projection in UPSERT. If this change won't cause 
any existing test failure, I think it's just fine.
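The position-versus-name distinction behind tuple projection can be 
illustrated with a small sketch (hypothetical types only - Phoenix's actual 
Tuple and ProjectedColumnExpression machinery is considerably more involved):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a "projected" row exposes values by position, so an expression
// compiled against it carries only an index; an unprojected row requires a
// name lookup on every access. (Illustrative only; not Phoenix's real API.)
public class TupleProjectionSketch {
    // Name-based access: every read pays a map lookup by column name.
    static Object byName(Map<String, Object> row, String column) {
        return row.get(column);
    }

    // Position-based access: the compiler resolves "T1.VAL" to index 0 once;
    // every read afterwards is a simple array index.
    static Object byPosition(Object[] projectedRow, int index) {
        return projectedRow[index];
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("T1.VAL", 7);
        row.put("T2.VAL", 9);

        // Project once: fix the column order [T1.VAL, T2.VAL].
        Object[] projected = { row.get("T1.VAL"), row.get("T2.VAL") };

        System.out.println(byName(row, "T2.VAL"));    // prints 9
        System.out.println(byPosition(projected, 1)); // prints 9
    }
}
```

For join queries both sides are flattened into one projected tuple, which is 
why positional access was introduced there first.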


> ARRAY_APPEND and ARRAY_REMOVE should work with null column value
> ----------------------------------------------------------------
>
> Key: PHOENIX-4728
> URL: https://issues.apache.org/jira/browse/PHOENIX-4728
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Xavier Jodoin
>Priority: Major
>
> ARRAY_APPEND and ARRAY_REMOVE should create the array value when it's null
> Test case:
> create table test_array (
> ID VARCHAR NOT NULL,
> MYARRAY VARCHAR ARRAY,
> CONSTRAINT testpk PRIMARY KEY (ID)
> );
> upsert into test_array (id) values ('test');
> upsert into test_array select id,array_append(myarray,'testValue') from 
> test_array;
> select ID,ARRAY_TO_STRING(MYARRAY, ',')  from test_array;
>  
>  





[GitHub] phoenix pull request #301: PHOENIX-4728 The upsert select must project tuple...

2018-05-23 Thread maryannxue
Github user maryannxue commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/301#discussion_r190428434
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java ---
@@ -549,7 +549,7 @@ public MutationPlan compile(UpsertStatement upsert) 
throws SQLException {
 select = SelectStatement.create(select, hint);
 // Pass scan through if same table in upsert and select so 
that projection is computed correctly
 // Use optimizer to choose the best plan
-QueryCompiler compiler = new QueryCompiler(statement, select, 
selectResolver, targetColumns, parallelIteratorFactoryToBe, new 
SequenceManager(statement), false, false, null);
+QueryCompiler compiler = new QueryCompiler(statement, select, 
selectResolver, targetColumns, parallelIteratorFactoryToBe, new 
SequenceManager(statement), true, false, null);
--- End diff --

"Tuple projection" was first used for join queries so that columns are 
accessed based on positions instead of names. We later applied this to 
single-table queries, but for some reason (which I can't recall right now) we 
wanted to avoid tuple projection in UPSERT. If this change won't cause 
any existing test failure, I think it's just fine.


---


[jira] [Commented] (PHOENIX-2314) Cannot prepare parameterized statement with a 'like' predicate

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16488094#comment-16488094
 ] 

Hudson commented on PHOENIX-2314:
---------------------------------

SUCCESS: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1903 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1903/])
PHOENIX-2314 Cannot prepare parameterized statement with a 'like' (elserj: rev 
fa18c8f0992407e0df11c060a17c3346b723b3ab)
* (edit) 
phoenix-queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerBasicsIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/LikeExpressionIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/ExpressionCompiler.java


> Cannot prepare parameterized statement with a 'like' predicate
> --------------------------------------------------------------
>
> Key: PHOENIX-2314
> URL: https://issues.apache.org/jira/browse/PHOENIX-2314
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
> Environment: Using Fiddler or cURL to communicate with a Phoenix 
> 4.5.2 queryserver using Avatica wire protocol
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Major
>  Labels: avatica, phoenix
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-2314.2.patch, PHOENIX-2314.3.patch, 
> PHOENIX-2314.4.patch, PHOENIX-2314.5.patch, PHOENIX-2314.patch
>
>
> *POST*
> {noformat}
> request: { "request":"prepare", 
> "connectionId":"1646a1b9-334e-4a21-ade8-47c3d0c8e5a3", "sql":"select * from 
> emp where first_name like ?", "maxRowCount":-1 }
> Host: 192.168.203.156:8765
> Content-Length: 0
> {noformat}
> _select * from emp where first_name like ?_
> *RESPONSE*
> {noformat}
> HTTP/1.1 500 org.apache.phoenix.schema.TypeMismatchException: ERROR 203 
> (22005): Type mismatch. BOOLEAN for null
> Date: Wed, 07 Oct 2015 22:42:26 GMT
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html; charset=ISO-8859-1
> Content-Length: 368
> Connection: close
> Server: Jetty(9.2.z-SNAPSHOT)
> 
> 
> 
> Error 500 
> 
> 
> HTTP ERROR: 500
> Problem accessing /. Reason:
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): 
> Type mismatch. BOOLEAN for null
> Powered by Jetty://
> 
> 
> {noformat}
> _org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. BOOLEAN for null_





[jira] [Created] (PHOENIX-4752) Phoenix with Procedure v2

2018-05-23 Thread Karan Mehta (JIRA)
Karan Mehta created PHOENIX-4752:


 Summary: Phoenix with Procedure v2
 Key: PHOENIX-4752
 URL: https://issues.apache.org/jira/browse/PHOENIX-4752
 Project: Phoenix
  Issue Type: Improvement
Reporter: Karan Mehta


The [Procedure Framework 
(Pv2)|https://docs.google.com/document/d/1QLXlVERKt5EMbx_EL3Y2u0j64FN-_TrVoM5WWxIXh6o/edit#heading=h.df9krsl9k16]
 in HBase (available from HBase 1.1 and up) allows users to build multi-step 
procedures with built-in fault tolerance. The framework was originally 
inspired by the fault-tolerant executor (FATE) framework from Apache Accumulo. 
This is essentially achieved by declaring the procedure as a series of 
idempotent execution functions and providing persistence for the result of 
each execution. In case of failures, the new owner can first determine the 
in-flight procedures and take over their execution. The framework also allows 
building a tree (DAG) of procedures, i.e. a procedure can have one or more 
child procedures. The next step of a procedure is only executed once all of 
its child procedures have successfully completed. This can be leveraged by 
Apache Phoenix for several of its functions.
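The execute-once-and-persist idea can be sketched in miniature (a toy 
illustration; all names here are hypothetical, and HBase's ProcedureExecutor 
with its WAL-backed store is far more elaborate):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy procedure runner: each step's result is persisted to a "store" (a map
// standing in for a WAL-backed store). After a crash, a new owner re-runs
// the procedure against the same store; completed steps are skipped, so
// steps only need to be idempotent, not exactly-once.
public class ProcedureSketch {
    private final Map<String, String> store; // persisted step results

    public ProcedureSketch(Map<String, String> store) {
        this.store = store;
    }

    // Run a step only if no persisted result exists yet; otherwise return
    // the previously persisted result.
    public String runStep(String stepId, Supplier<String> step) {
        return store.computeIfAbsent(stepId, id -> step.get());
    }

    public static void main(String[] args) {
        Map<String, String> wal = new HashMap<>(); // survives "failover"
        ProcedureSketch p1 = new ProcedureSketch(wal);
        p1.runStep("create-table", () -> "table created");

        // Simulated crash before step 2; a new owner picks up the store.
        ProcedureSketch p2 = new ProcedureSketch(wal);
        String r1 = p2.runStep("create-table", () -> "re-run!");
        String r2 = p2.runStep("add-index", () -> "index added");
        System.out.println(r1); // persisted result; step was not re-run
        System.out.println(r2);
    }
}
```

Child procedures in a DAG follow the same pattern: a parent step only runs 
once the persisted results of all its children are present.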

I created a doc [Phoenix with 
Procedurev2|https://docs.google.com/document/d/1vmaK7Yz7TTbzuBHPR09d4LqtfAj6tkPT0-DjS6nQ6QA/edit]
 to consolidate some of the use cases. Please provide comments / ideas to 
improve upon it. Also comment on the feasibility vs usefulness of doing it.

[~elserj] [~giacomotaylor] [~mbertozzi]

FYI [~apurtell] [~tdsilva]





[jira] [Commented] (PHOENIX-4728) ARRAY_APPEND and ARRAY_REMOVE should work with null column value

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487998#comment-16487998
 ] 

ASF GitHub Bot commented on PHOENIX-4728:
-----------------------------------------

Github user xjodoin commented on the issue:

https://github.com/apache/phoenix/pull/301
  
No, the problem comes from the QueryCompiler; for the UPSERT SELECT the 
behavior is different from a simple SELECT query.

On 23 May 2018 16:04:46 EDT, James Taylor  wrote:
>JamesRTaylor commented on this pull request.
>
>
>
>> @@ -549,7 +549,7 @@ public MutationPlan compile(UpsertStatement
>upsert) throws SQLException {
> select = SelectStatement.create(select, hint);
>// Pass scan through if same table in upsert and select so that
>projection is computed correctly
> // Use optimizer to choose the best plan
>-QueryCompiler compiler = new QueryCompiler(statement,
>select, selectResolver, targetColumns, parallelIteratorFactoryToBe, new
>SequenceManager(statement), false, false, null);
>+QueryCompiler compiler = new QueryCompiler(statement,
>select, selectResolver, targetColumns, parallelIteratorFactoryToBe, new
>SequenceManager(statement), true, false, null);
>
>This seems like too general of a change for the specific issue you're
>trying to fix for ARRAY_APPEND. I'm also not sure *why* it would impact
>it. Can't you make changes to ArrayAppendFunction or its base class to
>get the desired effect?
>
>Any opinions, @maryannxue. Do you remember when/why we need this
>projectTuples boolean for QueryCompiler?
>
>-- 
>You are receiving this because you authored the thread.
>Reply to this email directly or view it on GitHub:
>https://github.com/apache/phoenix/pull/301#pullrequestreview-122745792

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


> ARRAY_APPEND and ARRAY_REMOVE should work with null column value
> ----------------------------------------------------------------
>
> Key: PHOENIX-4728
> URL: https://issues.apache.org/jira/browse/PHOENIX-4728
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Xavier Jodoin
>Priority: Major
>
> ARRAY_APPEND and ARRAY_REMOVE should create the array value when it's null
> Test case:
> create table test_array (
> ID VARCHAR NOT NULL,
> MYARRAY VARCHAR ARRAY,
> CONSTRAINT testpk PRIMARY KEY (ID)
> );
> upsert into test_array (id) values ('test');
> upsert into test_array select id,array_append(myarray,'testValue') from 
> test_array;
> select ID,ARRAY_TO_STRING(MYARRAY, ',')  from test_array;
>  
>  





[GitHub] phoenix issue #301: PHOENIX-4728 The upsert select must project tuples

2018-05-23 Thread xjodoin
Github user xjodoin commented on the issue:

https://github.com/apache/phoenix/pull/301
  
No, the problem comes from the QueryCompiler; for the UPSERT SELECT the 
behavior is different from a simple SELECT query.

On 23 May 2018 16:04:46 EDT, James Taylor  wrote:
>JamesRTaylor commented on this pull request.
>
>
>
>> @@ -549,7 +549,7 @@ public MutationPlan compile(UpsertStatement
>upsert) throws SQLException {
> select = SelectStatement.create(select, hint);
>// Pass scan through if same table in upsert and select so that
>projection is computed correctly
> // Use optimizer to choose the best plan
>-QueryCompiler compiler = new QueryCompiler(statement,
>select, selectResolver, targetColumns, parallelIteratorFactoryToBe, new
>SequenceManager(statement), false, false, null);
>+QueryCompiler compiler = new QueryCompiler(statement,
>select, selectResolver, targetColumns, parallelIteratorFactoryToBe, new
>SequenceManager(statement), true, false, null);
>
>This seems like too general of a change for the specific issue you're
>trying to fix for ARRAY_APPEND. I'm also not sure *why* it would impact
>it. Can't you make changes to ArrayAppendFunction or its base class to
>get the desired effect?
>
>Any opinions, @maryannxue. Do you remember when/why we need this
>projectTuples boolean for QueryCompiler?
>
>-- 
>You are receiving this because you authored the thread.
>Reply to this email directly or view it on GitHub:
>https://github.com/apache/phoenix/pull/301#pullrequestreview-122745792

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


---


[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487961#comment-16487961
 ] 

James Taylor commented on PHOENIX-1567:
---------------------------------------

Is it possible for you to build phoenix yourself and then manage the 
dependencies on your own? I'm not aware of us publishing uber jars to maven.

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> ------------------------------------------------------------
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Priority: Major
>
> Phoenix doesn't publish Phoenix Client & Server jars into the Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename current client & server jar to phoenix-assembly-client/server.jar 
> to match the jars published to maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-23 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487962#comment-16487962
 ] 

Josh Elser commented on PHOENIX-1567:
-

{quote}i am presently experiencing some issues with a version of the fat client 
jar (uber-jar) as published in maven because of the fact that it bundles in 
dependencies instead of calling them out via standard maven practices.
{quote}
There are two sides to this coin, and both are valid.

phoenix-client is an all-in-one uber-jar (with relocated dependencies), because 
many JDBC applications require a single jar. phoenix-core is a plain Maven 
artifact that lists all of its dependencies via "standard maven practices".

We cannot possibly handle all situations, but we can handle most. For the 
exceptional cases, users can build their own artifact that suits their needs.
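
To illustrate the two consumption styles described here, a downstream pom might declare the plain artifact as below. The coordinates follow the usual org.apache.phoenix naming, but the version shown is only illustrative; match it to your own Phoenix release.

```xml
<!-- phoenix-core resolves its transitive dependencies via normal
     Maven metadata; version shown is illustrative only. -->
<dependency>
  <groupId>org.apache.phoenix</groupId>
  <artifactId>phoenix-core</artifactId>
  <version>4.14.0-HBase-1.4</version>
</dependency>
<!-- By contrast, an uber-jar style client artifact bundles (and
     relocates) its dependencies inside a single jar, so Maven has
     nothing to reconcile against peer libraries. -->
```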






[jira] [Commented] (PHOENIX-4728) ARRAY_APPEND and ARRAY_REMOVE should work with null column value

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487960#comment-16487960
 ] 

ASF GitHub Bot commented on PHOENIX-4728:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/301#discussion_r190381688
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java ---
@@ -549,7 +549,7 @@ public MutationPlan compile(UpsertStatement upsert) 
throws SQLException {
 select = SelectStatement.create(select, hint);
 // Pass scan through if same table in upsert and select so 
that projection is computed correctly
 // Use optimizer to choose the best plan
-QueryCompiler compiler = new QueryCompiler(statement, select, 
selectResolver, targetColumns, parallelIteratorFactoryToBe, new 
SequenceManager(statement), false, false, null);
+QueryCompiler compiler = new QueryCompiler(statement, select, 
selectResolver, targetColumns, parallelIteratorFactoryToBe, new 
SequenceManager(statement), true, false, null);
--- End diff --

This seems like too general a change for the specific issue you're 
trying to fix for ARRAY_APPEND. I'm also not sure *why* it would impact it. 
Can't you make changes to ArrayAppendFunction or its base class to get the 
desired effect?

Any opinions, @maryannxue? Do you remember when/why we need this 
projectTuples boolean for QueryCompiler?


> ARRAY_APPEND and ARRAY_REMOVE should work with null column value
> 
>
> Key: PHOENIX-4728
> URL: https://issues.apache.org/jira/browse/PHOENIX-4728
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Xavier Jodoin
>Priority: Major
>
> ARRAY_APPEND and ARRAY_REMOVE should create the array value when it's null
> Test case:
> create table test_array (
> ID VARCHAR NOT NULL,
> MYARRAY VARCHAR ARRAY,
> CONSTRAINT testpk PRIMARY KEY (ID)
> );
> upsert into test_array (id) values ('test');
> upsert into test_array select id,array_append(myarray,'testValue') from 
> test_array;
> select ID,ARRAY_TO_STRING(MYARRAY, ',')  from test_array;
>  
>  
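
The semantics the issue requests can be sketched outside Phoenix. The following is an illustrative model only, using plain Java lists to stand in for Phoenix arrays; it is not the actual ArrayAppendFunction implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayAppendSketch {
    // Append that treats a null array as empty, so appending to a null
    // column yields a one-element array instead of returning null.
    static List<String> arrayAppend(List<String> arr, String value) {
        List<String> result = (arr == null) ? new ArrayList<>() : new ArrayList<>(arr);
        result.add(value);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(arrayAppend(null, "testValue"));  // [testValue]
        System.out.println(arrayAppend(List.of("a"), "b"));  // [a, b]
    }
}
```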





[GitHub] phoenix pull request #301: PHOENIX-4728 The upsert select must project tuple...

2018-05-23 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/301#discussion_r190381688
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java ---
@@ -549,7 +549,7 @@ public MutationPlan compile(UpsertStatement upsert) 
throws SQLException {
 select = SelectStatement.create(select, hint);
 // Pass scan through if same table in upsert and select so 
that projection is computed correctly
 // Use optimizer to choose the best plan
-QueryCompiler compiler = new QueryCompiler(statement, select, 
selectResolver, targetColumns, parallelIteratorFactoryToBe, new 
SequenceManager(statement), false, false, null);
+QueryCompiler compiler = new QueryCompiler(statement, select, 
selectResolver, targetColumns, parallelIteratorFactoryToBe, new 
SequenceManager(statement), true, false, null);
--- End diff --

This seems like too general a change for the specific issue you're 
trying to fix for ARRAY_APPEND. I'm also not sure *why* it would impact it. 
Can't you make changes to ArrayAppendFunction or its base class to get the 
desired effect?

Any opinions, @maryannxue? Do you remember when/why we need this 
projectTuples boolean for QueryCompiler?


---


[jira] [Comment Edited] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-23 Thread tony kerz (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487947#comment-16487947
 ] 

tony kerz edited comment on PHOENIX-1567 at 5/23/18 7:58 PM:
-

I am presently experiencing some issues with a version of the fat client jar 
(uber-jar) as published in Maven, because it bundles in its dependencies instead 
of declaring them via standard Maven practices.

This can lead to complicated issues, such as conflicts between versions of 
classes in the uber-jar and versions required by peer libraries.

For instance, I just ran into an issue where gson and groovy classes bundled 
in the uber-jar clash with versions required by some Spring Framework packages.

Uber-jars, while potentially useful for users who don't use package-management 
tools like Maven, Gradle, or sbt, are troublesome for users who do, because 
they thwart the tools' ability to manage things like version reconciliation.

I would also suggest that most modern users do use package management, so 
uber-jars aren't addressing the needs of the broadest base.

 

 


was (Author: tony-kerz):
i am presently experiencing some issues with a version of the fat client jar 
(uber-jar) as published in maven because of the fact that it bundles in 
dependencies instead of calling them out via standard maven practices.

this can lead to complicated issues such as conflicts between versions of 
classes in the uber-jar and versions required by peer libraries

for instance, i just ran into an issue around gson classes bundled in the 
uber-jar which clash with versions required by some spring framework packages.

uber-jars, while potentially useful for users who don't use package-management 
tools like maven, gradle, sbt, are troublesome for users who do use these tools 
because they thwart the tool's capabilities to manage things like version 
reconciliation. 

i would also suggest that most modern users do use package-management such that 
uber-jars aren't addressing the needs of the broadest base.

 

 






[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-05-23 Thread tony kerz (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487947#comment-16487947
 ] 

tony kerz commented on PHOENIX-1567:


I am presently experiencing some issues with a version of the fat client jar 
(uber-jar) as published in Maven, because it bundles in its dependencies instead 
of declaring them via standard Maven practices.

This can lead to complicated issues, such as conflicts between versions of 
classes in the uber-jar and versions required by peer libraries.

For instance, I just ran into an issue where gson classes bundled in the 
uber-jar clash with versions required by some Spring Framework packages.

Uber-jars, while potentially useful for users who don't use package-management 
tools like Maven, Gradle, or sbt, are troublesome for users who do, because 
they thwart the tools' ability to manage things like version reconciliation.

I would also suggest that most modern users do use package management, so 
uber-jars aren't addressing the needs of the broadest base.

 

 






[jira] [Resolved] (PHOENIX-2314) Cannot prepare parameterized statement with a 'like' predicate

2018-05-23 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-2314.
-
   Resolution: Fixed
Fix Version/s: 5.0.0

The change came back easily to 0.98, 1.1, and 1.2. I've adjusted the fixVersion 
to 4.14, since that will be the first release in which it is actually available 
across all HBase versions for a Phoenix release.

> Cannot prepare parameterized statement with a 'like' predicate
> --
>
> Key: PHOENIX-2314
> URL: https://issues.apache.org/jira/browse/PHOENIX-2314
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
> Environment: Using Fiddler or cURL to communicate with a Phoenix 
> 4.5.2 queryserver using Avatica wire protocol
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Major
>  Labels: avatica, phoenix
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-2314.2.patch, PHOENIX-2314.3.patch, 
> PHOENIX-2314.4.patch, PHOENIX-2314.5.patch, PHOENIX-2314.patch
>
>
> *POST*
> {noformat}
> request: { "request":"prepare", 
> "connectionId":"1646a1b9-334e-4a21-ade8-47c3d0c8e5a3", "sql":"select * from 
> emp where first_name like ?", "maxRowCount":-1 }
> Host: 192.168.203.156:8765
> Content-Length: 0
> {noformat}
> _select * from emp where first_name like ?_
> *RESPONSE*
> {noformat}
> HTTP/1.1 500 org.apache.phoenix.schema.TypeMismatchException: ERROR 203 
> (22005): Type mismatch. BOOLEAN for null
> Date: Wed, 07 Oct 2015 22:42:26 GMT
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html; charset=ISO-8859-1
> Content-Length: 368
> Connection: close
> Server: Jetty(9.2.z-SNAPSHOT)
> 
> 
> 
> Error 500 
> 
> 
> HTTP ERROR: 500
> Problem accessing /. Reason:
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): 
> Type mismatch. BOOLEAN for null
> Powered by Jetty://
> 
> 
> {noformat}
> _org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. BOOLEAN for null_





[jira] [Updated] (PHOENIX-2314) Cannot prepare parameterized statement with a 'like' predicate

2018-05-23 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-2314:

Fix Version/s: (was: 4.12.0)
   4.14.0

> Cannot prepare parameterized statement with a 'like' predicate
> --
>
> Key: PHOENIX-2314
> URL: https://issues.apache.org/jira/browse/PHOENIX-2314
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
> Environment: Using Fiddler or cURL to communicate with a Phoenix 
> 4.5.2 queryserver using Avatica wire protocol
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Major
>  Labels: avatica, phoenix
> Fix For: 4.14.0
>
> Attachments: PHOENIX-2314.2.patch, PHOENIX-2314.3.patch, 
> PHOENIX-2314.4.patch, PHOENIX-2314.5.patch, PHOENIX-2314.patch
>
>
> *POST*
> {noformat}
> request: { "request":"prepare", 
> "connectionId":"1646a1b9-334e-4a21-ade8-47c3d0c8e5a3", "sql":"select * from 
> emp where first_name like ?", "maxRowCount":-1 }
> Host: 192.168.203.156:8765
> Content-Length: 0
> {noformat}
> _select * from emp where first_name like ?_
> *RESPONSE*
> {noformat}
> HTTP/1.1 500 org.apache.phoenix.schema.TypeMismatchException: ERROR 203 
> (22005): Type mismatch. BOOLEAN for null
> Date: Wed, 07 Oct 2015 22:42:26 GMT
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html; charset=ISO-8859-1
> Content-Length: 368
> Connection: close
> Server: Jetty(9.2.z-SNAPSHOT)
> 
> 
> 
> Error 500 
> 
> 
> HTTP ERROR: 500
> Problem accessing /. Reason:
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): 
> Type mismatch. BOOLEAN for null
> Powered by Jetty://
> 
> 
> {noformat}
> _org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. BOOLEAN for null_





[jira] [Commented] (PHOENIX-4742) DistinctPrefixFilter potentially seeks to lesser key when descending or null value

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487918#comment-16487918
 ] 

Hudson commented on PHOENIX-4742:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1885 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1885/])
PHOENIX-4742 DistinctPrefixFilter potentially seeks to lesser key when 
(jtaylor: rev 48b6f99acdeb91e3167e7beeed49747f7b7dcc6c)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/filter/DistinctPrefixFilter.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/filter/SkipScanFilter.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/RowKeySchema.java


> DistinctPrefixFilter potentially seeks to lesser key when descending or null 
> value
> --
>
> Key: PHOENIX-4742
> URL: https://issues.apache.org/jira/browse/PHOENIX-4742
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4742_v1.patch
>
>
> DistinctPrefixFilter seeks to a smaller key than the current key (which 
> causes an infinite loop in HBase 1.4 and seeks to every row in other HBase 
> versions). This happens when:
>  # Last column of distinct is descending. We currently always add a 0x01 
> byte, but since the separator byte is 0xFF when descending, the seek key is 
> too small.
>  # Last column value is null. In this case, instead of adding a 0x01 byte, we 
> need to increment in-place the null value of the last distinct column. 
> This was discovered due to 
> OrderByIT.testOrderByReverseOptimizationWithNUllsLastBug3491 hanging in 
> master.
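
The first case above can be illustrated with plain unsigned byte comparison. The key bytes below are made up for the illustration; the only assumption is that HBase orders row keys as unsigned lexicographic byte arrays:

```java
public class SeekKeySketch {
    // Unsigned lexicographic comparison, the order HBase uses for row keys.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        // Hypothetical current row key: one-byte distinct prefix 'A',
        // then the descending separator 0xFF, then further key bytes.
        byte[] current = {0x41, (byte) 0xFF, 0x42};
        // Seek key built by appending 0x01 to the prefix. Because
        // 0x01 < 0xFF at the separator position, it sorts BEFORE the
        // current key, so the scanner seeks backwards (an infinite
        // loop on HBase 1.4).
        byte[] naiveSeek = {0x41, 0x01};
        System.out.println(compare(naiveSeek, current) < 0);  // true
    }
}
```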





[jira] [Commented] (PHOENIX-4692) ArrayIndexOutOfBoundsException in ScanRanges.intersectScan

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487920#comment-16487920
 ] 

Hudson commented on PHOENIX-4692:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1885 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1885/])
PHOENIX-4692 ArrayIndexOutOfBoundsException in ScanRanges.intersectScan 
(maryannxue: rev 28b9de0da01b61e61c749ed433ddb995596b3e45)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/HashJoinPlan.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SkipScanQueryIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java


> ArrayIndexOutOfBoundsException in ScanRanges.intersectScan
> --
>
> Key: PHOENIX-4692
> URL: https://issues.apache.org/jira/browse/PHOENIX-4692
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4692-IT.patch, PHOENIX-4692_v1.patch, 
> PHOENIX-4692_v2.patch
>
>
> ScanRanges.intersectScan may fail with AIOOBE if a salted table is used.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at org.apache.phoenix.util.ScanUtil.getKey(ScanUtil.java:333)
>   at org.apache.phoenix.util.ScanUtil.getMinKey(ScanUtil.java:317)
>   at 
> org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:371)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:1074)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:631)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:501)
>   at 
> org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62)
>   at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:274)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:364)
>   at 
> org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:234)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
>   at 
> org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:293)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:292)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1798)
> {noformat}
> Script to reproduce:
> {noformat}
> CREATE TABLE TEST (PK1 INTEGER NOT NULL, PK2 INTEGER NOT NULL,  ID1 INTEGER, 
> ID2 INTEGER CONSTRAINT PK PRIMARY KEY(PK1 , PK2))SALT_BUCKETS = 4;
> upsert into test values (1,1,1,1);
> upsert into test values (2,2,2,2);
> upsert into test values (2,3,1,2);
> create view TEST_VIEW as select * from TEST where PK1 in (1,2);
> CREATE INDEX IDX_VIEW ON TEST_VIEW (ID1);
>   select /*+ INDEX(TEST_VIEW IDX_VIEW) */ * from TEST_VIEW where ID1 = 1  
> ORDER BY ID2 LIMIT 500 OFFSET 0;
> {noformat}
> That happens because we have a point lookup optimization which reduces 
> RowKeySchema to a single field, while we have more than one slot due to salting. 
> [~jamestaylor] can you please take a look? I'm not sure whether it should be 
> fixed on the ScanUtil level or we just should not use point lookup in such 
> cases.
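
For context on why a salted table always has more than one row-key slot: Phoenix-style salting prepends a bucket byte computed from a hash of the key. A minimal sketch follows; the hash function here is illustrative, not Phoenix's actual one:

```java
public class SaltSketch {
    // Illustrative salt-byte computation: hash the key bytes, mod the
    // bucket count. (Phoenix's real hash differs; the shape is the same.)
    static byte saltByte(byte[] rowKey, int buckets) {
        int hash = 17;
        for (byte b : rowKey) hash = 31 * hash + b;
        return (byte) Math.floorMod(hash, buckets);
    }

    public static void main(String[] args) {
        byte salt = saltByte("row1".getBytes(), 4);
        // The salt byte occupies a leading slot of its own, so every
        // salted table key has at least two slots: salt + PK columns.
        System.out.println(salt >= 0 && salt < 4);  // true
    }
}
```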





[jira] [Commented] (PHOENIX-4737) Use position as column qualifier for APPEND_ONLY_SCHEMA

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487913#comment-16487913
 ] 

Hudson commented on PHOENIX-4737:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1885 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1885/])
PHOENIX-4737 Use position as column qualifier for APPEND_ONLY_SCHEMA (jtaylor: 
rev 22b02ef108a40eb24f69f200843675a91bc16bf9)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ColumnEncodedBytesPropIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java


> Use position as column qualifier for APPEND_ONLY_SCHEMA
> ---
>
> Key: PHOENIX-4737
> URL: https://issues.apache.org/jira/browse/PHOENIX-4737
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4737_v1.patch
>
>
> An easy way to prevent gaps in the column encoding used to define column 
> qualifiers is to use the position to define the column qualifier. This only 
> works if:
>  * You disallow removes of columns
>  * You disallow adding columns to the base table
> This is pretty easy to enforce and will enable column encoding to be used 
> effectively when a base table has many views.





[jira] [Commented] (PHOENIX-4685) Properly handle connection caching for Phoenix inside RegionServers

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487915#comment-16487915
 ] 

Hudson commented on PHOENIX-4685:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1885 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1885/])
PHOENIX-4685 Properly handle connection caching for Phoenix inside 
(ankitsinghal59: rev 4c918352d1893bba46db2bdf08f468ca52fe2cba)
* (edit) phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java


> Properly handle connection caching for Phoenix inside RegionServers
> ---
>
> Key: PHOENIX-4685
> URL: https://issues.apache.org/jira/browse/PHOENIX-4685
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4685.patch, PHOENIX-4685_5.x-HBase-2.0.patch, 
> PHOENIX-4685_addendum.patch, PHOENIX-4685_addendum2.patch, 
> PHOENIX-4685_addendum3.patch, PHOENIX-4685_addendum4.patch, 
> PHOENIX-4685_jstack, PHOENIX-4685_v2.patch, PHOENIX-4685_v3.patch, 
> PHOENIX-4685_v4.patch, PHOENIX-4685_v5.patch
>
>
> Currently, trying to write data to an indexed table fails with an OOME (unable 
> to create native threads), but it works fine with the 4.7.x branches. Found many 
> threads created for meta lookup and shared threads, leaving no space to create 
> new threads. This happens even with short-circuit writes enabled.
> {noformat}
> 2018-04-08 13:06:04,747 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=16020] 
> index.PhoenixIndexFailurePolicy: handleFailure failed
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailureWithExceptions(PhoenixIndexFailurePolicy.java:217)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:143)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:632)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:607)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1037)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:3533)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3914)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3822)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
> at 
> 

[jira] [Commented] (PHOENIX-4704) Presplit index tables when building asynchronously

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487917#comment-16487917
 ] 

Hudson commented on PHOENIX-4704:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1885 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1885/])
PHOENIX-4704 Presplit index tables when building asynchronously (vincentpoon: 
rev 6ab9b372f16f37b11e657b6803c6a60007815824)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java


> Presplit index tables when building asynchronously
> --
>
> Key: PHOENIX-4704
> URL: https://issues.apache.org/jira/browse/PHOENIX-4704
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4704.master.v1.patch, 
> PHOENIX-4704.master.v2.patch
>
>
> For large data tables with many regions, if we build the index asynchronously 
> using the IndexTool, the index table will initially face a hotspot as all data 
> region mappers attempt to write to the sole new index region. This can 
> potentially lead to the index getting disabled if writes to the index table 
> time out during this hotspotting.
> We can add an optional step (or perhaps activate it based on the count of 
> regions in the data table) to the IndexTool to first run an MR job to gather 
> stats on the indexed column values, and then attempt to presplit the index 
> table before we do the actual index build MR job.
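
The proposed flow above (sample the indexed column values, then derive split points before the build) can be sketched as follows. This is a minimal illustrative sketch, not the IndexTool implementation; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: derive presplit boundaries for an index table from a
// sample of indexed column values (equi-depth quantiles over the sample).
public class PresplitSketch {

    // Returns regions-1 boundary values so that each resulting region
    // receives roughly the same share of the sampled values.
    static List<String> splitPoints(List<String> sample, int regions) {
        List<String> sorted = new ArrayList<>(sample);
        Collections.sort(sorted);
        List<String> points = new ArrayList<>();
        for (int i = 1; i < regions; i++) {
            // value at the i/regions quantile of the sorted sample
            points.add(sorted.get(i * sorted.size() / regions));
        }
        return points;
    }

    public static void main(String[] args) {
        List<String> sample = List.of("a", "b", "c", "d", "e", "f", "g", "h");
        // 4 target regions -> 3 split points at the quartile boundaries
        System.out.println(splitPoints(sample, 4)); // [c, e, g]
    }
}
```

A real job would feed boundaries like these to HBase table creation with explicit split keys before launching the index build.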



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4744) Reduce parallelism in integration test runs

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487919#comment-16487919
 ] 

Hudson commented on PHOENIX-4744:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1885 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1885/])
PHOENIX-4744 Reduce parallelism in integration test runs (jtaylor: rev 
58415e2f31617ec543cb01e8bc27ce44c4efbe0d)
* (edit) pom.xml


> Reduce parallelism in integration test runs
> ---
>
> Key: PHOENIX-4744
> URL: https://issues.apache.org/jira/browse/PHOENIX-4744
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0
>
>
> Reducing the parallelism seems to help the test runs pass. I've tried 
> going from 8 to 4 and have had better luck.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4706) phoenix-core jar bundles dependencies unnecessarily

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487921#comment-16487921
 ] 

Hudson commented on PHOENIX-4706:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1885 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1885/])
PHOENIX-4706 Remove bundling dependencies into phoenix-core (elserj: rev 
ea9495192d2256b9f81a06ee327526836b30259b)
* (edit) phoenix-core/pom.xml


> phoenix-core jar bundles dependencies unnecessarily
> ---
>
> Key: PHOENIX-4706
> URL: https://issues.apache.org/jira/browse/PHOENIX-4706
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4706.001.patch
>
>
> Got a report from some users about extra dependencies being included inside 
> the phoenix-core jar. I was a little confused about this, but, sure enough, 
> it's happening.
> Seems like this was done a very long time ago, but I'm not sure that it's 
> really something we want to do since there is a dedicated phoenix-client jar 
> now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4701) Write client-side metrics asynchronously to SYSTEM.LOG

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487912#comment-16487912
 ] 

Hudson commented on PHOENIX-4701:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1885 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1885/])
PHOENIX-4701 Write client-side metrics asynchronously to SYSTEM.LOG 
(ankitsinghal59: rev 7afaceb7e7355e59ae9465a02b812b230fc58edd)
* (add) phoenix-core/src/main/java/org/apache/phoenix/monitoring/MetricUtil.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryLoggerIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/log/LogLevel.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/compile/StatementContext.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/log/QueryLogInfo.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixRecordReader.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
* (edit) bin/hbase-site.xml
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/log/TableLogWriter.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixResultSet.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/ParallelIterators.java
* (add) phoenix-core/src/main/java/org/apache/phoenix/log/QueryStatus.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/MetricType.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/log/RingBufferEvent.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/MemoryMetricsHolder.java
* (delete) phoenix-core/src/main/java/org/apache/phoenix/log/QueryLogState.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/ScanMetricsHolder.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/MutationMetricQueue.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/log/LogWriter.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/log/QueryLoggerUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/log/RingBufferEventTranslator.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/OverAllQueryMetrics.java
* (edit) 
phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixRecordReader.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/log/QueryLogger.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/QueryUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixConnection.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/TaskExecutionMetricsHolder.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionlessQueryServicesImpl.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/ReadMetricQueue.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/iterate/SpoolingResultIteratorTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/SerialIterators.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/SpoolingMetricsHolder.java
PHOENIX-4701 Write client-side metrics asynchronously to (ankitsinghal59: rev 
50533ce387fc6cabb5aaccdbe6677a97b9debe73)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/CombinableMetricImpl.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/MutationMetricQueue.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/MetricUtil.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/SpoolingMetricsHolder.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/OverAllQueryMetrics.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/CombinableMetric.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/iterate/SpoolingResultIteratorTest.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/monitoring/ReadMetricQueue.java
* (edit) 

[jira] [Commented] (PHOENIX-3163) Split during global index creation may cause ERROR 201 error

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487914#comment-16487914
 ] 

Hudson commented on PHOENIX-3163:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1885 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1885/])
PHOENIX-3163 Split during global index creation may cause ERROR 201 (jtaylor: 
rev 763c38bcaf8824588022d9311fec00d85239e80c)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java


> Split during global index creation may cause ERROR 201 error
> 
>
> Key: PHOENIX-3163
> URL: https://issues.apache.org/jira/browse/PHOENIX-3163
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-3163_addendum1.patch, PHOENIX-3163_v1.patch, 
> PHOENIX-3163_v3.patch, PHOENIX-3163_v4.patch, PHOENIX-3163_v5.patch, 
> PHOENIX-3163_v6.patch
>
>
> When we create a global index and a split happens in the meantime, there is 
> a chance of failing with ERROR 201:
> {noformat}
> 2016-08-08 15:55:17,248 INFO  [Thread-6] org.apache.phoenix.iterate.BaseResultIterators(878): Failed to execute task during cancel
> java.util.concurrent.ExecutionException: java.sql.SQLException: ERROR 201 (22000): Illegal data.
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:872)
>   at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:809)
>   at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:713)
>   at org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>   at org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>   at org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:815)
>   at org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:124)
>   at org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:2823)
>   at org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1079)
>   at org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1382)
>   at org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343)
>   at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:330)
>   at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440)
>   at org.apache.phoenix.hbase.index.write.TestIndexWriter$1.run(TestIndexWriter.java:93)
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data.
>   at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:441)
>   at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>   at org.apache.phoenix.schema.types.PDataType.newIllegalDataException(PDataType.java:287)
>   at org.apache.phoenix.schema.types.PUnsignedSmallint$UnsignedShortCodec.decodeShort(PUnsignedSmallint.java:146)
>   at org.apache.phoenix.schema.types.PSmallint.toObject(PSmallint.java:104)
>   at org.apache.phoenix.schema.types.PSmallint.toObject(PSmallint.java:28)
>   at org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:980)
>   at org.apache.phoenix.schema.types.PUnsignedSmallint.toObject(PUnsignedSmallint.java:102)
>   at org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:980)
>   at org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:992)
>   at org.apache.phoenix.schema.types.PDataType.coerceBytes(PDataType.java:830)
>   at org.apache.phoenix.schema.types.PDecimal.coerceBytes(PDecimal.java:342)
>   at org.apache.phoenix.schema.types.PDataType.coerceBytes(PDataType.java:810)
>   at org.apache.phoenix.expression.CoerceExpression.evaluate(CoerceExpression.java:149)
>   at org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
>   at 

[jira] [Commented] (PHOENIX-4724) Efficient Equi-Depth histogram for streaming data

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487916#comment-16487916
 ] 

Hudson commented on PHOENIX-4724:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1885 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1885/])
PHOENIX-4724 Efficient Equi-Depth histogram for streaming data (vincentpoon: 
rev cb17adbbde56cacd43846ead2200e6606ed64ae8)
* (add) 
phoenix-core/src/test/java/org/apache/phoenix/util/EquiDepthStreamHistogramTest.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/util/EquiDepthStreamHistogram.java


> Efficient Equi-Depth histogram for streaming data
> -
>
> Key: PHOENIX-4724
> URL: https://issues.apache.org/jira/browse/PHOENIX-4724
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4724.v1.patch, PHOENIX-4724.v2.patch
>
>
> Equi-Depth histogram from 
> http://web.cs.ucla.edu/~zaniolo/papers/Histogram-EDBT2011-CamReady.pdf, but 
> without the sliding window - we assume a single window over the entire data 
> set.
> Used to generate the bucket boundaries of a histogram where each bucket has 
> the same number of items.
> This is useful, for example, for pre-splitting an index table, by feeding in 
> data from the indexed column.
> Works on streaming data - the histogram is dynamically updated for each new 
> value.
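
The idea above can be illustrated with a much-simplified split-only variant. This is a sketch under stated assumptions, not the EquiDepthStreamHistogram implementation (the paper's algorithm also merges under-full adjacent buckets and tracks boundaries more carefully); all names here are illustrative:

```java
import java.util.Set;
import java.util.TreeMap;

// Simplified sketch of an equi-depth streaming histogram: buckets are keyed
// by their left boundary, and a bucket whose count overflows the threshold
// is split in two at the incoming value.
public class EquiDepthSketch {
    final TreeMap<Long, Long> buckets = new TreeMap<>(); // left bound -> count
    final long maxPerBucket; // expansion threshold before a bucket splits

    EquiDepthSketch(long maxPerBucket) {
        this.maxPerBucket = maxPerBucket;
        buckets.put(Long.MIN_VALUE, 0L); // one initial bucket covering everything
    }

    void add(long value) {
        long key = buckets.floorKey(value); // bucket containing the value
        long count = buckets.get(key) + 1;
        if (count > maxPerBucket && value != key) {
            // split: half the count stays, half moves to a new bucket whose
            // left boundary is the incoming value
            buckets.put(key, count / 2);
            buckets.put(value, count - count / 2);
        } else {
            buckets.put(key, count);
        }
    }

    // Bucket boundaries (excluding the sentinel), usable e.g. as presplit
    // points for an index table.
    Set<Long> boundaries() {
        return buckets.tailMap(Long.MIN_VALUE, false).keySet();
    }
}
```

Each new value only touches the one bucket it falls into, which is what makes the streaming update cheap.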



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4726) save index build timestamp -- for SYNC case only.

2018-05-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487911#comment-16487911
 ] 

Hudson commented on PHOENIX-4726:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1885 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1885/])
PHOENIX-4726 save index build timestamp -- for SYNC case only (vincentpoon: rev 
78594437e89fdb06f02cb29405193ef827596c49)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
Revert "PHOENIX-4726 save index build timestamp -- for SYNC case only" 
(vincentpoon: rev b539466bcc19232b7bc3eaff367d11b3f64f0228)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
PHOENIX-4726 save sync index build start timestamp (vincentpoon: rev 
1966edb1986a387580e0d06bd819b52ae9378ea1)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java


> save index build timestamp -- for SYNC case only.
> -
>
> Key: PHOENIX-4726
> URL: https://issues.apache.org/jira/browse/PHOENIX-4726
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4726.4.patch, PHOENIX-4726.patch.1, 
> PHOENIX-4726.patch.2, PHOENIX-4726.patch.3
>
>
> Save the index build timestamp, similar to ASYNC_REBUILD_TIMESTAMP or 
> ASYNC_CREATED_DATE.
> ("SYNC_INDEX_CREATED_DATE" is my proposed name for the SYNC case.)
>  
> Check IndexUtil.java for related code.
> The reason this can be useful: we saw a case where the index state was stuck 
> in 'b' for quite a long time. Without a timestamp to indicate when the build 
> started, it's hard to tell whether it is a legitimately running task or stuck.
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4751) Support client-side hash aggregation with SORT_MERGE_JOIN

2018-05-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487897#comment-16487897
 ] 

James Taylor commented on PHOENIX-4751:
---

A couple of thoughts on this:
 * See SpillableGroupByCache for an implementation of a spillable hash map that 
could be used for hash-based aggregation. The comment at the top references an 
algorithm described here: 
http://db.inf.uni-tuebingen.de/files/teaching/ws1011/db2/db2-hash-indexes.pdf. 
It's difficult to get good performance once you have to start spilling to disk.
 * Another alternative would be to sort on the region server, as this would 
distribute the sort across the cluster. The reason the sort is done at all is 
to make the final aggregation scalable via a merge sort.
 * Introduce a shuffle step in the query plan to avoid aggregating on the 
client. This could use an UPSERT SELECT command to write intermediate aggregate 
results to a temp table, followed by running an aggregate query on the results. 
In this case, the results would be naturally sorted by HBase.
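
For reference, the hash-based alternative to the CLIENT SORT plus CLIENT AGGREGATE pair is a single unsorted pass over the join output. A minimal in-memory sketch for a COUNT(*) per group (no spilling; a real version, cf. SpillableGroupByCache, would spill past a memory threshold; names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of client-side hash aggregation for GROUP BY key -> COUNT(*).
// Unlike sort-based aggregation, no ordering of the input rows is required.
public class HashAggSketch {

    static Map<String, Long> countByKey(Iterable<String> groupKeys) {
        Map<String, Long> agg = new HashMap<>();
        for (String key : groupKeys) {
            agg.merge(key, 1L, Long::sum); // O(1) update per input row
        }
        return agg; // unsorted, which is fine when output order doesn't matter
    }

    public static void main(String[] args) {
        System.out.println(countByKey(java.util.List.of("a", "b", "a")));
    }
}
```

This is why condition (a) in the issue matters: hash aggregation only wins when the GROUP BY output does not itself need to be sorted.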

> Support client-side hash aggregation with SORT_MERGE_JOIN
> -
>
> Key: PHOENIX-4751
> URL: https://issues.apache.org/jira/browse/PHOENIX-4751
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 4.13.1
>Reporter: Gerald Sangudi
>Priority: Major
>
> A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
> aggregation in some cases, for improved performance.
> When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
> aggregation. It instead performs a CLIENT SORT followed by a CLIENT 
> AGGREGATE. The performance can be improved if (a) the GROUP BY output does 
> not need to be sorted, and (b) the GROUP BY input is large enough and has low 
> cardinality.
> The hash aggregation can initially be a hint. Here is an example from Phoenix 
> 4.13.1 that would benefit from hash aggregation if the GROUP BY input is 
> large with low cardinality.
> CREATE TABLE unsalted (
>     keyA BIGINT NOT NULL,
>     keyB BIGINT NOT NULL,
>     val SMALLINT,
>     CONSTRAINT pk PRIMARY KEY (keyA, keyB)
> );
> EXPLAIN
> SELECT /*+ USE_SORT_MERGE_JOIN */
>     t1.val v1, t2.val v2, COUNT(*) c
> FROM unsalted t1 JOIN unsalted t2
>     ON (t1.keyA = t2.keyA)
> GROUP BY t1.val, t2.val;
> | PLAN | EST_BYTES_READ | EST_ROWS_READ |
> | SORT-MERGE-JOIN (INNER) TABLES | null | null |
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED | null | null |
> | AND | null | null |
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED | null | null |
> | CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL] | null | null |
> | CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL] | null | null |



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-2582) Prevent need of catch up query when creating non transactional index

2018-05-23 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-2582:
---

Assignee: (was: Thomas D'Silva)

> Prevent need of catch up query when creating non transactional index
> 
>
> Key: PHOENIX-2582
> URL: https://issues.apache.org/jira/browse/PHOENIX-2582
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Priority: Major
>
> If we create an index while we are upserting rows to the table, it's possible 
> to miss writing the corresponding rows to the index table. 
> If a region server is writing a batch of rows and we create an index just 
> before the batch is written, we will miss writing that batch to the index 
> table. This is because we run the initial UPSERT SELECT to populate the index 
> with an SCN that we get from the server, which will be before the timestamp 
> at which the batch of rows is written. 
> We need to figure out if there is a way to determine that all pending batches 
> have been written before running the UPSERT SELECT to do the initial index 
> population.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-2582) Prevent need of catch up query when creating non transactional index

2018-05-23 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487789#comment-16487789
 ] 

Thomas D'Silva commented on PHOENIX-2582:
-

[~karanmehta93] this Jira would be a good one to work on if you are interested 
in looking into HBase Procedures.

> Prevent need of catch up query when creating non transactional index
> 
>
> Key: PHOENIX-2582
> URL: https://issues.apache.org/jira/browse/PHOENIX-2582
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
>
> If we create an index while we are upserting rows to the table, it's possible 
> to miss writing the corresponding rows to the index table. 
> If a region server is writing a batch of rows and we create an index just 
> before the batch is written, we will miss writing that batch to the index 
> table. This is because we run the initial UPSERT SELECT to populate the index 
> with an SCN that we get from the server, which will be before the timestamp 
> at which the batch of rows is written. 
> We need to figure out if there is a way to determine that all pending batches 
> have been written before running the UPSERT SELECT to do the initial index 
> population.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4751) Support client-side hash aggregation with SORT_MERGE_JOIN

2018-05-23 Thread Gerald Sangudi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gerald Sangudi updated PHOENIX-4751:

Summary: Support client-side hash aggregation with SORT_MERGE_JOIN  (was: 
Support hash aggregation with SORT_MERGE_JOIN)

> Support client-side hash aggregation with SORT_MERGE_JOIN
> -
>
> Key: PHOENIX-4751
> URL: https://issues.apache.org/jira/browse/PHOENIX-4751
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 4.13.1
>Reporter: Gerald Sangudi
>Priority: Major
>
> A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
> aggregation in some cases, for improved performance.
> When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
> aggregation. It instead performs a CLIENT SORT followed by a CLIENT 
> AGGREGATE. The performance can be improved if (a) the GROUP BY output does 
> not need to be sorted, and (b) the GROUP BY input is large enough and has low 
> cardinality.
> The hash aggregation can initially be a hint. Here is an example from Phoenix 
> 4.13.1 that would benefit from hash aggregation if the GROUP BY input is 
> large with low cardinality.
> CREATE TABLE unsalted (
>     keyA BIGINT NOT NULL,
>     keyB BIGINT NOT NULL,
>     val SMALLINT,
>     CONSTRAINT pk PRIMARY KEY (keyA, keyB)
> );
> EXPLAIN
> SELECT /*+ USE_SORT_MERGE_JOIN */
>     t1.val v1, t2.val v2, COUNT(*) c
> FROM unsalted t1 JOIN unsalted t2
>     ON (t1.keyA = t2.keyA)
> GROUP BY t1.val, t2.val;
> | PLAN | EST_BYTES_READ | EST_ROWS_READ |
> | SORT-MERGE-JOIN (INNER) TABLES | null | null |
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED | null | null |
> | AND | null | null |
> | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED | null | null |
> | CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL] | null | null |
> | CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL] | null | null |



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4751) Support hash aggregation with SORT_MERGE_JOIN

2018-05-23 Thread Gerald Sangudi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gerald Sangudi updated PHOENIX-4751:

Description: 
A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
aggregation in some cases, for improved performance.

When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
aggregation. It instead performs a CLIENT SORT followed by a CLIENT AGGREGATE. 
The performance can be improved if (a) the GROUP BY output does not need to be 
sorted, and (b) the GROUP BY input is large enough and has low cardinality.

The hash aggregation can initially be a hint. Here is an example from Phoenix 
4.13.1 that would benefit from hash aggregation if the GROUP BY input is large 
with low cardinality.

CREATE TABLE unsalted (
    keyA BIGINT NOT NULL,
    keyB BIGINT NOT NULL,
    val SMALLINT,
    CONSTRAINT pk PRIMARY KEY (keyA, keyB)
);

EXPLAIN
SELECT /*+ USE_SORT_MERGE_JOIN */
    t1.val v1, t2.val v2, COUNT(*) c
FROM unsalted t1 JOIN unsalted t2
    ON (t1.keyA = t2.keyA)
GROUP BY t1.val, t2.val;

| PLAN | EST_BYTES_READ | EST_ROWS_READ |
| SORT-MERGE-JOIN (INNER) TABLES | null | null |
| CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED | null | null |
| AND | null | null |
| CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED | null | null |
| CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL] | null | null |
| CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL] | null | null |

  was:
A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
aggregation in some cases, for improved performance.

When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
aggregation. It instead performs a CLIENT SORT followed by a CLIENT AGGREGATE. 
The performance can be improved if (a) the GROUP BY output does not need to be 
sorted, and (b) the GROUP BY input is large enough and has low cardinality.

The hash aggregation can initially be a hint. Here is an example from Phoenix 
4.13.1 that would benefit from hash aggregation if the GROUP BY input is large 
with low cardinality.

CREATE TABLE unsalted (
    keyA BIGINT NOT NULL,
    keyB BIGINT NOT NULL,
    val SMALLINT,
    CONSTRAINT pk PRIMARY KEY (keyA, keyB)
);

EXPLAIN
SELECT /*+ USE_SORT_MERGE_JOIN */
    t1.val v1, t2.val v2, COUNT(*) c
FROM unsalted t1 JOIN unsalted t2
    ON (t1.keyA = t2.keyA)
GROUP BY t1.val, t2.val;

| PLAN | EST_BYTES_READ | EST_ROWS_READ |
| SORT-MERGE-JOIN (INNER) TABLES | null | null |
| CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED | null | null |
| AND | null | null |
| CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED | null | null |
| CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL] | null | null |
| CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL] | null | null |


> Support hash aggregation with SORT_MERGE_JOIN
> -
>
> Key: PHOENIX-4751
> URL: https://issues.apache.org/jira/browse/PHOENIX-4751
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 4.13.1
>Reporter: Gerald Sangudi
>Priority: Major
>
> A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
> aggregation in some cases, for improved performance.
> When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
> aggregation. It instead performs a CLIENT SORT followed by a CLIENT 
> AGGREGATE. The performance can be improved if (a) the GROUP BY output does 
> not need to be sorted, and (b) the GROUP BY input is large enough and has low 
> cardinality.
> The hash aggregation can initially be a hint. Here is an example from Phoenix 
> 4.13.1 that would benefit from hash aggregation if the GROUP BY input is 
> large with low cardinality.
> CREATE TABLE unsalted (
>     keyA BIGINT NOT NULL,
>     keyB BIGINT NOT NULL,
>     val SMALLINT,
>     CONSTRAINT pk 

[jira] [Created] (PHOENIX-4751) Support hash aggregation with SORT_MERGE_JOIN

2018-05-23 Thread Gerald Sangudi (JIRA)
Gerald Sangudi created PHOENIX-4751:
---

 Summary: Support hash aggregation with SORT_MERGE_JOIN
 Key: PHOENIX-4751
 URL: https://issues.apache.org/jira/browse/PHOENIX-4751
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.0, 4.13.1
Reporter: Gerald Sangudi


A GROUP BY that follows a SORT_MERGE_JOIN should be able to use hash 
aggregation in some cases, for improved performance.

When a GROUP BY follows a SORT_MERGE_JOIN, the GROUP BY does not use hash 
aggregation. It instead performs a CLIENT SORT followed by a CLIENT AGGREGATE. 
The performance can be improved if (a) the GROUP BY output does not need to be 
sorted, and (b) the GROUP BY input is large enough and has low cardinality.

The hash aggregation can initially be a hint. Here is an example from Phoenix 
4.13.1 that would benefit from hash aggregation if the GROUP BY input is large 
with low cardinality.

CREATE TABLE unsalted (
    keyA BIGINT NOT NULL,
    keyB BIGINT NOT NULL,
    val SMALLINT,
    CONSTRAINT pk PRIMARY KEY (keyA, keyB)
);

EXPLAIN
SELECT /*+ USE_SORT_MERGE_JOIN */
    t1.val v1, t2.val v2, COUNT(*) c
FROM unsalted t1 JOIN unsalted t2
    ON (t1.keyA = t2.keyA)
GROUP BY t1.val, t2.val;

| PLAN | EST_BYTES_READ | EST_ROWS_READ |
| SORT-MERGE-JOIN (INNER) TABLES | null | null |
| CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED | null | null |
| AND | null | null |
| CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER UNSALTED | null | null |
| CLIENT SORTED BY [TO_DECIMAL(T1.VAL), T2.VAL] | null | null |
| CLIENT AGGREGATE INTO DISTINCT ROWS BY [T1.VAL, T2.VAL] | null | null |



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4728) ARRAY_APPEND and ARRAY_REMOVE should work with null column value

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487715#comment-16487715
 ] 

ASF GitHub Bot commented on PHOENIX-4728:
-

GitHub user xjodoin opened a pull request:

https://github.com/apache/phoenix/pull/301

PHOENIX-4728 The upsert select must project tuples



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xjodoin/phoenix PHOENIX-4728

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/301.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #301


commit 0411931b6e25aec2d5825fd6d5b538a145558b6a
Author: Xavier Jodoin 
Date:   2018-05-23T17:29:43Z

PHOENIX-4728 The upsert select must project tuples




> ARRAY_APPEND and ARRAY_REMOVE should work with null column value
> 
>
> Key: PHOENIX-4728
> URL: https://issues.apache.org/jira/browse/PHOENIX-4728
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Xavier Jodoin
>Priority: Major
>
> ARRAY_APPEND and ARRAY_REMOVE should create the array value when it's null
> Test case:
> create table test_array (
> ID VARCHAR NOT NULL,
> MYARRAY VARCHAR ARRAY,
> CONSTRAINT testpk PRIMARY KEY (ID)
> );
> upsert into test_array (id) values ('test');
> upsert into test_array select id,array_append(myarray,'testValue') from 
> test_array;
> select ID,ARRAY_TO_STRING(MYARRAY, ',')  from test_array;
>  
>  
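The desired semantics from the report can be sketched in a few lines: appending to a NULL array should create a one-element array rather than propagate NULL. The function below is a hypothetical illustration of that behavior, not Phoenix's ARRAY_APPEND implementation.

```python
def array_append(arr, value):
    """Desired ARRAY_APPEND behavior per the report: a NULL (None)
    array is treated as empty, so the result is a new one-element
    array instead of NULL."""
    if arr is None:
        return [value]
    return arr + [value]
```

Under these semantics the test case's second upsert would populate MYARRAY with ['testValue'] instead of leaving it NULL.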







[GitHub] phoenix pull request #300: Omid transaction support in Phoenix

2018-05-23 Thread JamesRTaylor
GitHub user JamesRTaylor opened a pull request:

https://github.com/apache/phoenix/pull/300

Omid transaction support in Phoenix



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ohadshacham/phoenix 4.x-HBase-1.3-Omid-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/300.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #300


commit 1039f9d4ca0da5185d8babaa6681459e07da61ad
Author: Ohad Shacham 
Date:   2018-04-03T14:03:55Z

Add Omid support for Phoenix

commit a11bddcacb90d553f6c0fdab37e2c335f7f23ac4
Author: Ohad Shacham 
Date:   2018-05-02T11:55:47Z

Merge remote-tracking branch 'upstream/4.x-HBase-1.3' into 
4.x-HBase-1.3-Omid-2

commit 45e6b0e309d037c03221de249501fa7c4bc651db
Author: Ohad Shacham 
Date:   2018-05-08T07:16:19Z

Remove hard coded Omid

commit 18bea3907ef73c6f842ac5410ea4623be5a36b18
Author: Ohad Shacham 
Date:   2018-05-08T09:28:05Z

Merge remote-tracking branch 'upstream/4.x-HBase-1.3' into 
4.x-HBase-1.3-Omid-2

commit 38ab6f459e6174e150d88a146b4a35d9c1857fb6
Author: Ohad Shacham 
Date:   2018-05-15T08:19:05Z

some merge fixes

commit cbab9b72ee5c62d9512b9472b010f581e3866e21
Author: Ohad Shacham 
Date:   2018-05-22T13:04:56Z

Fix hbase config

commit 7c834994a862f5e2a0edfa560c0a1c4f047383ca
Author: Ohad Shacham 
Date:   2018-05-22T18:30:01Z

Partially revert the following commits.


https://github.com/ohadshacham/phoenix/commit/45e6b0e309d037c03221de249501fa7c4bc651db

https://github.com/ohadshacham/phoenix/commit/38ab6f459e6174e150d88a146b4a35d9c1857fb6

commit dc6de3ab12ad18e9dc5622521bfaf5a662e4d409
Author: Ohad Shacham 
Date:   2018-05-22T18:32:35Z

Merge commit '2015345a023f0adb59174443ec1328bb1399f11b' into 
4.x-HBase-1.3-Omid-2

commit 9eae4abdbc770381620e23f4793acb279cec2544
Author: Ohad Shacham 
Date:   2018-05-22T18:42:26Z

Change back to TEPHRA what needed for testing TEPHRA

commit 0c5dfd642b3738d2d09aa19f2e6d4cbb97852c20
Author: Ohad Shacham 
Date:   2018-05-23T12:14:37Z

remove unnecessary changes.




---


[GitHub] phoenix pull request #299: 4.x h base 1.3 omid 2

2018-05-23 Thread JamesRTaylor
Github user JamesRTaylor closed the pull request at:

https://github.com/apache/phoenix/pull/299


---


[jira] [Commented] (PHOENIX-2314) Cannot prepare parameterized statement with a 'like' predicate

2018-05-23 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487656#comment-16487656
 ] 

Josh Elser commented on PHOENIX-2314:
-

Yeah, I'm going to try to :)

> Cannot prepare parameterized statement with a 'like' predicate
> --
>
> Key: PHOENIX-2314
> URL: https://issues.apache.org/jira/browse/PHOENIX-2314
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
> Environment: Using Fiddler or cURL to communicate with a Phoenix 
> 4.5.2 queryserver using Avatica wire protocol
>Reporter: Kevin Liew
>Assignee: Kevin Liew
>Priority: Major
>  Labels: avatica, phoenix
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2314.2.patch, PHOENIX-2314.3.patch, 
> PHOENIX-2314.4.patch, PHOENIX-2314.5.patch, PHOENIX-2314.patch
>
>
> *POST*
> {noformat}
> request: { "request":"prepare", 
> "connectionId":"1646a1b9-334e-4a21-ade8-47c3d0c8e5a3", "sql":"select * from 
> emp where first_name like ?", "maxRowCount":-1 }
> Host: 192.168.203.156:8765
> Content-Length: 0
> {noformat}
> _select * from emp where first_name like ?_
> *RESPONSE*
> {noformat}
> HTTP/1.1 500 org.apache.phoenix.schema.TypeMismatchException: ERROR 203 
> (22005): Type mismatch. BOOLEAN for null
> Date: Wed, 07 Oct 2015 22:42:26 GMT
> Cache-Control: must-revalidate,no-cache,no-store
> Content-Type: text/html; charset=ISO-8859-1
> Content-Length: 368
> Connection: close
> Server: Jetty(9.2.z-SNAPSHOT)
> 
> 
> 
> Error 500 
> 
> 
> HTTP ERROR: 500
> Problem accessing /. Reason:
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): 
> Type mismatch. BOOLEAN for null
> Powered by Jetty://
> 
> 
> {noformat}
> _org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. BOOLEAN for null_





[jira] [Updated] (PHOENIX-4750) Resolve server customizers and provide them to Avatica

2018-05-23 Thread Alex Araujo (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Araujo updated PHOENIX-4750:
-
Summary: Resolve server customizers and provide them to Avatica  (was: 
Resolve server customizers and provide them Avatica)

> Resolve server customizers and provide them to Avatica
> --
>
> Key: PHOENIX-4750
> URL: https://issues.apache.org/jira/browse/PHOENIX-4750
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 5.0.0
>
>
> CALCITE-2284 allows finer grained customization of the underlying Avatica 
> HttpServer.
> Resolve server customizers on the PQS classpath and provide them to the 
> HttpServer builder.





[jira] [Commented] (PHOENIX-4750) Resolve server customizers and provide them Avatica

2018-05-23 Thread Alex Araujo (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487556#comment-16487556
 ] 

Alex Araujo commented on PHOENIX-4750:
--

Placeholder until Avatica 1.12 is released. FYI [~elserj].

> Resolve server customizers and provide them Avatica
> ---
>
> Key: PHOENIX-4750
> URL: https://issues.apache.org/jira/browse/PHOENIX-4750
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 5.0.0
>
>
> CALCITE-2284 allows finer grained customization of the underlying Avatica 
> HttpServer.
> Resolve server customizers on the PQS classpath and provide them to the 
> HttpServer builder.





[jira] [Created] (PHOENIX-4750) Resolve server customizers and provide them Avatica

2018-05-23 Thread Alex Araujo (JIRA)
Alex Araujo created PHOENIX-4750:


 Summary: Resolve server customizers and provide them Avatica
 Key: PHOENIX-4750
 URL: https://issues.apache.org/jira/browse/PHOENIX-4750
 Project: Phoenix
  Issue Type: Improvement
Reporter: Alex Araujo
Assignee: Alex Araujo
 Fix For: 5.0.0


CALCITE-2284 allows finer grained customization of the underlying Avatica 
HttpServer.

Resolve server customizers on the PQS classpath and provide them to the 
HttpServer builder.





[jira] [Commented] (PHOENIX-2314) Cannot prepare parameterized statement with a 'like' predicate

2018-05-23 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487532#comment-16487532
 ] 

James Taylor commented on PHOENIX-2314:
---

Looks like the patch was never committed by [~kliew]. Patch needs to be rebased 
again. Will you rebase and commit, [~elserj]?

> Cannot prepare parameterized statement with a 'like' predicate
> --
>
> Key: PHOENIX-2314
> URL: https://issues.apache.org/jira/browse/PHOENIX-2314





[jira] [Commented] (PHOENIX-4749) Allow SPNEGO to be disabled for client auth when using Kerberos with HBase

2018-05-23 Thread Alex Araujo (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487492#comment-16487492
 ] 

Alex Araujo commented on PHOENIX-4749:
--

[~elserj], mind taking a look?

> Allow SPNEGO to be disabled for client auth when using Kerberos with HBase
> --
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server forces SPNEGO auth (Kerberos) for clients when Kerberos 
> auth is enabled for HBase.
> Client authentication should be decoupled from HBase authentication. This 
> would allow for other client authentication mechanisms to be plugged in when 
> Kerberos is used for HBase.





[jira] [Updated] (PHOENIX-4749) Allow SPNEGO to be disabled for client auth when using Kerberos with HBase

2018-05-23 Thread Alex Araujo (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Araujo updated PHOENIX-4749:
-
Attachment: PHOENIX-4749.patch

> Allow SPNEGO to be disabled for client auth when using Kerberos with HBase
> --
>
> Key: PHOENIX-4749
> URL: https://issues.apache.org/jira/browse/PHOENIX-4749
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Major
> Attachments: PHOENIX-4749.patch
>
>
> Phoenix Query Server forces SPNEGO auth (Kerberos) for clients when Kerberos 
> auth is enabled for HBase.
> Client authentication should be decoupled from HBase authentication. This 
> would allow for other client authentication mechanisms to be plugged in when 
> Kerberos is used for HBase.





[jira] [Created] (PHOENIX-4749) Allow SPNEGO to be disabled for client auth when using Kerberos with HBase

2018-05-23 Thread Alex Araujo (JIRA)
Alex Araujo created PHOENIX-4749:


 Summary: Allow SPNEGO to be disabled for client auth when using 
Kerberos with HBase
 Key: PHOENIX-4749
 URL: https://issues.apache.org/jira/browse/PHOENIX-4749
 Project: Phoenix
  Issue Type: Improvement
Reporter: Alex Araujo
Assignee: Alex Araujo


Phoenix Query Server forces SPNEGO auth (Kerberos) for clients when Kerberos 
auth is enabled for HBase.

Client authentication should be decoupled from HBase authentication. This would 
allow for other client authentication mechanisms to be plugged in when Kerberos 
is used for HBase.





[jira] [Reopened] (PHOENIX-2314) Cannot prepare parameterized statement with a 'like' predicate

2018-05-23 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reopened PHOENIX-2314:
-

[~jamestaylor] we should get this applied to the rest of the 4.x branches 
before 4.14 goes out.

Just putting it on your radar.

> Cannot prepare parameterized statement with a 'like' predicate
> --
>
> Key: PHOENIX-2314
> URL: https://issues.apache.org/jira/browse/PHOENIX-2314





[jira] [Comment Edited] (PHOENIX-2314) Cannot prepare parameterized statement with a 'like' predicate

2018-05-23 Thread Johanes Anggara (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16483987#comment-16483987
 ] 

Johanes Anggara edited comment on PHOENIX-2314 at 5/23/18 7:25 AM:
---

Any suggestion on which version to use, to be able to use a "LIKE ?" 
statement?


was (Author: ranggasama):
Any suggestion which version to use, to be able using LIKE statement?

> Cannot prepare parameterized statement with a 'like' predicate
> --
>
> Key: PHOENIX-2314
> URL: https://issues.apache.org/jira/browse/PHOENIX-2314


