[jira] [Commented] (PHOENIX-6603) Create SYSTEM.TRANSFORM table

2021-12-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17456110#comment-17456110
 ] 

ASF GitHub Bot commented on PHOENIX-6603:
-----------------------------------------

gokceni opened a new pull request #1363:
URL: https://github.com/apache/phoenix/pull/1363


   Co-authored-by: Gokcen Iskender 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Create SYSTEM.TRANSFORM table
> -----------------------------
>
> Key: PHOENIX-6603
> URL: https://issues.apache.org/jira/browse/PHOENIX-6603
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Priority: Major
>
> SYSTEM.TRANSFORM is a bookkeeping table for the transform process



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [phoenix] gokceni opened a new pull request #1363: PHOENIX-6603: Add SYSTEM.TRANSFORM table (#656)

2021-12-08 Thread GitBox


gokceni opened a new pull request #1363:
URL: https://github.com/apache/phoenix/pull/1363


   Co-authored-by: Gokcen Iskender 






[jira] [Commented] (PHOENIX-6603) Create SYSTEM.TRANSFORM table

2021-12-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17456109#comment-17456109
 ] 

ASF GitHub Bot commented on PHOENIX-6603:
-----------------------------------------

gokceni commented on pull request #1360:
URL: https://github.com/apache/phoenix/pull/1360#issuecomment-989456462


   @gjacoby126 @virajjasani 









[GitHub] [phoenix] gokceni commented on pull request #1360: PHOENIX-6603: Add SYSTEM.TRANSFORM table (#656)

2021-12-08 Thread GitBox


gokceni commented on pull request #1360:
URL: https://github.com/apache/phoenix/pull/1360#issuecomment-989456462


   @gjacoby126 @virajjasani 






[jira] [Commented] (PHOENIX-6603) Create SYSTEM.TRANSFORM table

2021-12-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17456108#comment-17456108
 ] 

ASF GitHub Bot commented on PHOENIX-6603:
-----------------------------------------

gokceni closed pull request #1359:
URL: https://github.com/apache/phoenix/pull/1359


   









[jira] [Commented] (PHOENIX-6603) Create SYSTEM.TRANSFORM table

2021-12-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17456107#comment-17456107
 ] 

ASF GitHub Bot commented on PHOENIX-6603:
-----------------------------------------

gokceni commented on pull request #1359:
URL: https://github.com/apache/phoenix/pull/1359#issuecomment-989456235


   @gjacoby126 I am not sure what happened, but there is another PR open with 
this change. I am closing this one and moving on to the other one. @virajjasani 
I will tag you there.









[GitHub] [phoenix] gokceni closed pull request #1359: PHOENIX-6603: Add SYSTEM.TRANSFORM table (#656)

2021-12-08 Thread GitBox


gokceni closed pull request #1359:
URL: https://github.com/apache/phoenix/pull/1359


   






[GitHub] [phoenix] gokceni commented on pull request #1359: PHOENIX-6603: Add SYSTEM.TRANSFORM table (#656)

2021-12-08 Thread GitBox


gokceni commented on pull request #1359:
URL: https://github.com/apache/phoenix/pull/1359#issuecomment-989456235


   @gjacoby126 I am not sure what happened, but there is another PR open with 
this change. I am closing this one and moving on to the other one. @virajjasani 
I will tag you there.






[jira] (PHOENIX-6604) Allow using indexes for wildcard topN queries on salted tables

2021-12-08 Thread Lars Hofhansl (Jira)


[ https://issues.apache.org/jira/browse/PHOENIX-6604 ]


Lars Hofhansl deleted comment on PHOENIX-6604:


was (Author: githubbot):
lhofhansl edited a comment on pull request #1362:
URL: https://github.com/apache/phoenix/pull/1362#issuecomment-989012434


   Sorry, I've been away for a while. What do I need to do to trigger the test 
run and link this PR to the Jira issue?
   
   Edit: looks like I forgot the - in the jira number.




> Allow using indexes for wildcard topN queries on salted tables
> --------------------------------------------------------------
>
> Key: PHOENIX-6604
> URL: https://issues.apache.org/jira/browse/PHOENIX-6604
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 5.1.3
>
> Attachments: 6604-1.5.1.3, 6604.5.1.3
>
>
> Just randomly came across this, playing with TPCH data.
> {code:java}
> CREATE TABLE lineitem (
>  orderkey bigint not null,
>  partkey bigint,
>  suppkey bigint,
>  linenumber integer not null,
>  quantity double,
>  extendedprice double,
>  discount double,
>  tax double,
>  returnflag varchar(1),
>  linestatus varchar(1),
>  shipdate date,
>  commitdate date,
>  receiptdate date,
>  shipinstruct varchar(25),
>  shipmode varchar(10),
>  comment varchar(44)
>  constraint pk primary key(orderkey, linenumber)) 
> IMMUTABLE_ROWS=true,SALT_BUCKETS=4;
> CREATE LOCAL INDEX l_shipdate ON lineitem(shipdate);{code}
> Now:
> {code:java}
>  > explain select * from lineitem order by shipdate limit 1;
> +------------------------------------------------------------------------------------+
> |                                        PLAN                                        |
> +------------------------------------------------------------------------------------+
> | CLIENT 199-CHUNK 8859938 ROWS 2044738843 BYTES PARALLEL 199-WAY FULL SCAN OVER LI  |
> |     SERVER TOP 1 ROW SORTED BY [SHIPDATE]                                          |
> | CLIENT MERGE SORT                                                                  |
> | CLIENT LIMIT 1                                                                     |
> +------------------------------------------------------------------------------------+
> 4 rows selected (6.525 seconds)
> -- SAME COLUMNS!
>  > explain select ORDERKEY, PARTKEY, SUPPKEY, LINENUMBER, QUANTITY, EXTENDEDPRICE, DISCOUNT, TAX, RETURNFLAG, LINESTATUS, SHIPDATE, COMMITDATE, RECEIPTDATE, SHIPINSTRUCT, SHIPMODE, COMMENT from lineitem order by shipdate limit 1;
> +------------------------------------------------------------------------------------+
> |                                                                                    |
> +------------------------------------------------------------------------------------+
> | CLIENT 4-CHUNK 4 ROWS 204 BYTES PARALLEL 4-WAY RANGE SCAN OVER LINEITEM [1]        |
> |     SERVER MERGE [0.PARTKEY, 0.SUPPKEY, 0.QUANTITY, 0.EXTENDEDPRICE, 0.DISCOUNT,   |
> |     SERVER FILTER BY FIRST KEY ONLY                                                |
> |     SERVER 1 ROW LIMIT                                                             |
> | CLIENT MERGE SORT                                                                  |
> | CLIENT 1 ROW LIMIT                                                                 |
> +------------------------------------------------------------------------------------+
> 6 rows selected (2.736 seconds){code}
>  
> The same happens with a covered global index.
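The workaround implied by the two plans is to replace `SELECT *` with the explicit column list, which lets the planner pick the index. A minimal client-side sketch of that expansion (`buildExplicitSelect` is a hypothetical helper, not Phoenix code; in a real client the column names would come from `DatabaseMetaData.getColumns`):

```java
import java.util.Arrays;
import java.util.List;

public class WildcardExpander {
    // Hypothetical helper: expand "SELECT *" into an explicit column list,
    // which (per this report) lets Phoenix consider the index for topN.
    static String buildExplicitSelect(String table, List<String> columns,
                                      String orderBy, int limit) {
        return "SELECT " + String.join(", ", columns)
                + " FROM " + table
                + " ORDER BY " + orderBy
                + " LIMIT " + limit;
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("ORDERKEY", "PARTKEY", "SHIPDATE");
        // Same semantics as "select * ... limit 1", but with columns spelled out
        System.out.println(buildExplicitSelect("lineitem", cols, "shipdate", 1));
    }
}
```

This only papers over the planner issue on the client; the ticket itself is about fixing the wildcard case in Phoenix.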





[jira] [Commented] (PHOENIX-6604) Allow using indexes for wildcard topN queries on salted tables

2021-12-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455891#comment-17455891
 ] 

ASF GitHub Bot commented on PHOENIX-6604:
-----------------------------------------

lhofhansl edited a comment on pull request #1362:
URL: https://github.com/apache/phoenix/pull/1362#issuecomment-989012434


   Sorry, I've been away for a while. What do I need to do to trigger the test 
run and link this PR to the Jira issue?
   
   Edit: looks like I forgot the - in the jira number.









[GitHub] [phoenix] lhofhansl edited a comment on pull request #1362: PHOENIX-6604 Allow using indexes for wildcard topN queries on salted tables

2021-12-08 Thread GitBox


lhofhansl edited a comment on pull request #1362:
URL: https://github.com/apache/phoenix/pull/1362#issuecomment-989012434


   Sorry, I've been away for a while. What do I need to do to trigger the test 
run and link this PR to the Jira issue?
   
   Edit: looks like I forgot the - in the jira number.






[jira] [Commented] (PHOENIX-6604) Allow using indexes for wildcard topN queries on salted tables

2021-12-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455890#comment-17455890
 ] 

ASF GitHub Bot commented on PHOENIX-6604:
-----------------------------------------

lhofhansl edited a comment on pull request #1362:
URL: https://github.com/apache/phoenix/pull/1362#issuecomment-989012434


   Sorry, I've been away for a while. What do I need to do to trigger the test 
run and link this PR to the Jira issue?
   
   Edit: looks like I forgot the - in the jira number.









[GitHub] [phoenix] lhofhansl edited a comment on pull request #1362: PHOENIX-6604 Allow using indexes for wildcard topN queries on salted tables

2021-12-08 Thread GitBox


lhofhansl edited a comment on pull request #1362:
URL: https://github.com/apache/phoenix/pull/1362#issuecomment-989012434


   Sorry, I've been away for a while. What do I need to do to trigger the test 
run and link this PR to the Jira issue?
   
   Edit: looks like I forgot the - in the jira number.






[GitHub] [phoenix] lhofhansl commented on pull request #1362: PHOENIX 6604 Allow using indexes for wildcard topN queries on salted tables

2021-12-08 Thread GitBox


lhofhansl commented on pull request #1362:
URL: https://github.com/apache/phoenix/pull/1362#issuecomment-989012434


   Sorry, I've been away for a while. What do I need to do to trigger the test 
run and link this PR to the Jira issue?






[jira] [Commented] (PHOENIX-6608) DISCUSS: Rethink MapReduce split generation

2021-12-08 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455885#comment-17455885
 ] 

Lars Hofhansl commented on PHOENIX-6608:


What kind of worker? Is that some custom worker with a JDBC client?

An M/R job or Trino job is only planned once, right? So that should not be a 
problem there...?

Hopefully the workers do not need to re-load the stats. That would be another 
bug.

 

> DISCUSS: Rethink MapReduce split generation
> -------------------------------------------
>
> Key: PHOENIX-6608
> URL: https://issues.apache.org/jira/browse/PHOENIX-6608
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Priority: Major
>
> I just ran into an issue with Trino, which uses Phoenix' M/R integration to 
> generate splits for its worker nodes.
> See: [https://github.com/trinodb/trino/issues/10143]
> And a fix: [https://github.com/trinodb/trino/pull/10153]
> In short, the issue is that with large data sizes and guideposts enabled 
> (the default), Phoenix's RoundRobinResultIterator starts scanning as soon as 
> tasks are submitted to the queue. For large datasets (per client) this fills 
> the heap with pre-fetched HBase Result objects.
> The MapReduce (and Spark) integrations presumably have the same issue.
> My proposed solution is that instead of allowing Phoenix to do intra-split 
> parallelism, we create more splits (the fix above groups 20 scans into a 
> split - 20 turned out to be a good number).
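The grouping proposed above amounts to a plain partitioning step. A sketch only (`groupScans` is a hypothetical helper; the linked Trino PR is the authoritative fix):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitGrouping {
    // Partition `scans` into groups of `groupSize` (20 worked well per the
    // report), so each split carries several scans instead of relying on
    // intra-split parallelism that eagerly pre-fetches results into the heap.
    static <T> List<List<T>> groupScans(List<T> scans, int groupSize) {
        List<List<T>> splits = new ArrayList<>();
        for (int i = 0; i < scans.size(); i += groupSize) {
            splits.add(new ArrayList<>(
                    scans.subList(i, Math.min(i + groupSize, scans.size()))));
        }
        return splits;
    }

    public static void main(String[] args) {
        List<Integer> scans = new ArrayList<>();
        for (int i = 0; i < 45; i++) scans.add(i);
        // 45 scans in groups of 20 -> 3 splits (sizes 20, 20, 5)
        System.out.println(SplitGrouping.groupScans(scans, 20).size());
    }
}
```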





[jira] [Commented] (PHOENIX-6605) Cancel the shade relocation on javax.servlet

2021-12-08 Thread Cong Luo (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455153#comment-17455153
 ] 

Cong Luo commented on PHOENIX-6605:
-----------------------------------

[~stoty] Yes. It must be a painful question...

> Cancel the shade relocation on javax.servlet
> --------------------------------------------
>
> Key: PHOENIX-6605
> URL: https://issues.apache.org/jira/browse/PHOENIX-6605
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
> Environment: # Hadoop 3.2.2
>  # HBase 2.4.2
>  # Phoenix 5.1.2
>Reporter: Cong Luo
>Priority: Minor
>
> In 6.0.0, PQS already does a shade relocation on the `javax.servlet` 
> package, but using the test framework (extending 
> org.apache.phoenix.query.BaseTest) we get:
> {code:java}
> java.lang.NoSuchMethodError: 
> org.eclipse.jetty.servlet.ServletHolder.(Lorg/apache/phoenix/shaded/javax/servlet/Servlet;)V
>         at 
> org.apache.phoenix.queryserver.server.customizers.JMXJsonEndpointServerCustomizer.customize(JMXJsonEndpointServerCustomizer.java:53)
>         at 
> org.apache.phoenix.queryserver.server.customizers.JMXJsonEndpointServerCustomizer.customize(JMXJsonEndpointServerCustomizer.java:36)
>         at 
> org.apache.calcite.avatica.server.HttpServer.internalStart(HttpServer.java:232)
>         at 
> org.apache.calcite.avatica.server.HttpServer.start(HttpServer.java:203)
>         at 
> org.apache.phoenix.queryserver.server.QueryServer.run(QueryServer.java:265)
>         at 
> org.apache.phoenix.queryserver.server.QueryServer.run(QueryServer.java:469)
>         at java.lang.Thread.run(Thread.java:748) {code}
> It is recommended to cancel the relocation, to allow the parent project to 
> override the version of `javax.servlet`.
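The `NoSuchMethodError` above appears to be the classic shading mismatch: the customizer references a `ServletHolder` constructor taking the relocated `org.apache.phoenix.shaded.javax.servlet.Servlet`, while the unshaded Jetty on the test classpath only has one taking plain `javax.servlet.Servlet`. A toy check of the relocation prefix (hypothetical illustration, not Phoenix code):

```java
public class RelocationCheck {
    // Returns true if a class name was rewritten by a shade-plugin relocation
    // rule for javax.servlet (the prefix visible in the NoSuchMethodError).
    static boolean isRelocated(String className) {
        return className.startsWith("org.apache.phoenix.shaded.javax.servlet.");
    }

    public static void main(String[] args) {
        // The method descriptor in the error shows the relocated name:
        String fromError = "org.apache.phoenix.shaded.javax.servlet.Servlet";
        // The unshaded Jetty constructor expects the plain name:
        String expectedByJetty = "javax.servlet.Servlet";
        System.out.println(isRelocated(fromError));       // relocated
        System.out.println(isRelocated(expectedByJetty)); // not relocated
    }
}
```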





[jira] [Comment Edited] (PHOENIX-6606) Cannot use float array data type with PQS client

2021-12-08 Thread Cong Luo (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455149#comment-17455149
 ] 

Cong Luo edited comment on PHOENIX-6606 at 12/8/21, 10:45 AM:
--------------------------------------------------------------

[~stoty] I will re-run the test and update this comment.


was (Author: luoc):
[~stoty] I'll re-run the test and update this comment.

> Cannot use float array data type with PQS client
> 
> ------------------------------------------------
> Key: PHOENIX-6606
> URL: https://issues.apache.org/jira/browse/PHOENIX-6606
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Reporter: Cong Luo
>Priority: Minor
>
> Using the PQS client (6.0.0) from an IDE, writing and reading a float 
> array gives:
> case 1 (cannot be written) :
> {code:java}
> Array t_float = conn.createArrayOf("FLOAT", new Float[] { Float.MIN_VALUE, 
> Float.MAX_VALUE });
> pstmt = conn.prepareStatement("upsert into s1.datatype(t_uuid, t_float) 
> values(?, ?)");
> pstmt.setArray(2, t_float);{code}
> {code:java}
> Exception in thread "main" java.sql.SQLException: java.lang.Float cannot be 
> cast to java.lang.Double
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>     at 
> org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:557)
>     at 
> org.apache.calcite.avatica.AvaticaPreparedStatement.executeLargeUpdate(AvaticaPreparedStatement.java:152)
>     at 
> org.apache.calcite.avatica.AvaticaPreparedStatement.executeUpdate(AvaticaPreparedStatement.java:147)
>  {code}
>  
> case 2 (can be written and cannot be read) :
> {code:java}
> pstmt = conn.prepareStatement("upsert into s1.datatype(t_uuid, t_float) 
> values(?, ARRAY[1.0, 2.0])");{code}
> {code:java}
> Exception in thread "main" org.apache.calcite.avatica.AvaticaSqlException: 
> Error -1 (0) : Remote driver error: ClassCastException: (null exception 
> message)     at 
> org.apache.calcite.avatica.Helper.createException(Helper.java:54)     at 
> org.apache.calcite.avatica.Helper.createException(Helper.java:41)     at 
> org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:557)
>      at 
> org.apache.calcite.avatica.AvaticaPreparedStatement.executeQuery(AvaticaPreparedStatement.java:137)
> {code}
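The `java.lang.Float cannot be cast to java.lang.Double` in case 1 is the usual boxed-cast failure: a boxed `Float` can never be cast to `Double`; the conversion has to go through `Number.doubleValue()`. A standalone illustration of the cast mechanics only (no Phoenix or Avatica involved; it does not show where the server performs the cast):

```java
public class BoxedCastDemo {
    // Returns true if casting the boxed value to Double throws, mirroring
    // the ClassCastException reported against the PQS server side.
    static boolean castFails(Object boxed) {
        try {
            Double d = (Double) boxed; // fails for a boxed Float
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        Object boxed = Float.valueOf(1.5f);          // what the client sends
        System.out.println(castFails(boxed));        // the failing path
        // The safe conversion goes through Number instead of a direct cast:
        System.out.println(((Number) boxed).doubleValue());
    }
}
```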





[jira] [Commented] (PHOENIX-6606) Cannot use float array data type with PQS client

2021-12-08 Thread Cong Luo (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455149#comment-17455149
 ] 

Cong Luo commented on PHOENIX-6606:
-----------------------------------

[~stoty] I'll re-run the test and update this comment.






[jira] [Commented] (PHOENIX-6607) Cannot use PreparedStatement in PQS client to write char array

2021-12-08 Thread Cong Luo (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455146#comment-17455146
 ] 

Cong Luo commented on PHOENIX-6607:
-----------------------------------

[~stoty] Okay. I'll take a look (based on avatica 1.19).

> Cannot use PreparedStatement in PQS client to write char array
> --------------------------------------------------------------
>
> Key: PHOENIX-6607
> URL: https://issues.apache.org/jira/browse/PHOENIX-6607
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Reporter: Cong Luo
>Priority: Minor
>
> Using the PQS client (6.0.0) with `PreparedStatement` to set a char array 
> value gives:
> case 1 (use the pstmt) :
> {code:java}
> Array t_char = conn.createArrayOf("CHAR" , new String[] { "a", "b" });
> pstmt = conn.prepareStatement("upsert into s1.datatype(t_uuid, t_char) 
> values(?, ?)");
> pstmt.setArray(2, t_char);{code}
> {code:java}
> Exception in thread "main" org.apache.calcite.avatica.AvaticaSqlException: 
> Error -1 (0) : Remote driver error: NullPointerException: (null exception 
> message)     at 
> org.apache.calcite.avatica.Helper.createException(Helper.java:54)     at 
> org.apache.calcite.avatica.Helper.createException(Helper.java:41)     at 
> org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:557)
>      at 
> org.apache.calcite.avatica.AvaticaPreparedStatement.executeLargeUpdate(AvaticaPreparedStatement.java:152)
>      at 
> org.apache.calcite.avatica.AvaticaPreparedStatement.executeUpdate(AvaticaPreparedStatement.java:147)
> java.lang.NullPointerException
>     at org.apache.phoenix.schema.types.PArrayDataType.toBytes(PArrayDataType.java:142)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:192)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:174)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:161)
>     at org.apache.phoenix.compile.UpsertCompiler$UpdateColumnCompiler.visit(UpsertCompiler.java:871)
>     at org.apache.phoenix.compile.UpsertCompiler$UpdateColumnCompiler.visit(UpsertCompiler.java:855)
>     at org.apache.phoenix.parse.BindParseNode.accept(BindParseNode.java:47)
>     at org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:744)
>     at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:784)
>     at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:770)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:401)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
>     at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:851)
>     at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:254)
>     at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1032)
>     at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1002)
>     at org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>     at org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>     at org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> {code}
>  
> case 2 (use the SQL text):
> {code:java}

[jira] [Commented] (PHOENIX-6606) Cannot use float array data type with PQS client

2021-12-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455098#comment-17455098
 ] 

Istvan Toth commented on PHOENIX-6606:
--

These are possibly two different issues.

1. seems to be an Avatica issue; please re-test with PQS HEAD.

2. seems to be coming from Phoenix itself.
Does "upsert into s1.datatype(t_uuid, t_float) values(?, ARRAY[1.0f, 2.0f])" work?

> Cannot use float array data type with PQS client
> 
>
> Key: PHOENIX-6606
> URL: https://issues.apache.org/jira/browse/PHOENIX-6606
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Reporter: Cong Luo
>Priority: Minor
>
> Using the PQS client (6.0.0) from an IDE to write and read a float array gives:
> case 1 (cannot be written):
> {code:java}
> Array t_float = conn.createArrayOf("FLOAT", new Float[] { Float.MIN_VALUE, Float.MAX_VALUE });
> pstmt = conn.prepareStatement("upsert into s1.datatype(t_uuid, t_float) values(?, ?)");
> pstmt.setArray(2, t_float);{code}
> {code:java}
> Exception in thread "main" java.sql.SQLException: java.lang.Float cannot be cast to java.lang.Double
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>     at org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:557)
>     at org.apache.calcite.avatica.AvaticaPreparedStatement.executeLargeUpdate(AvaticaPreparedStatement.java:152)
>     at org.apache.calcite.avatica.AvaticaPreparedStatement.executeUpdate(AvaticaPreparedStatement.java:147)
> {code}
>  
> case 2 (can be written but cannot be read):
> {code:java}
> pstmt = conn.prepareStatement("upsert into s1.datatype(t_uuid, t_float) values(?, ARRAY[1.0, 2.0])");{code}
> {code:java}
> Exception in thread "main" org.apache.calcite.avatica.AvaticaSqlException: Error -1 (0) : Remote driver error: ClassCastException: (null exception message)
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:54)
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>     at org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:557)
>     at org.apache.calcite.avatica.AvaticaPreparedStatement.executeQuery(AvaticaPreparedStatement.java:137)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
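[Editor's note] Case 1's "java.lang.Float cannot be cast to java.lang.Double" can be reproduced in isolation: the exception message suggests that somewhere along the serialization path a boxed java.lang.Float element is cast directly to java.lang.Double. The sketch below only demonstrates that cast behavior; the class name is hypothetical and this is not the actual Avatica/Phoenix code path.

```java
// Self-contained illustration of the failure mode suggested by the
// exception message above; FloatCastDemo is an illustrative name.
public class FloatCastDemo {
    public static void main(String[] args) {
        // A FLOAT array element travels as a boxed java.lang.Float.
        Object element = Float.valueOf(1.5f);

        // Casting the boxed Float directly to Double fails at runtime:
        // this is the "java.lang.Float cannot be cast to java.lang.Double".
        boolean castFailed = false;
        try {
            Double d = (Double) element;
            System.out.println(d); // never reached
        } catch (ClassCastException e) {
            castFailed = true;
        }

        // Widening through the Number supertype is the safe conversion.
        double widened = ((Number) element).doubleValue();

        System.out.println("castFailed=" + castFailed + ", widened=" + widened);
    }
}
```

If the server-side conversion widened through Number instead of casting, the boxed Float would convert cleanly; whether that is the right fix in Avatica is a separate question.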


[jira] [Commented] (PHOENIX-6607) Cannot use PreparedStatement in PQS client to write char array

2021-12-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455037#comment-17455037
 ] 

Istvan Toth commented on PHOENIX-6607:
--

[~luoc] Could you test the same with PQS HEAD? That one has Avatica 1.19, and I recall seeing a lot of Array fixes in the changelog.

> Cannot use PreparedStatement in PQS client to write char array
> --
>
> Key: PHOENIX-6607
> URL: https://issues.apache.org/jira/browse/PHOENIX-6607
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Reporter: Cong Luo
>Priority: Minor
>
> Using the PQS client (6.0.0) with `PreparedStatement` to set a char array value gives:
> case 1 (using the pstmt):
> {code:java}
> Array t_char = conn.createArrayOf("CHAR", new String[] { "a", "b" });
> pstmt = conn.prepareStatement("upsert into s1.datatype(t_uuid, t_char) values(?, ?)");
> pstmt.setArray(2, t_char);{code}
> {code:java}
> Exception in thread "main" org.apache.calcite.avatica.AvaticaSqlException: Error -1 (0) : Remote driver error: NullPointerException: (null exception message)
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:54)
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>     at org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:557)
>     at org.apache.calcite.avatica.AvaticaPreparedStatement.executeLargeUpdate(AvaticaPreparedStatement.java:152)
>     at org.apache.calcite.avatica.AvaticaPreparedStatement.executeUpdate(AvaticaPreparedStatement.java:147)
> java.lang.NullPointerException
>     at org.apache.phoenix.schema.types.PArrayDataType.toBytes(PArrayDataType.java:142)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:192)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:174)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:161)
>     at org.apache.phoenix.compile.UpsertCompiler$UpdateColumnCompiler.visit(UpsertCompiler.java:871)
>     at org.apache.phoenix.compile.UpsertCompiler$UpdateColumnCompiler.visit(UpsertCompiler.java:855)
>     at org.apache.phoenix.parse.BindParseNode.accept(BindParseNode.java:47)
>     at org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:744)
>     at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:784)
>     at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:770)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:401)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
>     at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:851)
>     at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:254)
>     at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1032)
>     at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1002)
>     at org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>     at org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>     at org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>      at 
> 

[jira] [Commented] (PHOENIX-6607) Cannot use PreparedStatement in PQS client to write char array

2021-12-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455035#comment-17455035
 ] 

Istvan Toth commented on PHOENIX-6607:
--

The next step is to create a test case for this in Avatica, and preferably a fix.

> Cannot use PreparedStatement in PQS client to write char array
> --
>
> Key: PHOENIX-6607
> URL: https://issues.apache.org/jira/browse/PHOENIX-6607
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Reporter: Cong Luo
>Priority: Minor
>
> Using the PQS client (6.0.0) with `PreparedStatement` to set a char array value gives:
> case 1 (using the pstmt):
> {code:java}
> Array t_char = conn.createArrayOf("CHAR", new String[] { "a", "b" });
> pstmt = conn.prepareStatement("upsert into s1.datatype(t_uuid, t_char) values(?, ?)");
> pstmt.setArray(2, t_char);{code}
> {code:java}
> Exception in thread "main" org.apache.calcite.avatica.AvaticaSqlException: Error -1 (0) : Remote driver error: NullPointerException: (null exception message)
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:54)
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>     at org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:557)
>     at org.apache.calcite.avatica.AvaticaPreparedStatement.executeLargeUpdate(AvaticaPreparedStatement.java:152)
>     at org.apache.calcite.avatica.AvaticaPreparedStatement.executeUpdate(AvaticaPreparedStatement.java:147)
> java.lang.NullPointerException
>     at org.apache.phoenix.schema.types.PArrayDataType.toBytes(PArrayDataType.java:142)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:192)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:174)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:161)
>     at org.apache.phoenix.compile.UpsertCompiler$UpdateColumnCompiler.visit(UpsertCompiler.java:871)
>     at org.apache.phoenix.compile.UpsertCompiler$UpdateColumnCompiler.visit(UpsertCompiler.java:855)
>     at org.apache.phoenix.parse.BindParseNode.accept(BindParseNode.java:47)
>     at org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:744)
>     at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:784)
>     at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:770)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:401)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
>     at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:851)
>     at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:254)
>     at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1032)
>     at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1002)
>     at org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>     at org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>     at org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> {code}
>  
> case 2 

[jira] [Commented] (PHOENIX-6607) Cannot use PreparedStatement in PQS client to write char array

2021-12-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455032#comment-17455032
 ] 

Istvan Toth commented on PHOENIX-6607:
--

This looks like an Avatica bug, not something that we can fix in PQS.


> Cannot use PreparedStatement in PQS client to write char array
> --
>
> Key: PHOENIX-6607
> URL: https://issues.apache.org/jira/browse/PHOENIX-6607
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Reporter: Cong Luo
>Priority: Minor
>
> Using the PQS client (6.0.0) with `PreparedStatement` to set a char array value gives:
> case 1 (using the pstmt):
> {code:java}
> Array t_char = conn.createArrayOf("CHAR", new String[] { "a", "b" });
> pstmt = conn.prepareStatement("upsert into s1.datatype(t_uuid, t_char) values(?, ?)");
> pstmt.setArray(2, t_char);{code}
> {code:java}
> Exception in thread "main" org.apache.calcite.avatica.AvaticaSqlException: Error -1 (0) : Remote driver error: NullPointerException: (null exception message)
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:54)
>     at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>     at org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:557)
>     at org.apache.calcite.avatica.AvaticaPreparedStatement.executeLargeUpdate(AvaticaPreparedStatement.java:152)
>     at org.apache.calcite.avatica.AvaticaPreparedStatement.executeUpdate(AvaticaPreparedStatement.java:147)
> java.lang.NullPointerException
>     at org.apache.phoenix.schema.types.PArrayDataType.toBytes(PArrayDataType.java:142)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:192)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:174)
>     at org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:161)
>     at org.apache.phoenix.compile.UpsertCompiler$UpdateColumnCompiler.visit(UpsertCompiler.java:871)
>     at org.apache.phoenix.compile.UpsertCompiler$UpdateColumnCompiler.visit(UpsertCompiler.java:855)
>     at org.apache.phoenix.parse.BindParseNode.accept(BindParseNode.java:47)
>     at org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:744)
>     at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:784)
>     at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:770)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:401)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
>     at org.apache.calcite.avatica.jdbc.JdbcMeta.execute(JdbcMeta.java:851)
>     at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:254)
>     at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1032)
>     at org.apache.calcite.avatica.remote.Service$ExecuteRequest.accept(Service.java:1002)
>     at org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>     at org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>     at org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>     at org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> {code}
>  
> case 2 (use the sql 

[jira] [Commented] (PHOENIX-6605) Cancel the shade relocation on javax.servlet

2021-12-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455026#comment-17455026
 ] 

Istvan Toth commented on PHOENIX-6605:
--

Generally, PQS is a standalone application, so relocating stuff is done to 
avoid conflicts with the phoenix-client JAR.

In the latest Phoenix releases, we DO shade javax.servlet in phoenix-client, so 
what you propose would probably fix the tests, and not break Phoenix.

However, PQS is also supposed to work with older phoenix client versions, where javax.servlet may not be shaded.

I am not sure if this is the right way to fix the tests (but I don't have a better solution either ATM).

> Cancel the shade relocation on javax.servlet
> 
>
> Key: PHOENIX-6605
> URL: https://issues.apache.org/jira/browse/PHOENIX-6605
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
> Environment: # Hadoop 3.2.2
>  # HBase 2.4.2
>  # Phoenix 5.1.2
>Reporter: Cong Luo
>Priority: Minor
>
> In 6.0.0, PQS already shades and relocates the `javax.servlet` package, but using the test framework (extending org.apache.phoenix.query.BaseTest) gives:
> {code:java}
> java.lang.NoSuchMethodError: org.eclipse.jetty.servlet.ServletHolder.&lt;init&gt;(Lorg/apache/phoenix/shaded/javax/servlet/Servlet;)V
>         at org.apache.phoenix.queryserver.server.customizers.JMXJsonEndpointServerCustomizer.customize(JMXJsonEndpointServerCustomizer.java:53)
>         at org.apache.phoenix.queryserver.server.customizers.JMXJsonEndpointServerCustomizer.customize(JMXJsonEndpointServerCustomizer.java:36)
>         at org.apache.calcite.avatica.server.HttpServer.internalStart(HttpServer.java:232)
>         at org.apache.calcite.avatica.server.HttpServer.start(HttpServer.java:203)
>         at org.apache.phoenix.queryserver.server.QueryServer.run(QueryServer.java:265)
>         at org.apache.phoenix.queryserver.server.QueryServer.run(QueryServer.java:469)
>         at java.lang.Thread.run(Thread.java:748) {code}
> It is recommended to cancel the relocation so that the parent project can override the version of `javax.servlet`.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
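[Editor's note] For readers unfamiliar with the relocation under discussion: in maven-shade-plugin terms it looks roughly like the sketch below. This is illustrative only and assumes the plugin's standard relocation syntax; the actual phoenix-queryserver pom may declare it differently.

```xml
<!-- Sketch of a maven-shade-plugin relocation; coordinates and placement
     in the real phoenix-queryserver build may differ. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <!-- This mapping is what rewrites references to javax.servlet.Servlet
             into org.apache.phoenix.shaded.javax.servlet.Servlet, the type
             appearing in the NoSuchMethodError signature above. -->
        <pattern>javax.servlet</pattern>
        <shadedPattern>org.apache.phoenix.shaded.javax.servlet</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

Removing the `<relocation>` entry would leave `javax.servlet` unshaded, which is what the issue proposes; as noted in the comments, that may conflict with older phoenix-client JARs that do not shade it.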