[jira] [Commented] (CALCITE-3277) calcite-avatica-go: panic: proto: can't skip unknown wire type 4

2019-08-21 Thread Francis Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912733#comment-16912733
 ] 

Francis Chuang commented on CALCITE-3277:
-

Which version of calcite-avatica-go are you using? Can you try using master?

> calcite-avatica-go: panic: proto: can't skip unknown wire type 4
> 
>
> Key: CALCITE-3277
> URL: https://issues.apache.org/jira/browse/CALCITE-3277
> Project: Calcite
>  Issue Type: Bug
>  Components: avatica-go
>Reporter: Shurmin Evgeniy
>Assignee: Francis Chuang
>Priority: Critical
> Fix For: avatica-go-5.0.0
>
>
> I can't perform a simple query to Druid using 
> {{github.com/apache/calcite-avatica-go}}.
> Code:
> {code:java}
> package main
> import (
>   "database/sql"
>   "fmt"
>   _ "github.com/apache/calcite-avatica-go/v4"
> )
> func main() {
>   db, err := sql.Open("avatica", 
> "http://<host>:<port>/druid/v2/sql/avatica/")
>   if err != nil { panic(err) }
>   rows, err := db.Query(`SELECT * FROM sys.servers`)
>   if err != nil { panic(err) }
>   defer func() {
>   if err := rows.Close(); err != nil { panic(err) }
>   }()
>   for rows.Next() {
>   var server, host float64
>   err = rows.Scan(&server, &host)
>   if err != nil { panic(err) }
>   fmt.Printf("server: %v, host: %v\n", server, host)
>   }
> }
> {code}
> Console:
> {{panic: proto: can't skip unknown wire type 4}}
>  {{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
>  {{Process finished with exit code 2}}
> Golang:
> {{go version go1.12.7 darwin/amd64}}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3277) calcite-avatica-go: panic: proto: can't skip unknown wire type 4

2019-08-21 Thread Francis Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francis Chuang updated CALCITE-3277:

Fix Version/s: (was: avatica-go-4.0.0)
   avatica-go-5.0.0

> calcite-avatica-go: panic: proto: can't skip unknown wire type 4
> 
>
> Key: CALCITE-3277
> URL: https://issues.apache.org/jira/browse/CALCITE-3277
> Project: Calcite
>  Issue Type: Bug
>  Components: avatica-go
>Reporter: Shurmin Evgeniy
>Assignee: Francis Chuang
>Priority: Critical
> Fix For: avatica-go-5.0.0
>
>
> I can't perform a simple query to Druid using 
> {{github.com/apache/calcite-avatica-go}}.
> Code:
> {code:java}
> package main
> import (
>   "database/sql"
>   "fmt"
>   _ "github.com/apache/calcite-avatica-go/v4"
> )
> func main() {
>   db, err := sql.Open("avatica", 
> "http://<host>:<port>/druid/v2/sql/avatica/")
>   if err != nil { panic(err) }
>   rows, err := db.Query(`SELECT * FROM sys.servers`)
>   if err != nil { panic(err) }
>   defer func() {
>   if err := rows.Close(); err != nil { panic(err) }
>   }()
>   for rows.Next() {
>   var server, host float64
>   err = rows.Scan(&server, &host)
>   if err != nil { panic(err) }
>   fmt.Printf("server: %v, host: %v\n", server, host)
>   }
> }
> {code}
> Console:
> {{panic: proto: can't skip unknown wire type 4}}
>  {{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
>  {{Process finished with exit code 2}}
> Golang:
> {{go version go1.12.7 darwin/amd64}}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3280) Cannot parse query REGEXP_REPLACE in Redshift

2019-08-21 Thread Ryan Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Fu updated CALCITE-3280:
-
Description: 
REGEXP_REPLACE error:
{code:java}
No match found for function signature REGEXP_REPLACE(, , 
){code}
 

Example query:
{code:java}
SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
'([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM public.account 
{code}

  was:
REGEXP_REPLACE error:
{code:java}
No match found for function signature REGEXP_REPLACE(, , 
){code}
 

Example query:

 
{code:java}
SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
'([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM public.account 
{code}


> Cannot parse query REGEXP_REPLACE in Redshift
> -
>
> Key: CALCITE-3280
> URL: https://issues.apache.org/jira/browse/CALCITE-3280
> Project: Calcite
>  Issue Type: Improvement
>Reporter: Ryan Fu
>Priority: Minor
>
> REGEXP_REPLACE error:
> {code:java}
> No match found for function signature REGEXP_REPLACE(, 
> , ){code}
>  
> Example query:
> {code:java}
> SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
> '([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM 
> public.account {code}
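
As an illustrative aside (not Calcite or Redshift code): the string-processing part of the 
example (everything except MD5) can be approximated in plain Java as below. The input value 
is made up, and java.util.regex has no POSIX [[:space:]] class, so \s is substituted for it.

{code:java}
import java.util.regex.Pattern;

public final class RegexpReplaceExample {
  public static void main(String[] args) {
    String name = "Acme Widgets, Inc.";          // hypothetical input value
    String companyId = Pattern.compile("(\\s|,)+([iInNcC]|[lLcC]).*$")
        .matcher(name.toLowerCase())             // LOWER(name)
        .replaceAll("")                          // REGEXP_REPLACE(..., '')
        .trim();                                 // TRIM(BOTH ' ' FROM ...)
    System.out.println(companyId);               // prints: acme widgets
  }
}
{code}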



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3280) Cannot parse query REGEXP_REPLACE in Redshift

2019-08-21 Thread Ryan Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Fu updated CALCITE-3280:
-
Description: 
REGEXP_REPLACE error:
{code:java}
No match found for function signature REGEXP_REPLACE(, , 
){code}
 

Example query:

 
{code:java}
SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
'([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM public.account 
{code}

  was:
REGEXP_REPLACE error:

No match found for function signature REGEXP_REPLACE(, , 
)

 

Example query:

 
{code:java}
SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
'([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM public.account 
{code}


> Cannot parse query REGEXP_REPLACE in Redshift
> -
>
> Key: CALCITE-3280
> URL: https://issues.apache.org/jira/browse/CALCITE-3280
> Project: Calcite
>  Issue Type: Improvement
>Reporter: Ryan Fu
>Priority: Minor
>
> REGEXP_REPLACE error:
> {code:java}
> No match found for function signature REGEXP_REPLACE(, 
> , ){code}
>  
> Example query:
>  
> {code:java}
> SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
> '([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM 
> public.account {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3279) java.lang.ExceptionInInitializerError

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated CALCITE-3279:

Labels: pull-request-available  (was: )

> java.lang.ExceptionInInitializerError
> -
>
> Key: CALCITE-3279
> URL: https://issues.apache.org/jira/browse/CALCITE-3279
> Project: Calcite
>  Issue Type: Bug
>Reporter: xzh_dz
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2019-08-21-23-17-02-049.png, 
> image-2019-08-22-00-01-34-306.png, image-2019-08-22-00-02-56-686.png
>
>
> When I run the SparkRules main method, I get the exception below.
> !image-2019-08-21-23-17-02-049.png|width=653,height=154!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3122) Convert Pig Latin scripts into Calcite logical plan

2019-08-21 Thread Julian Hyde (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912770#comment-16912770
 ] 

Julian Hyde commented on CALCITE-3122:
--

My point is that if we can use a VolcanoPlanner rather than PigRelPlanner then 
we can just use the VolcanoPlanner that is created with the cluster. We don't 
need to call setPlanner at all. And I don't think that PigRelPlanner adds very 
much, so what would it take to get rid of it?

> Convert Pig Latin scripts into Calcite logical plan 
> 
>
> Key: CALCITE-3122
> URL: https://issues.apache.org/jira/browse/CALCITE-3122
> Project: Calcite
>  Issue Type: New Feature
>  Components: core, piglet
>Reporter: Khai Tran
>Assignee: Julian Hyde
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We create an internal Calcite repo at LinkedIn and develop APIs to parse any 
> Pig Latin scripts into Calcite logical plan. The code was tested in nearly 
> ~1000 Pig scripts written at LinkedIn.
> Changes:
> 1. piglet: main conversion code live there, include:
>  * APIs to convert any Pig scripts into RelNode plans or SQL statements
>  * Use Pig Grunt parser to parse Pig Latin scripts into Pig logical plan 
> (DAGs)
>  * Convert Pig schemas into RelDatatype
>  * Traverse through Pig expression plan and convert Pig expressions into 
> RexNodes
>  * Map some basic Pig UDFs to Calcite SQL operators
>  * Build Calcite UDFs for any other Pig UDFs, including UDFs written in both 
> Java and Python
>  * Traverse (DFS) through Pig logical plans to convert each Pig logical nodes 
> to RelNodes
>  * Have an optimizer rule to optimize Pig group/cogroup into Aggregate 
> operators
> 2. core:
>  * Implement other RelNode in Rel2Sql so that Pig can be translated into SQL
>  * Other minor changes in a few other classes to make Pig to Calcite works



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3122) Convert Pig Latin scripts into Calcite logical plan

2019-08-21 Thread Khai Tran (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912767#comment-16912767
 ] 

Khai Tran commented on CALCITE-3122:


[~julianhyde] Actually, I just found another way to resolve it: clearing the 
current rules and setting new rules on the existing planner. The getCost override 
is not needed to pass all the tests (maybe I will need it for other Pig scripts at 
LinkedIn), but let's remove the PigRelPlanner class for now until we need it 
again. The code is much cleaner now.

See 
[https://github.com/apache/calcite/pull/1265/commits/a51268e0b44f1ec875cc323d555ef66438d5adbe]

> Convert Pig Latin scripts into Calcite logical plan 
> 
>
> Key: CALCITE-3122
> URL: https://issues.apache.org/jira/browse/CALCITE-3122
> Project: Calcite
>  Issue Type: New Feature
>  Components: core, piglet
>Reporter: Khai Tran
>Assignee: Julian Hyde
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We create an internal Calcite repo at LinkedIn and develop APIs to parse any 
> Pig Latin scripts into Calcite logical plan. The code was tested in nearly 
> ~1000 Pig scripts written at LinkedIn.
> Changes:
> 1. piglet: main conversion code live there, include:
>  * APIs to convert any Pig scripts into RelNode plans or SQL statements
>  * Use Pig Grunt parser to parse Pig Latin scripts into Pig logical plan 
> (DAGs)
>  * Convert Pig schemas into RelDatatype
>  * Traverse through Pig expression plan and convert Pig expressions into 
> RexNodes
>  * Map some basic Pig UDFs to Calcite SQL operators
>  * Build Calcite UDFs for any other Pig UDFs, including UDFs written in both 
> Java and Python
>  * Traverse (DFS) through Pig logical plans to convert each Pig logical nodes 
> to RelNodes
>  * Have an optimizer rule to optimize Pig group/cogroup into Aggregate 
> operators
> 2. core:
>  * Implement other RelNode in Rel2Sql so that Pig can be translated into SQL
>  * Other minor changes in a few other classes to make Pig to Calcite works



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3122) Convert Pig Latin scripts into Calcite logical plan

2019-08-21 Thread Khai Tran (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912781#comment-16912781
 ] 

Khai Tran commented on CALCITE-3122:


There were two reasons I created PigRelPlanner: to use a custom set of rules, and 
to override the getCost method to set a preference when applying some rules. The 
second reason is not needed to pass all local tests (I might need it sometime for 
other Pig scripts at LinkedIn, but I forget those cases now), and we can use some 
tricks for the first. So, for passing all local tests, PigRelPlanner is not 
needed.

There is only one item left, adding unit tests for ToLogicalConverter, which I 
will complete tomorrow morning. I hope you can go over all the changes again 
since the last review point to see if the code looks good for merging. I'm 
excited to see this in Calcite 1.21.

> Convert Pig Latin scripts into Calcite logical plan 
> 
>
> Key: CALCITE-3122
> URL: https://issues.apache.org/jira/browse/CALCITE-3122
> Project: Calcite
>  Issue Type: New Feature
>  Components: core, piglet
>Reporter: Khai Tran
>Assignee: Julian Hyde
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We create an internal Calcite repo at LinkedIn and develop APIs to parse any 
> Pig Latin scripts into Calcite logical plan. The code was tested in nearly 
> ~1000 Pig scripts written at LinkedIn.
> Changes:
> 1. piglet: main conversion code live there, include:
>  * APIs to convert any Pig scripts into RelNode plans or SQL statements
>  * Use Pig Grunt parser to parse Pig Latin scripts into Pig logical plan 
> (DAGs)
>  * Convert Pig schemas into RelDatatype
>  * Traverse through Pig expression plan and convert Pig expressions into 
> RexNodes
>  * Map some basic Pig UDFs to Calcite SQL operators
>  * Build Calcite UDFs for any other Pig UDFs, including UDFs written in both 
> Java and Python
>  * Traverse (DFS) through Pig logical plans to convert each Pig logical nodes 
> to RelNodes
>  * Have an optimizer rule to optimize Pig group/cogroup into Aggregate 
> operators
> 2. core:
>  * Implement other RelNode in Rel2Sql so that Pig can be translated into SQL
>  * Other minor changes in a few other classes to make Pig to Calcite works



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-2302) Implicit type cast support

2019-08-21 Thread Danny Chan (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912840#comment-16912840
 ] 

Danny Chan commented on CALCITE-2302:
-

[~julianhyde] Does my reply answer your questions? I'm planning to merge this 
PR if there are no more comments in the next 24 hours.

> Implicit type cast support
> --
>
> Key: CALCITE-2302
> URL: https://issues.apache.org/jira/browse/CALCITE-2302
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.17.0
>Reporter: Danny Chan
>Assignee: Danny Chan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Many DBs now support implicit type casts, e.g. SQL Server, Oracle, Hive.
> Implicit type casting is useful in many cases, so we should support 
> it.
> I checked out the Calcite code and found that:
>  # We currently use a validator to validate our operand types [through various 
> kinds of namespaces and scopes]
>  # Most of the validations ultimately go to
> {code:java}
> SqlOperator.validateOperands
> {code}
>  # which will use validation logic defined in corresponding 
> SqlOperandTypeChecker
> What i'm confused about is where should i put the implicit type cast logic 
> in? I figured out 2 ways:
>  # Supply a tool class/rules to add casts into a parsed SqlNode tree which 
> will then go through the validation logic later on.
>  # Unleash the validation logic in kinds of SqlOperandTypeChecker, then 
> modify the RelNode/RexNodes tree converted from a validated SqlNode tree to 
> add in casts through custom RelOptRules.
> So guys, which of the 2 ways should i go, or if there are better way to do 
> this?
> I need your help.
>  
> Updated 18-05-30:
> Hi guys, i have made a PR in 
> [CALCITE-2302|https://github.com/apache/calcite/pull/706]
> This is design doc: [Calcite Implicit Type Cast 
> Design|https://docs.google.com/document/d/1g2RUnLXyp_LjUlO-wbblKuP5hqEu3a_2Mt2k4dh6RwU/edit?usp=sharing].
> This is the conversion types mapping: [Conversion Types 
> Mapping|https://docs.google.com/spreadsheets/d/1GhleX5h5W8-kJKh7NMJ4vtoE78pwfaZRJl88ULX_MgU/edit?usp=sharing].
> I really appreciate your suggestions, thx.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-2973) Allow theta joins that have equi conditions to be executed using a hash join algorithm

2019-08-21 Thread Lai Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912868#comment-16912868
 ] 

Lai Zhou commented on CALCITE-2973:
---

[~rubenql], [~hyuan], [~julianhyde], [~danny0405], the PR is ready; would 
someone help to review it?

> Allow theta joins that have equi conditions to be executed using a hash join 
> algorithm
> --
>
> Key: CALCITE-2973
> URL: https://issues.apache.org/jira/browse/CALCITE-2973
> Project: Calcite
>  Issue Type: New Feature
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Lai Zhou
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Currently the EnumerableMergeJoinRule only supports an inner equi-join.
> If users run a theta-join query over a large dataset (such as 1*1), 
> the nested-loop join process will take dozens of times longer than the 
> sort-merge join process.
> So if we can apply a merge-join or hash-join rule to a theta join, it will 
> improve performance greatly.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3252) Add CONVERT_TIMEZONE, TO_DATE, TO_TIMESTAMP functions to Redshift operator library

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde updated CALCITE-3252:
-
Description: 
Syntax error while parsing CONVERT_TIMEZONE, TO_DATE, TO_TIMESTAMP functions on 
Redshift. Example queries that error for these functions:
{code:java}
SELECT CONVERT_TIMEZONE('UTC', 'America/Los_Angeles', CAST('2019-01-01 
01:00:00' AS TIMESTAMP) FROM mytable {code}
{code:java}
SELECT TO_DATE('2019-01-01', '-MM-DD') FROM mytable {code}
{code:java}
SELECT TO_TIMESTAMP('2019-01-01 01:00:00', '-MM-DD HH:MM:SS') FROM mytable 
{code}
With errors like:
{code:java}
No match found for function signature CONVERT_TIMEZONE(, 
, )
{code}
These are valid in Redshift and Postgres, except for CONVERT_TIMEZONE, which I 
believe is only valid on Redshift.
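
As an illustrative aside (made-up class and method names, not Calcite's 
implementation), the first example is expected to behave like this java.time 
computation:

{code:java}
import java.time.LocalDateTime;
import java.time.ZoneId;

public final class ConvertTimezoneExample {
  // CONVERT_TIMEZONE(from, to, ts): interpret ts in one zone, render it in another.
  static LocalDateTime convertTimezone(String from, String to, LocalDateTime ts) {
    return ts.atZone(ZoneId.of(from))
        .withZoneSameInstant(ZoneId.of(to))
        .toLocalDateTime();
  }

  public static void main(String[] args) {
    LocalDateTime ts = LocalDateTime.parse("2019-01-01T01:00:00");
    // Prints 2018-12-31T17:00 (America/Los_Angeles is UTC-8 in January).
    System.out.println(convertTimezone("UTC", "America/Los_Angeles", ts));
  }
}
{code}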

 

  was:
Example queries that error for these functions:
{code:java}
SELECT CONVERT_TIMEZONE('UTC', 'America/Los_Angeles', CAST('2019-01-01 
01:00:00' AS TIMESTAMP) FROM mytable {code}
{code:java}
SELECT TO_DATE('2019-01-01', '-MM-DD') FROM mytable {code}
{code:java}
SELECT TO_TIMESTAMP('2019-01-01 01:00:00', '-MM-DD HH:MM:SS') FROM mytable 
{code}
With errors like:
{code:java}
No match found for function signature CONVERT_TIMEZONE(, 
, )
{code}
These are valid in Redshift and Postgres, except for CONVERT_TIMEZONE, which I 
believe is only valid on Redshift.

 


> Add CONVERT_TIMEZONE, TO_DATE, TO_TIMESTAMP functions to Redshift operator 
> library
> --
>
> Key: CALCITE-3252
> URL: https://issues.apache.org/jira/browse/CALCITE-3252
> Project: Calcite
>  Issue Type: Bug
>Reporter: Lindsey Meyer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Syntax error while parsing CONVERT_TIMEZONE, TO_DATE, TO_TIMESTAMP functions 
> on Redshift. Example queries that error for these functions:
> {code:java}
> SELECT CONVERT_TIMEZONE('UTC', 'America/Los_Angeles', CAST('2019-01-01 
> 01:00:00' AS TIMESTAMP) FROM mytable {code}
> {code:java}
> SELECT TO_DATE('2019-01-01', '-MM-DD') FROM mytable {code}
> {code:java}
> SELECT TO_TIMESTAMP('2019-01-01 01:00:00', '-MM-DD HH:MM:SS') FROM 
> mytable 
> {code}
> With errors like:
> {code:java}
> No match found for function signature CONVERT_TIMEZONE(, 
> , )
> {code}
> These are valid in Redshift and Postgres, except for CONVERT_TIMEZONE, which 
> I believe is only valid on Redshift.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3252) Add CONVERT_TIMEZONE, TO_DATE, TO_TIMESTAMP functions to Redshift operator library

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde updated CALCITE-3252:
-
Summary: Add CONVERT_TIMEZONE, TO_DATE, TO_TIMESTAMP functions to Redshift 
operator library  (was: Syntax error while parsing CONVERT_TIMEZONE, TO_DATE, 
TO_TIMESTAMP functions on Redshift)

> Add CONVERT_TIMEZONE, TO_DATE, TO_TIMESTAMP functions to Redshift operator 
> library
> --
>
> Key: CALCITE-3252
> URL: https://issues.apache.org/jira/browse/CALCITE-3252
> Project: Calcite
>  Issue Type: Bug
>Reporter: Lindsey Meyer
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Example queries that error for these functions:
> {code:java}
> SELECT CONVERT_TIMEZONE('UTC', 'America/Los_Angeles', CAST('2019-01-01 
> 01:00:00' AS TIMESTAMP) FROM mytable {code}
> {code:java}
> SELECT TO_DATE('2019-01-01', '-MM-DD') FROM mytable {code}
> {code:java}
> SELECT TO_TIMESTAMP('2019-01-01 01:00:00', '-MM-DD HH:MM:SS') FROM 
> mytable 
> {code}
> With errors like:
> {code:java}
> No match found for function signature CONVERT_TIMEZONE(, 
> , )
> {code}
> These are valid in Redshift and Postgres, except for CONVERT_TIMEZONE, which 
> I believe is only valid on Redshift.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-2979) Add a batch-based nested loop join algorithm

2019-08-21 Thread Andrei Sereda (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912716#comment-16912716
 ] 

Andrei Sereda commented on CALCITE-2979:


This is a very useful feature. Thank you, Khawla.

+1 for releasing it in 1.21.0.

We'll try it out internally.

> Add a batch-based nested loop join algorithm
> 
>
> Key: CALCITE-2979
> URL: https://issues.apache.org/jira/browse/CALCITE-2979
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Stamatis Zampetakis
>Assignee: Khawla Mouhoubi
>Priority: Major
>  Labels: performance, pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Currently, Calcite provides a tuple-based nested loop join algorithm 
> implemented through EnumerableCorrelate and EnumerableDefaults.correlateJoin. 
> This means that for each tuple of the outer relation we probe (set variables) 
> in the inner relation.
> The goal of this issue is to add new algorithm (or extend the correlateJoin 
> method) which first gathers blocks (batches) of tuples from the outer 
> relation and then probes the inner relation once per block.
> There are cases (eg., indexes) where the inner relation can be accessed by 
> more than one value which can greatly improve the performance in particular 
> when the outer relation is big.
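
To make the batching idea concrete, here is a minimal, self-contained sketch over 
plain Java lists (the class and method names are made up for illustration; this is 
not the EnumerableCorrelate/EnumerableDefaults code):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

public final class BatchNestedLoopJoinSketch {
  // Gather a batch of outer rows, then scan the inner relation once per batch
  // instead of once per outer row.
  static <L, R> List<String> join(List<L> outer, List<R> inner,
      BiPredicate<L, R> condition, int batchSize) {
    List<String> results = new ArrayList<>();
    for (int start = 0; start < outer.size(); start += batchSize) {
      List<L> batch = outer.subList(start, Math.min(start + batchSize, outer.size()));
      for (R right : inner) {          // one inner scan serves the whole batch
        for (L left : batch) {
          if (condition.test(left, right)) {
            results.add(left + "|" + right);
          }
        }
      }
    }
    return results;
  }

  public static void main(String[] args) {
    // With batchSize = 2, the inner list is scanned twice instead of four times.
    System.out.println(join(List.of(1, 2, 3, 4), List.of(2, 3, 5), Integer::equals, 2));
    // prints: [2|2, 3|3]
  }
}
{code}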



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3122) Convert Pig Latin scripts into Calcite logical plan

2019-08-21 Thread Khai Tran (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912743#comment-16912743
 ] 

Khai Tran commented on CALCITE-3122:


[~julianhyde] Your commit may not solve the problem. The issue is this:

If we set `planner = 
PigRelPlanner.createPlanner(builder.getCluster().getPlanner(), rules)`, then 
all the tests fail with this exception:

`java.lang.AssertionError: Relational expression LogicalUnion#106 belongs to a 
different planner than is currently being used.
 at 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1633)`

I don't know why we need that check, but that's the root of all the issues:

`if (rel.getCluster().getPlanner() != this) {
  throw new AssertionError("Relational expression " + rel
      + " belongs to a different planner than is currently being used.");
}`

> Convert Pig Latin scripts into Calcite logical plan 
> 
>
> Key: CALCITE-3122
> URL: https://issues.apache.org/jira/browse/CALCITE-3122
> Project: Calcite
>  Issue Type: New Feature
>  Components: core, piglet
>Reporter: Khai Tran
>Assignee: Julian Hyde
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We create an internal Calcite repo at LinkedIn and develop APIs to parse any 
> Pig Latin scripts into Calcite logical plan. The code was tested in nearly 
> ~1000 Pig scripts written at LinkedIn.
> Changes:
> 1. piglet: main conversion code live there, include:
>  * APIs to convert any Pig scripts into RelNode plans or SQL statements
>  * Use Pig Grunt parser to parse Pig Latin scripts into Pig logical plan 
> (DAGs)
>  * Convert Pig schemas into RelDatatype
>  * Traverse through Pig expression plan and convert Pig expressions into 
> RexNodes
>  * Map some basic Pig UDFs to Calcite SQL operators
>  * Build Calcite UDFs for any other Pig UDFs, including UDFs written in both 
> Java and Python
>  * Traverse (DFS) through Pig logical plans to convert each Pig logical nodes 
> to RelNodes
>  * Have an optimizer rule to optimize Pig group/cogroup into Aggregate 
> operators
> 2. core:
>  * Implement other RelNode in Rel2Sql so that Pig can be translated into SQL
>  * Other minor changes in a few other classes to make Pig to Calcite works



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3272) TUMBLE Table Value Function

2019-08-21 Thread ShuMing Li (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912864#comment-16912864
 ] 

ShuMing Li commented on CALCITE-3272:
-

It seems there is something wrong with the `beam sql` link.

> TUMBLE Table Value Function
> ---
>
> Key: CALCITE-3272
> URL: https://issues.apache.org/jira/browse/CALCITE-3272
> Project: Calcite
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Priority: Major
>
> Define a builtin TVF: Tumble (data , timecol , dur, [ offset ])
> The return value of Tumble is a relation that includes all columns of data as 
> well as additional event time columns wstart and wend.
> Examples of TUMBLE TVF are (from https://s.apache.org/streaming-beam-sql):
> 8:21> SELECT * FROM Bid;
> --------------------------
> | bidtime | price | item |
> --------------------------
> | 8:07    | $2    | A    |
> | 8:11    | $3    | B    |
> | 8:05    | $4    | C    |
> | 8:09    | $5    | D    |
> | 8:13    | $1    | E    |
> | 8:17    | $6    | F    |
> --------------------------
> 8:21> SELECT *
>   FROM Tumble (
>     data    => TABLE Bid ,
>     timecol => DESCRIPTOR ( bidtime ) ,
>     dur     => INTERVAL '10' MINUTES ,
>     offset  => INTERVAL '0' MINUTES );
> ------------------------------------------
> | wstart | wend | bidtime | price | item |
> ------------------------------------------
> | 8:00   | 8:10 | 8:07    | $2    | A    |
> | 8:10   | 8:20 | 8:11    | $3    | B    |
> | 8:00   | 8:10 | 8:05    | $4    | C    |
> | 8:00   | 8:10 | 8:09    | $5    | D    |
> | 8:10   | 8:20 | 8:13    | $1    | E    |
> | 8:10   | 8:20 | 8:17    | $6    | F    |
> ------------------------------------------
> 8:21> SELECT MAX ( wstart ) , wend , SUM ( price )
>   FROM Tumble (
>     data    => TABLE ( Bid ) ,
>     timecol => DESCRIPTOR ( bidtime ) ,
>     dur     => INTERVAL '10' MINUTES )
>   GROUP BY wend;
> -------------------------
> | wstart | wend | price |
> -------------------------
> | 8:00   | 8:10 | $11   |
> | 8:10   | 8:20 | $10   |
> -------------------------



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3278) Simplify the use to translate RexNode to Expression for evaluating

2019-08-21 Thread Wang Yanlin (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912870#comment-16912870
 ] 

Wang Yanlin commented on CALCITE-3278:
--

[~rubenql], thanks for the reminder.
 This one is a small improvement that simplifies the use of the translator by 
adding a new function.
 I checked CALCITE-3224; it is a refactoring of the implementation.
 Currently, it does not interfere with or supersede this one, but there may be 
some conflicts in the code. 
 I will close this one if CALCITE-3224 changes and supersedes it.

> Simplify the use to translate RexNode to Expression for evaluating
> --
>
> Key: CALCITE-3278
> URL: https://issues.apache.org/jira/browse/CALCITE-3278
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Reporter: Wang Yanlin
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  The method *forAggregation* of *RexToLixTranslator* is designed for 
> translating aggregate functions, and takes some parameters that we do not 
> actually need if we just want to translate a single RexNode. 
> We lack a more general-purpose way to get an instance of 
> RexToLixTranslator. 
> Also, the translated expression is a *ParameterExpression*, which is not 
> suitable for evaluation. When we evaluate it, we get an exception like this:
> {code:java}
> java.lang.RuntimeException: parameter v not on stack
>   at org.apache.calcite.linq4j.tree.Evaluator.peek(Evaluator.java:51)
>   at 
> org.apache.calcite.linq4j.tree.ParameterExpression.evaluate(ParameterExpression.java:55)
>   at 
> org.apache.calcite.linq4j.tree.GotoStatement.evaluate(GotoStatement.java:97)
>   at 
> org.apache.calcite.linq4j.tree.BlockStatement.evaluate(BlockStatement.java:83)
>   at org.apache.calcite.linq4j.tree.Evaluator.evaluate(Evaluator.java:55)
>   at 
> org.apache.calcite.linq4j.tree.FunctionExpression.lambda$compile$0(FunctionExpression.java:87)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslatorTest.testRawTranslateRexNode(RexToLixTranslatorTest.java:57)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-2979) Add a batch-based nested loop join algorithm

2019-08-21 Thread Stamatis Zampetakis (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stamatis Zampetakis updated CALCITE-2979:
-
Summary: Add a batch-based nested loop join algorithm  (was: Add a 
block-based nested loop join algorithm)

> Add a batch-based nested loop join algorithm
> 
>
> Key: CALCITE-2979
> URL: https://issues.apache.org/jira/browse/CALCITE-2979
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Stamatis Zampetakis
>Assignee: Khawla Mouhoubi
>Priority: Major
>  Labels: performance, pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Currently, Calcite provides a tuple-based nested loop join algorithm 
> implemented through EnumerableCorrelate and EnumerableDefaults.correlateJoin. 
> This means that for each tuple of the outer relation we probe (set variables) 
> in the inner relation.
> The goal of this issue is to add new algorithm (or extend the correlateJoin 
> method) which first gathers blocks (batches) of tuples from the outer 
> relation and then probes the inner relation once per block.
> There are cases (eg., indexes) where the inner relation can be accessed by 
> more than one value which can greatly improve the performance in particular 
> when the outer relation is big.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3122) Convert Pig Latin scripts into Calcite logical plan

2019-08-21 Thread Khai Tran (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912744#comment-16912744
 ] 

Khai Tran commented on CALCITE-3122:


"Why did you comment out {{validateGroupList}}?" => because Pig Grunt Parser 
already validates the plan, we dont need to do it again here.

> Convert Pig Latin scripts into Calcite logical plan 
> 
>
> Key: CALCITE-3122
> URL: https://issues.apache.org/jira/browse/CALCITE-3122
> Project: Calcite
>  Issue Type: New Feature
>  Components: core, piglet
>Reporter: Khai Tran
>Assignee: Julian Hyde
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We create an internal Calcite repo at LinkedIn and develop APIs to parse any 
> Pig Latin scripts into Calcite logical plan. The code was tested in nearly 
> ~1000 Pig scripts written at LinkedIn.
> Changes:
> 1. piglet: main conversion code live there, include:
>  * APIs to convert any Pig scripts into RelNode plans or SQL statements
>  * Use Pig Grunt parser to parse Pig Latin scripts into Pig logical plan 
> (DAGs)
>  * Convert Pig schemas into RelDatatype
>  * Traverse through Pig expression plan and convert Pig expressions into 
> RexNodes
>  * Map some basic Pig UDFs to Calcite SQL operators
>  * Build Calcite UDFs for any other Pig UDFs, including UDFs written in both 
> Java and Python
>  * Traverse (DFS) through Pig logical plans to convert each Pig logical nodes 
> to RelNodes
>  * Have an optimizer rule to optimize Pig group/cogroup into Aggregate 
> operators
> 2. core:
>  * Implement other RelNode in Rel2Sql so that Pig can be translated into SQL
>  * Other minor changes in a few other classes to make Pig to Calcite works



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3220) In JDBC adapter, when generating SQL for Hive, transform TRIM function to TRIM, LTRIM or RTRIM

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde updated CALCITE-3220:
-
Description: 
Let's assume sql = SELECT TRIM(' str ')

When we use HiveSqlDialect to transform "sql", we expect SELECT TRIM(' str 
'), but get SELECT TRIM(BOTH ' ' FROM ' str '), which is not a valid SQL format 
in Hive.

So maybe HiveSqlDialect's behavior should be changed when transforming the TRIM 
function (see the sketch after this list):
 # {{SELECT TRIM(' str ')  =>  SELECT TRIM(' str ') }}
 # {{SELECT TRIM(BOTH ' ' from ' str ') => SELECT TRIM(' str ')}}
 # {{SELECT TRIM(LEADING ' ' from ' str ') => SELECT LTRIM(' str ')}}
 # {{SELECT TRIM(TRAILING ' ' from ' str ')=>  SELECT RTRIM(' str ') }}
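
A minimal sketch of that mapping on plain strings ({{HiveTrimRewrite}} is a 
hypothetical helper, not the actual HiveSqlDialect implementation):

{code:java}
public final class HiveTrimRewrite {
  // Map the standard TRIM(BOTH/LEADING/TRAILING ' ' FROM s) forms onto
  // Hive's TRIM/LTRIM/RTRIM(s), which only strip whitespace.
  static String rewrite(String flag, String operand) {
    switch (flag) {
      case "LEADING":  return "LTRIM(" + operand + ")";
      case "TRAILING": return "RTRIM(" + operand + ")";
      default:         return "TRIM(" + operand + ")";   // BOTH, or no flag
    }
  }

  public static void main(String[] args) {
    System.out.println(rewrite("BOTH", "' str '"));      // TRIM(' str ')
    System.out.println(rewrite("LEADING", "' str '"));   // LTRIM(' str ')
    System.out.println(rewrite("TRAILING", "' str '"));  // RTRIM(' str ')
  }
}
{code}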


  was:
Let's assume sql = SELECT TRIM(' str ')

When we use HiveSqlDialect and transform "sql", we expect SELECT TRIM(' str 
'),but get SELECT TRIM(BOTH ' ' FROM ' str ') which is incorrect  sql format in 
hive.

So maybe HiveSqlDialect behavior should be changed when transform function trim:
 # {{SELECT TRIM(' str ')  =>  SELECT TRIM(' str ') }}
 # {{SELECT TRIM(BOTH ' ' from ' str ') => SELECT TRIM(' str ')}}
 # {{SELECT TRIM(LEADING ' ' from ' str ') => SELECT LTRIM(' str ')}}
 # {{SELECT TRIM(TRAILING ' ' from ' str ')=>  SELECT RTRIM(' str ') }}

{{This is the linked github pr #1342  
[https://github.com/apache/calcite/pull/1342]}}

 


> In JDBC adapter, when generating SQL for Hive, transform TRIM function to 
> TRIM, LTRIM or RTRIM
> --
>
> Key: CALCITE-3220
> URL: https://issues.apache.org/jira/browse/CALCITE-3220
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Jacky Woo
>Assignee: Julian Hyde
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Let's assume sql = SELECT TRIM(' str ')
> When we use HiveSqlDialect and transform "sql", we expect SELECT TRIM(' str 
> '),but get SELECT TRIM(BOTH ' ' FROM ' str ') which is incorrect  sql format 
> in hive.
> So maybe HiveSqlDialect behavior should be changed when transform function 
> trim:
>  # {{SELECT TRIM(' str ')  =>  SELECT TRIM(' str ') }}
>  # {{SELECT TRIM(BOTH ' ' from ' str ') => SELECT TRIM(' str ')}}
>  # {{SELECT TRIM(LEADING ' ' from ' str ') => SELECT LTRIM(' str ')}}
>  # {{SELECT TRIM(TRAILING ' ' from ' str ')=>  SELECT RTRIM(' str ') }}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3220) In JDBC adapter, when generating SQL for Hive, transform TRIM function to TRIM, LTRIM or RTRIM

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde updated CALCITE-3220:
-
Summary: In JDBC adapter, when generating SQL for Hive, transform TRIM 
function to TRIM, LTRIM or RTRIM  (was: wrong sql format when transforming 
function SUBSTRING to hive sql)

> In JDBC adapter, when generating SQL for Hive, transform TRIM function to 
> TRIM, LTRIM or RTRIM
> --
>
> Key: CALCITE-3220
> URL: https://issues.apache.org/jira/browse/CALCITE-3220
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Jacky Woo
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Let's assume sql = SELECT TRIM(' str ')
> When we use HiveSqlDialect and transform "sql", we expect SELECT TRIM(' str 
> '),but get SELECT TRIM(BOTH ' ' FROM ' str ') which is incorrect  sql format 
> in hive.
> So maybe HiveSqlDialect behavior should be changed when transform function 
> trim:
>  # {{SELECT TRIM(' str ')  =>  SELECT TRIM(' str ') }}
>  # {{SELECT TRIM(BOTH ' ' from ' str ') => SELECT TRIM(' str ')}}
>  # {{SELECT TRIM(LEADING ' ' from ' str ') => SELECT LTRIM(' str ')}}
>  # {{SELECT TRIM(TRAILING ' ' from ' str ')=>  SELECT RTRIM(' str ') }}
> {{This is the linked github pr #1342  
> [https://github.com/apache/calcite/pull/1342]}}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3220) In JDBC adapter, when generating SQL for Hive, transform TRIM function to TRIM, LTRIM or RTRIM

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde updated CALCITE-3220:
-
Fix Version/s: 1.21.0

> In JDBC adapter, when generating SQL for Hive, transform TRIM function to 
> TRIM, LTRIM or RTRIM
> --
>
> Key: CALCITE-3220
> URL: https://issues.apache.org/jira/browse/CALCITE-3220
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Jacky Woo
>Assignee: Julian Hyde
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Let's assume sql = SELECT TRIM(' str ')
> When we use HiveSqlDialect and transform "sql", we expect SELECT TRIM(' str 
> '),but get SELECT TRIM(BOTH ' ' FROM ' str ') which is incorrect  sql format 
> in hive.
> So maybe HiveSqlDialect behavior should be changed when transform function 
> trim:
>  # {{SELECT TRIM(' str ')  =>  SELECT TRIM(' str ') }}
>  # {{SELECT TRIM(BOTH ' ' from ' str ') => SELECT TRIM(' str ')}}
>  # {{SELECT TRIM(LEADING ' ' from ' str ') => SELECT LTRIM(' str ')}}
>  # {{SELECT TRIM(TRAILING ' ' from ' str ')=>  SELECT RTRIM(' str ') }}
> {{This is the linked github pr #1342  
> [https://github.com/apache/calcite/pull/1342]}}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3220) In JDBC adapter, when generating SQL for Hive, transform TRIM function to TRIM, LTRIM or RTRIM

2019-08-21 Thread Julian Hyde (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912752#comment-16912752
 ] 

Julian Hyde commented on CALCITE-3220:
--

Reviewing now.

> In JDBC adapter, when generating SQL for Hive, transform TRIM function to 
> TRIM, LTRIM or RTRIM
> --
>
> Key: CALCITE-3220
> URL: https://issues.apache.org/jira/browse/CALCITE-3220
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Jacky Woo
>Assignee: Julian Hyde
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Let's assume sql = SELECT TRIM(' str ')
> When we use HiveSqlDialect and transform "sql", we expect SELECT TRIM(' str 
> '),but get SELECT TRIM(BOTH ' ' FROM ' str ') which is incorrect  sql format 
> in hive.
> So maybe HiveSqlDialect behavior should be changed when transform function 
> trim:
>  # {{SELECT TRIM(' str ')  =>  SELECT TRIM(' str ') }}
>  # {{SELECT TRIM(BOTH ' ' from ' str ') => SELECT TRIM(' str ')}}
>  # {{SELECT TRIM(LEADING ' ' from ' str ') => SELECT LTRIM(' str ')}}
>  # {{SELECT TRIM(TRAILING ' ' from ' str ')=>  SELECT RTRIM(' str ') }}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (CALCITE-3220) In JDBC adapter, when generating SQL for Hive, transform TRIM function to TRIM, LTRIM or RTRIM

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde reassigned CALCITE-3220:


Assignee: Julian Hyde

> In JDBC adapter, when generating SQL for Hive, transform TRIM function to 
> TRIM, LTRIM or RTRIM
> --
>
> Key: CALCITE-3220
> URL: https://issues.apache.org/jira/browse/CALCITE-3220
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Jacky Woo
>Assignee: Julian Hyde
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Let's assume sql = SELECT TRIM(' str ')
> When we use HiveSqlDialect and transform "sql", we expect SELECT TRIM(' str 
> '),but get SELECT TRIM(BOTH ' ' FROM ' str ') which is incorrect  sql format 
> in hive.
> So maybe HiveSqlDialect behavior should be changed when transform function 
> trim:
>  # {{SELECT TRIM(' str ')  =>  SELECT TRIM(' str ') }}
>  # {{SELECT TRIM(BOTH ' ' from ' str ') => SELECT TRIM(' str ')}}
>  # {{SELECT TRIM(LEADING ' ' from ' str ') => SELECT LTRIM(' str ')}}
>  # {{SELECT TRIM(TRAILING ' ' from ' str ')=>  SELECT RTRIM(' str ') }}
> {{This is the linked github pr #1342  
> [https://github.com/apache/calcite/pull/1342]}}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-2302) Implicit type cast support

2019-08-21 Thread Danny Chan (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912839#comment-16912839
 ] 

Danny Chan commented on CALCITE-2302:
-

For "9/2 returns 4.5", we indeed do the type coercion because we need to coerce 
all the operands of "/" to double. So this change definitely belongs to the 
implicit type coercion scope.
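
As a plain-Java illustration of the point (not Calcite code):

{code:java}
public final class CoercionExample {
  public static void main(String[] args) {
    System.out.println(9 / 2);          // 4   -- integer division, no coercion
    System.out.println((double) 9 / 2); // 4.5 -- both operands coerced to double
  }
}
{code}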

> Implicit type cast support
> --
>
> Key: CALCITE-2302
> URL: https://issues.apache.org/jira/browse/CALCITE-2302
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.17.0
>Reporter: Danny Chan
>Assignee: Danny Chan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Many DBs now support implicit type casts, e.g. SQL Server, Oracle, Hive.
> Implicit type casting is useful in many cases, so we should support 
> it.
> I checked out the Calcite code and found that:
>  # We currently use a validator to validate our operand types [through various 
> kinds of namespaces and scopes]
>  # Most of the validations ultimately go to
> {code:java}
> SqlOperator.validateOperands
> {code}
>  # which will use validation logic defined in corresponding 
> SqlOperandTypeChecker
> What i'm confused about is where should i put the implicit type cast logic 
> in? I figured out 2 ways:
>  # Supply a tool class/rules to add casts into a parsed SqlNode tree which 
> will then go through the validation logic later on.
>  # Unleash the validation logic in kinds of SqlOperandTypeChecker, then 
> modify the RelNode/RexNodes tree converted from a validated SqlNode tree to 
> add in casts through custom RelOptRules.
> So guys, which of the 2 ways should i go, or if there are better way to do 
> this?
> I need your help.
>  
> Updated 18-05-30:
> Hi guys, i have made a PR in 
> [CALCITE-2302|https://github.com/apache/calcite/pull/706]
> This is design doc: [Calcite Implicit Type Cast 
> Design|https://docs.google.com/document/d/1g2RUnLXyp_LjUlO-wbblKuP5hqEu3a_2Mt2k4dh6RwU/edit?usp=sharing].
> This is the conversion types mapping: [Conversion Types 
> Mapping|https://docs.google.com/spreadsheets/d/1GhleX5h5W8-kJKh7NMJ4vtoE78pwfaZRJl88ULX_MgU/edit?usp=sharing].
> I really appreciate your suggestions, thx.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-2979) Add a block-based nested loop join algorithm

2019-08-21 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912715#comment-16912715
 ] 

Stamatis Zampetakis commented on CALCITE-2979:
--

I also had a look in the PR sometime ago and it was in good shape so +1 for 
pushing this to 1.21.0. 

> Add a block-based nested loop join algorithm
> 
>
> Key: CALCITE-2979
> URL: https://issues.apache.org/jira/browse/CALCITE-2979
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Stamatis Zampetakis
>Assignee: Khawla Mouhoubi
>Priority: Major
>  Labels: performance, pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Currently, Calcite provides a tuple-based nested loop join algorithm 
> implemented through EnumerableCorrelate and EnumerableDefaults.correlateJoin. 
> This means that for each tuple of the outer relation we probe (set variables) 
> in the inner relation.
> The goal of this issue is to add new algorithm (or extend the correlateJoin 
> method) which first gathers blocks (batches) of tuples from the outer 
> relation and then probes the inner relation once per block.
> There are cases (eg., indexes) where the inner relation can be accessed by 
> more than one value which can greatly improve the performance in particular 
> when the outer relation is big.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (CALCITE-3277) calcite-avatica-go: panic: proto: can't skip unknown wire type 4

2019-08-21 Thread Francis Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912739#comment-16912739
 ] 

Francis Chuang edited comment on CALCITE-3277 at 8/21/19 10:26 PM:
---

I haven't used druid before, but I noticed the following in the [druid 
docs|https://druid.apache.org/docs/latest/querying/sql]:
 * `druid.sql.avatica.enable` enables JDBC querying using 
`/druid/v2/sql/avatica/` and defaults to true. The Go avatica driver does not 
implement JDBC, rather, it implements protobuf over HTTP.
 * `druid.sql.http.enable` enables JSON over HTTP and defaults to true. As the 
Go avatica driver does not use JSON, it would fail. 

I'd suggest adding a debug statement here: 
[https://github.com/apache/calcite-avatica-go/blob/master/http_client.go#L136] 
, to see the raw response from the server. Something like 
`fmt.Println(response)` would do.

 


was (Author: francischuang):
I haven't used druid before, but I noticed the following in the [druid 
docs|[https://druid.apache.org/docs/latest/querying/sql]]:
 * `druid.sql.avatica.enable` enables JDBC querying using 
`/druid/v2/sql/avatica/` and defaults to true. The Go avatica driver does not 
implement JDBC, rather, it implements protobuf over HTTP.
 * `druid.sql.http.enable` enables JSON over HTTP and defaults to true. As the 
Go avatica driver does not use JSON, it would fail. 

I'd suggest adding a debug statement here: 
[https://github.com/apache/calcite-avatica-go/blob/master/http_client.go#L136] 
, to see the raw response from the server. Something like 
`fmt.Println(response)` would do.

 

> calcite-avatica-go: panic: proto: can't skip unknown wire type 4
> 
>
> Key: CALCITE-3277
> URL: https://issues.apache.org/jira/browse/CALCITE-3277
> Project: Calcite
>  Issue Type: Bug
>  Components: avatica-go
>Reporter: Shurmin Evgeniy
>Assignee: Francis Chuang
>Priority: Critical
> Fix For: avatica-go-5.0.0
>
>
> I can't perform a simple query to Druid using 
> {{github.com/apache/calcite-avatica-go}}.
> Code:
> {code:java}
> package main
> import (
>   "database/sql"
>   "fmt"
>   _ "github.com/apache/calcite-avatica-go/v4"
> )
> func main() {
>   db, err := sql.Open("avatica", 
> "http://<host>:<port>/druid/v2/sql/avatica/")
>   if err != nil { panic(err) }
>   rows, err := db.Query(`SELECT * FROM sys.servers`)
>   if err != nil { panic(err) }
>   defer func() {
>   if err := rows.Close(); err != nil { panic(err) }
>   }()
>   for rows.Next() {
>   var server, host float64
>   err = rows.Scan(&server, &host)
>   if err != nil { panic(err) }
>   fmt.Printf("server: %v, host: %v\n", server, host)
>   }
> }
> {code}
> Console:
> {{panic: proto: can't skip unknown wire type 4}}
>  {{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
>  {{Process finished with exit code 2}}
> Golang:
> {{go version go1.12.7 darwin/amd64}}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3277) calcite-avatica-go: panic: proto: can't skip unknown wire type 4

2019-08-21 Thread Francis Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912739#comment-16912739
 ] 

Francis Chuang commented on CALCITE-3277:
-

I haven't used druid before, but I noticed the following in the [druid 
docs|https://druid.apache.org/docs/latest/querying/sql]:
 * `druid.sql.avatica.enable` enables JDBC querying using 
`/druid/v2/sql/avatica/` and defaults to true. The Go avatica driver does not 
implement JDBC, rather, it implements protobuf over HTTP.
 * `druid.sql.http.enable` enables JSON over HTTP and defaults to true. As the 
Go avatica driver does not use JSON, it would fail. 

I'd suggest adding a debug statement here: 
[https://github.com/apache/calcite-avatica-go/blob/master/http_client.go#L136] 
, to see the raw response from the server. Something like 
`fmt.Println(response)` would do.

 

> calcite-avatica-go: panic: proto: can't skip unknown wire type 4
> 
>
> Key: CALCITE-3277
> URL: https://issues.apache.org/jira/browse/CALCITE-3277
> Project: Calcite
>  Issue Type: Bug
>  Components: avatica-go
>Reporter: Shurmin Evgeniy
>Assignee: Francis Chuang
>Priority: Critical
> Fix For: avatica-go-5.0.0
>
>
> I can't perform a simple query to Druid using 
> {{github.com/apache/calcite-avatica-go}}.
> Code:
> {code:java}
> package main
> import (
>   "database/sql"
>   "fmt"
>   _ "github.com/apache/calcite-avatica-go/v4"
> )
> func main() {
>   db, err := sql.Open("avatica", 
> "http://<host>:<port>/druid/v2/sql/avatica/")
>   if err != nil { panic(err) }
>   rows, err := db.Query(`SELECT * FROM sys.servers`)
>   if err != nil { panic(err) }
>   defer func() {
>   if err := rows.Close(); err != nil { panic(err) }
>   }()
>   for rows.Next() {
>   var server, host float64
>   err = rows.Scan(&server, &host)
>   if err != nil { panic(err) }
>   fmt.Printf("server: %v, host: %v\n", server, host)
>   }
> }
> {code}
> Console:
> {{panic: proto: can't skip unknown wire type 4}}
>  {{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
>  {{Process finished with exit code 2}}
> Golang:
> {{go version go1.12.7 darwin/amd64}}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3247) wrong sql format when transforming function SUBSTRING to hive sql

2019-08-21 Thread Julian Hyde (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912759#comment-16912759
 ] 

Julian Hyde commented on CALCITE-3247:
--

Can you rework the description and commit message to be consistent with 
CALCITE-3220?

> wrong sql format when transforming function SUBSTRING to hive sql
> -
>
> Key: CALCITE-3247
> URL: https://issues.apache.org/jira/browse/CALCITE-3247
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Jacky Woo
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Let's assume sql = SELECT SUBSTRING('ABC', 2)
> When we use HiveSqlDialect to transform "sql", we expect SUBSTRING('ABC', 
> 2), but get SUBSTRING('ABC' FROM 2), which is not a valid SQL format in Hive.
> So maybe HiveSqlDialect's behavior should be changed when transforming the 
> SUBSTRING function:
>  # {{SELECT SUBSTRING('ABC', 2)  =>  SELECT SUBSTRING('ABC', 2)}}
>  # {{SELECT SUBSTRING('ABC', 2, 3)  =>  SELECT SUBSTRING('ABC', 2, 3) }}
>  # {{SELECT SUBSTRING('ABC' FROM 2) => SELECT SUBSTRING('ABC', 2) }}
>  # {{SELECT SUBSTRING('ABC' FROM 2 FOR 3) => SELECT SUBSTRING('ABC', 2, 3) }}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3280) Cannot parse query REGEXP_REPLACE in Redshift

2019-08-21 Thread Ryan Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Fu updated CALCITE-3280:
-
Description: 
REGEXP_REPLACE error:

No match found for function signature REGEXP_REPLACE(, , 
)

 

Example query:

 
{code:java}
SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
'([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM public.account 
{code}

  was:
REGEXP_REPLACE error:

No match found for function signature REGEXP_REPLACE(, , 
)

 

Example query:

 

{{}}
{code:java}

{code}
{{ SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
'([[:space:]]|\\,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM 
public.account}}

 


> Cannot parse query REGEXP_REPLACE in Redshift
> -
>
> Key: CALCITE-3280
> URL: https://issues.apache.org/jira/browse/CALCITE-3280
> Project: Calcite
>  Issue Type: Improvement
>Reporter: Ryan Fu
>Priority: Minor
>
> REGEXP_REPLACE error:
> No match found for function signature REGEXP_REPLACE(, 
> , )
>  
> Example query:
>  
> {code:java}
> SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
> '([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM 
> public.account {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (CALCITE-3280) Cannot parse query REGEXP_REPLACE in Redshift

2019-08-21 Thread Ryan Fu (Jira)
Ryan Fu created CALCITE-3280:


 Summary: Cannot parse query REGEXP_REPLACE in Redshift
 Key: CALCITE-3280
 URL: https://issues.apache.org/jira/browse/CALCITE-3280
 Project: Calcite
  Issue Type: Improvement
Reporter: Ryan Fu


REGEXP_REPLACE error:

No match found for function signature REGEXP_REPLACE(, , 
)

 

Example query:

 

{{}}
{code:java}

{code}
{{ SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
'([[:space:]]|\\,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM 
public.account}}

 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3280) Cannot parse query REGEXP_REPLACE in Redshift

2019-08-21 Thread Ryan Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Fu updated CALCITE-3280:
-
Description: 
REGEXP_REPLACE error:
{code:}
No match found for function signature REGEXP_REPLACE(, , 
){code}
 

Example query:
{code:sql}
SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
'([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM public.account 
{code}

  was:
REGEXP_REPLACE error:
{code:java}
No match found for function signature REGEXP_REPLACE(, , 
){code}
 

Example query:
{code:java}
SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
'([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM public.account 
{code}


> Cannot parse query REGEXP_REPLACE in Redshift
> -
>
> Key: CALCITE-3280
> URL: https://issues.apache.org/jira/browse/CALCITE-3280
> Project: Calcite
>  Issue Type: Improvement
>Reporter: Ryan Fu
>Priority: Minor
>
> REGEXP_REPLACE error:
> {code:}
> No match found for function signature REGEXP_REPLACE(, 
> , ){code}
>  
> Example query:
> {code:sql}
> SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
> '([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM 
> public.account {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3263) Add MD5, SHA1 SQL functions

2019-08-21 Thread Julian Hyde (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912725#comment-16912725
 ] 

Julian Hyde commented on CALCITE-3263:
--

Reviewing now.

> Add MD5, SHA1 SQL functions
> ---
>
> Key: CALCITE-3263
> URL: https://issues.apache.org/jira/browse/CALCITE-3263
> Project: Calcite
>  Issue Type: Improvement
>Reporter: ShuMing Li
>Assignee: Julian Hyde
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> `MD5/SHA1` functions are common UDFs in many SQL engines. We may support them 
> in SQL just like `from_base64`/`to_base64`. 
> h3. A Review of Other Databases
>  * BigQuery : 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#md5]
>  ** Function : MD5(String/Bytes)
>  ** Input : String/Bytes
>  ** Output : Bytes
>  * MySQL : [https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html]
>  ** Function : MD5(String)
>  ** Input : String
>  ** Output : String
>  * Oracle : 
> [https://docs.oracle.com/database/121/SQLRF/functions183.htm#SQLRF55647]
>  ** Function : STANDARD_HASH(expr, method)
>  ** Input : String
>  ** Output : RAW
>  * PostgreSQL : 
> [https://www.postgresql.org/docs/current/functions-string.html 
> |https://www.postgresql.org/docs/current/functions-string.html]
>  ** Function : MD5(String)
>  ** Input : String
>  ** Output : Text
>  * Redshift : [https://docs.aws.amazon.com/redshift/latest/dg/r_MD5.html]
>  ** Function : MD5(String)
>  ** Input : String
>  ** Output : String
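For reference, the MD5(String) behavior surveyed above amounts to hashing the string's bytes and hex-encoding the digest. A hedged, stand-alone Java illustration (not the Calcite implementation; UTF-8 encoding is an assumption):

{code:java}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Example {
  /** Lower-case hex MD5 of a string, as MySQL/Redshift return it. */
  static String md5(String s) throws NoSuchAlgorithmException {
    byte[] digest = MessageDigest.getInstance("MD5")
        .digest(s.getBytes(StandardCharsets.UTF_8));
    StringBuilder sb = new StringBuilder();
    for (byte b : digest) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }

  public static void main(String[] args) throws NoSuchAlgorithmException {
    System.out.println(md5("Apache Calcite"));
  }
}
{code}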



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (CALCITE-3263) Add MD5, SHA1 SQL functions

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde reassigned CALCITE-3263:


Assignee: Julian Hyde

> Add MD5, SHA1 SQL functions
> ---
>
> Key: CALCITE-3263
> URL: https://issues.apache.org/jira/browse/CALCITE-3263
> Project: Calcite
>  Issue Type: Improvement
>Reporter: ShuMing Li
>Assignee: Julian Hyde
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> `MD5/SHA1` functions are common UDFs in many SQL engines. We may support them 
> in SQL just like `from_base64`/`to_base64`. 
> h3. A Review of Other Databases
>  * BigQuery : 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#md5]
>  ** Function : MD5(String/Bytes)
>  ** Input : String/Bytes
>  ** Output : Bytes
>  * MySQL : [https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html]
>  ** Function : MD5(String)
>  ** Input : String
>  ** Output : String
>  * Oracle : 
> [https://docs.oracle.com/database/121/SQLRF/functions183.htm#SQLRF55647]
>  ** Function : STANDARD_HASH(expr, method)
>  ** Input : String
>  ** Output : RAW
>  * PostgreSQL : 
> [https://www.postgresql.org/docs/current/functions-string.html 
> |https://www.postgresql.org/docs/current/functions-string.html]
>  ** Function : MD5(String)
>  ** Input : String
>  ** Output : Text
>  * Redshift : [https://docs.aws.amazon.com/redshift/latest/dg/r_MD5.html]
>  ** Function : MD5(String)
>  ** Input : String
>  ** Output : String



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3263) Add MD5, SHA1 SQL functions

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde updated CALCITE-3263:
-
Fix Version/s: 1.21.0

> Add MD5, SHA1 SQL functions
> ---
>
> Key: CALCITE-3263
> URL: https://issues.apache.org/jira/browse/CALCITE-3263
> Project: Calcite
>  Issue Type: Improvement
>Reporter: ShuMing Li
>Assignee: Julian Hyde
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> `MD5/SHA1` functions are common UDFs in many SQL engines. We may support them 
> in SQL just like `from_base64`/`to_base64`. 
> h3. A Review of Other Databases
>  * BigQuery : 
> [https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#md5]
>  ** Function : MD5(String/Bytes)
>  ** Input : String/Bytes
>  ** Output : Bytes
>  * MySQL : [https://dev.mysql.com/doc/refman/8.0/en/encryption-functions.html]
>  ** Function : MD5(String)
>  ** Input : String
>  ** Output : String
>  * Oracle : 
> [https://docs.oracle.com/database/121/SQLRF/functions183.htm#SQLRF55647]
>  ** Function : STANDARD_HASH(expr, method)
>  ** Input : String
>  ** Output : RAW
>  * PostgreSQL : 
> [https://www.postgresql.org/docs/current/functions-string.html 
> |https://www.postgresql.org/docs/current/functions-string.html]
>  ** Function : MD5(String)
>  ** Input : String
>  ** Output : Text
>  * Redshift : [https://docs.aws.amazon.com/redshift/latest/dg/r_MD5.html]
>  ** Function : MD5(String)
>  ** Input : String
>  ** Output : String



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3280) Cannot parse query REGEXP_REPLACE in Redshift

2019-08-21 Thread ShuMing Li (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912861#comment-16912861
 ] 

ShuMing Li commented on CALCITE-3280:
-

Is somebody working on this? In our production environment, we still need the 
`REGEXP_REPLACE` UDF to parse SQL. If nobody is working on it, may I pick it 
up?

> Cannot parse query REGEXP_REPLACE in Redshift
> -
>
> Key: CALCITE-3280
> URL: https://issues.apache.org/jira/browse/CALCITE-3280
> Project: Calcite
>  Issue Type: Improvement
>Reporter: Ryan Fu
>Priority: Minor
>
> REGEXP_REPLACE error:
> {code:}
> No match found for function signature REGEXP_REPLACE(, 
> , ){code}
>  
> Example query:
> {code:sql}
> SELECT * , MD5(TRIM(BOTH ' ' FROM REGEXP_REPLACE(LOWER(name), 
> '([[:space:]]|,)+([iInNcC]|[lLcC]).*$', ''))) AS company_id FROM 
> public.account {code}
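For context, what the REGEXP_REPLACE call in the example query computes can be approximated in plain Java; a hedged illustration only (the input value is invented, and the POSIX class [[:space:]] has to become \s in java.util.regex):

{code:java}
public class RegexpReplaceExample {
  public static void main(String[] args) {
    // Invented sample input; only the pattern translation matters here.
    String name = "Acme Widgets, Inc.";
    String companyId = name.toLowerCase()
        // POSIX [[:space:]] from the Redshift pattern becomes \s in Java
        .replaceAll("(\\s|,)+([iInNcC]|[lLcC]).*$", "")
        .trim();
    System.out.println(companyId);  // acme widgets
  }
}
{code}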



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (CALCITE-3257) RelMetadataQuery cache is not invalidated when log trace is enabled

2019-08-21 Thread Haisheng Yuan (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haisheng Yuan resolved CALCITE-3257.

Fix Version/s: 1.21.0
   Resolution: Fixed

Fixed in 
https://github.com/apache/calcite/commit/ab97af39bd9438420cb5b212b95b23d9cb798f0d,
 thanks for the PR, [~xndai]!

> RelMetadataQuery cache is not invalidated when log trace is enabled
> ---
>
> Key: CALCITE-3257
> URL: https://issues.apache.org/jira/browse/CALCITE-3257
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Xiening Dai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> To repro -
> 1. Set Log4J log level to TRACE. So planner will dump rel node info every 
> time at rule match 
> (https://github.com/apache/calcite/blob/3124a85b93ff2f1b79484c7bd4cc41835d4f1920/core/src/main/java/org/apache/calcite/plan/volcano/RuleQueue.java#L435)
> 2. Run JdbcTest.testNotExistsCorrelated. Get below exception -
> java.lang.AssertionError: rel 
> [rel#63:EnumerableAggregate.ENUMERABLE.[](input=RelSubset#62,group={0})] has 
> lower cost {131.0 rows, 216.0 cpu, 0.0 io} than best cost {131.5 rows, 216.0 
> cpu, 0.0 io} of subset [rel#60:Subset#4.ENUMERABLE.[]]
>   at org.apache.calcite.util.Litmus$1.fail(Litmus.java:31)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.isValid(VolcanoPlanner.java:889)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:852)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:869)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1928)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:129)
>   at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:236)
>   at 
> org.apache.calcite.rel.convert.ConverterRule.onMatch(ConverterRule.java:141)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:208)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:631)
>   at 
> org.apache.calcite.tools.Programs.lambda$standard$3(Programs.java:286)
>   at 
> org.apache.calcite.tools.Programs$SequenceProgram.run(Programs.java:346)
>   at org.apache.calcite.prepare.Prepare.optimize(Prepare.java:189)
>   at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:314)
>   at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:231)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:638)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:502)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:472)
>   at 
> org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:231)
>   at 
> org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:550)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:675)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:227)
>   at 
> org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:522)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.lambda$returns$1(CalciteAssert.java:1466)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.withConnection(CalciteAssert.java:1398)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.returns(CalciteAssert.java:1464)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.explainMatches(CalciteAssert.java:1561)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.explainContains(CalciteAssert.java:1556)
>   at 
> org.apache.calcite.test.JdbcTest.testNotExistsCorrelated(JdbcTest.java:4562)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> 

[jira] [Resolved] (CALCITE-3111) Add RelBuilder.correlate, and allow custom implementations of Correlate in RelDecorrelator

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde resolved CALCITE-3111.
--
Fix Version/s: 1.21.0
   Resolution: Fixed

Fixed in 
[6f6d|https://github.com/apache/calcite/commit/6f6d03bca7cd97d151033c5d82f24394e229];
 thanks for the PR, [~Juhwan]!

> Add RelBuilder.correlate, and allow custom implementations of Correlate in 
> RelDecorrelator
> --
>
> Key: CALCITE-3111
> URL: https://issues.apache.org/jira/browse/CALCITE-3111
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Reporter: Juhwan Kim
>Assignee: Juhwan Kim
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Currently, RelDecorrelator code only works for LogicalCorrelate. 
> Decorrelating through Calcite would become much more flexible if it allows 
> using custom implementations of Correlate. This would require refactoring all 
> logical rels used in RelDecorrelator to the abstract ones (e.g. 
> LogicalCorrelate -> Correlate).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (CALCITE-3262) Refine doc of SubstitutionVisitor.java

2019-08-21 Thread Haisheng Yuan (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haisheng Yuan resolved CALCITE-3262.

Fix Version/s: 1.21.0
   Resolution: Fixed

Fixed in 
https://github.com/apache/calcite/commit/9fd9c822287751ed1f9e6a10adc3f50b0cea0a54.

> Refine doc of SubstitutionVisitor.java
> --
>
> Key: CALCITE-3262
> URL: https://issues.apache.org/jira/browse/CALCITE-3262
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Reporter: jin xing
>Assignee: jin xing
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Current doc of {{SubstitutionVisitor.java}} says the supported core 
> relational operators are {{@link 
> org.apache.calcite.rel.logical.LogicalTableScan}}, and so on.
> But with {{convertTableAccess=true}} 
> (https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/sql2rel/SqlToRelConverter.java#L5636),
>  it's an {{EnumerableTableScan}} under {{MutableScan}}, which is inconsistent 
> with the doc. 
> What's more, the supporting scope of {{MutableRels}} and {{SubstitutionVisitor}} 
> is not limited to org.apache.calcite.rel.logical.LogicalXXX.
> So I think it might make sense to update/refine the doc to say that the 
> supported core relational operators are 
> {code:java}
>  * {@link org.apache.calcite.rel.core.TableScan},
>  * {@link org.apache.calcite.rel.core.Filter},
>  * {@link org.apache.calcite.rel.core.Project},
>  * {@link org.apache.calcite.rel.core.Join},
>  * {@link org.apache.calcite.rel.core.Union},
>  * {@link org.apache.calcite.rel.core.Aggregate}.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (CALCITE-3122) Convert Pig Latin scripts into Calcite logical plan

2019-08-21 Thread Khai Tran (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912542#comment-16912542
 ] 

Khai Tran edited comment on CALCITE-3122 at 8/21/19 5:40 PM:
-

[~julianhyde] Fixed and pushed the code to address all your comments from 03/Aug. 
My answers to a few other questions:
 "Can you expand/clarify the javadoc of VirtualTable? I don't really understand 
what it is or why you need it" => These are tables needed for constructing the 
Calcite plan from the Pig DAG; the schemas (or row types, to be precise) of 
these tables are obtained by converting Pig schemas into RelDataType. These 
tables are not queryable or scannable; they exist only to represent the table 
schemas used for other transformations on top of them.

The reason I named it VirtualTable and moved it to core is that I need to use 
it later for other use cases at LinkedIn. For example, we parse GraphQL queries 
into Calcite plans and convert them into SparkSQL for batch execution. So we may 
have a full story of online, nearline, and offline convergence with Calcite 
relational algebra as an IR. I may present this during my talk at ApacheCon 
next month.

Anyway, I renamed it to PigTable and moved it back to Piglet for now so that we 
can proceed.

I will work on the two remaining issues (a test for ToLogicalPlan and the 
planner issue) later today.


was (Author: khaitran):
[~julianhyde] Fixed and pushed the code to address all your comments on 03/Aug. 
My answers for a few other questions:
"Can you expand/clarify the javadoc of VirtualTable? I don't really understand 
what it is or why you need it" => These are tables needed for constructing 
Calcite plan from Pig DAG, schemas (or row types, to be precise) of these table 
are obtained by converting Pig schema into RelDataType. These tables are not 
queriable, scannable, just for the sake of represent table schemas used for 
other transformations on top of that.

The reason I named it VirtualTable and moved it to core because I need to use 
it later for other use cases at LinkedIn. For example, we parse GraphQL query 
to Calcite plan and convert it into SparkSQL for batch execution. So we may 
have a full story of online, nearline, and offline convergence with Calcite 
relational algebra as an IR. I may present this during my talk at ApacheCon 
next month.

Anyway, I rename it to PigTable and move it back to Piglet for now so that we 
can proceed.

> Convert Pig Latin scripts into Calcite logical plan 
> 
>
> Key: CALCITE-3122
> URL: https://issues.apache.org/jira/browse/CALCITE-3122
> Project: Calcite
>  Issue Type: New Feature
>  Components: core, piglet
>Reporter: Khai Tran
>Assignee: Julian Hyde
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We create an internal Calcite repo at LinkedIn and develop APIs to parse any 
> Pig Latin scripts into Calcite logical plan. The code was tested in nearly 
> ~1000 Pig scripts written at LinkedIn.
> Changes:
> 1. piglet: main conversion code live there, include:
>  * APIs to convert any Pig scripts into RelNode plans or SQL statements
>  * Use Pig Grunt parser to parse Pig Latin scripts into Pig logical plan 
> (DAGs)
>  * Convert Pig schemas into RelDatatype
>  * Traverse through Pig expression plan and convert Pig expressions into 
> RexNodes
>  * Map some basic Pig UDFs to Calcite SQL operators
>  * Build Calcite UDFs for any other Pig UDFs, including UDFs written in both 
> Java and Python
>  * Traverse (DFS) through Pig logical plans to convert each Pig logical nodes 
> to RelNodes
>  * Have an optimizer rule to optimize Pig group/cogroup into Aggregate 
> operators
> 2. core:
>  * Implement other RelNode in Rel2Sql so that Pig can be translated into SQL
>  * Other minor changes in a few other classes to make Pig to Calcite works



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3111) Add RelBuilder.correlate, and allow custom implementations of Correlate in RelDecorrelator

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde updated CALCITE-3111:
-
Summary: Add RelBuilder.correlate, and allow custom implementations of 
Correlate in RelDecorrelator  (was: Allow custom implementations of Correlate 
in RelDecorrelator)

> Add RelBuilder.correlate, and allow custom implementations of Correlate in 
> RelDecorrelator
> --
>
> Key: CALCITE-3111
> URL: https://issues.apache.org/jira/browse/CALCITE-3111
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Reporter: Juhwan Kim
>Assignee: Juhwan Kim
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Currently, RelDecorrelator code only works for LogicalCorrelate. 
> Decorrelating through Calcite would become much more flexible if it allows 
> using custom implementations of Correlate. This would require refactoring all 
> logical rels used in RelDecorrelator to the abstract ones (e.g. 
> LogicalCorrelate -> Correlate).



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (CALCITE-3235) Add CONCAT function for Redshift

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde resolved CALCITE-3235.
--
Fix Version/s: 1.21.0
   Resolution: Fixed

Fixed in 
[79b97b62|https://github.com/apache/calcite/commit/79b97b628ac43e56873029b72a18e7bec0d1e7db];
 thanks for the PR, [~fib-seq]!

> Add CONCAT function for Redshift
> 
>
> Key: CALCITE-3235
> URL: https://issues.apache.org/jira/browse/CALCITE-3235
> Project: Calcite
>  Issue Type: Improvement
>Reporter: Ryan Fu
>Assignee: Julian Hyde
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Get this error:
> {{No match found for function signature CONCAT(, , 
> ...)}}
> When using CONCAT, e.g.
> {{SELECT CONCAT('a', city) FROM public.aircraft}}
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (CALCITE-3089) Deprecate EquiJoin

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde resolved CALCITE-3089.
--
Resolution: Fixed

Fixed in 
[36e3109f|https://github.com/apache/calcite/commit/36e3109fb524c13ca0a08e3ad585785aa5abc18b];
 thanks for the PR, [~hyuan]!

> Deprecate EquiJoin
> --
>
> Key: CALCITE-3089
> URL: https://issues.apache.org/jira/browse/CALCITE-3089
> Project: Calcite
>  Issue Type: Improvement
>Reporter: Haisheng Yuan
>Assignee: Haisheng Yuan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> EquiJoin should be replaced by Join with equiConds and nonEquiConds (empty if 
> it doesn't have any).
> EquiJoin will not have any subclasses. EnumerableHashJoin, 
> EnumerableMergeJoin and SemiJoin should extend Join directly.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3122) Convert Pig Latin scripts into Calcite logical plan

2019-08-21 Thread Khai Tran (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912542#comment-16912542
 ] 

Khai Tran commented on CALCITE-3122:


[~julianhyde] Fixed and pushed the code to address all your comments from 03/Aug. 
My answers to a few other questions:
"Can you expand/clarify the javadoc of VirtualTable? I don't really understand 
what it is or why you need it" => These are tables needed for constructing the 
Calcite plan from the Pig DAG; the schemas (or row types, to be precise) of 
these tables are obtained by converting Pig schemas into RelDataType. These 
tables are not queryable or scannable; they exist only to represent the table 
schemas used for other transformations on top of them.

The reason I named it VirtualTable and moved it to core is that I need to use 
it later for other use cases at LinkedIn. For example, we parse GraphQL queries 
into Calcite plans and convert them into SparkSQL for batch execution. So we may 
have a full story of online, nearline, and offline convergence with Calcite 
relational algebra as an IR. I may present this during my talk at ApacheCon 
next month.

Anyway, I renamed it to PigTable and moved it back to Piglet for now so that we 
can proceed.

> Convert Pig Latin scripts into Calcite logical plan 
> 
>
> Key: CALCITE-3122
> URL: https://issues.apache.org/jira/browse/CALCITE-3122
> Project: Calcite
>  Issue Type: New Feature
>  Components: core, piglet
>Reporter: Khai Tran
>Assignee: Julian Hyde
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We create an internal Calcite repo at LinkedIn and develop APIs to parse any 
> Pig Latin scripts into Calcite logical plan. The code was tested in nearly 
> ~1000 Pig scripts written at LinkedIn.
> Changes:
> 1. piglet: main conversion code live there, include:
>  * APIs to convert any Pig scripts into RelNode plans or SQL statements
>  * Use Pig Grunt parser to parse Pig Latin scripts into Pig logical plan 
> (DAGs)
>  * Convert Pig schemas into RelDatatype
>  * Traverse through Pig expression plan and convert Pig expressions into 
> RexNodes
>  * Map some basic Pig UDFs to Calcite SQL operators
>  * Build Calcite UDFs for any other Pig UDFs, including UDFs written in both 
> Java and Python
>  * Traverse (DFS) through Pig logical plans to convert each Pig logical nodes 
> to RelNodes
>  * Have an optimizer rule to optimize Pig group/cogroup into Aggregate 
> operators
> 2. core:
>  * Implement other RelNode in Rel2Sql so that Pig can be translated into SQL
>  * Other minor changes in a few other classes to make Pig to Calcite works



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3257) RelMetadataQuery cache is not invalidated when log trace is enabled

2019-08-21 Thread Haisheng Yuan (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912539#comment-16912539
 ] 

Haisheng Yuan commented on CALCITE-3257:


Yes, I do think this is a separate issue with CALCITE-2018. +1 on the change.

> RelMetadataQuery cache is not invalidated when log trace is enabled
> ---
>
> Key: CALCITE-3257
> URL: https://issues.apache.org/jira/browse/CALCITE-3257
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Xiening Dai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> To repro -
> 1. Set Log4J log level to TRACE. So planner will dump rel node info every 
> time at rule match 
> (https://github.com/apache/calcite/blob/3124a85b93ff2f1b79484c7bd4cc41835d4f1920/core/src/main/java/org/apache/calcite/plan/volcano/RuleQueue.java#L435)
> 2. Run JdbcTest.testNotExistsCorrelated. Get below exception -
> java.lang.AssertionError: rel 
> [rel#63:EnumerableAggregate.ENUMERABLE.[](input=RelSubset#62,group={0})] has 
> lower cost {131.0 rows, 216.0 cpu, 0.0 io} than best cost {131.5 rows, 216.0 
> cpu, 0.0 io} of subset [rel#60:Subset#4.ENUMERABLE.[]]
>   at org.apache.calcite.util.Litmus$1.fail(Litmus.java:31)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.isValid(VolcanoPlanner.java:889)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:852)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:869)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1928)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:129)
>   at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:236)
>   at 
> org.apache.calcite.rel.convert.ConverterRule.onMatch(ConverterRule.java:141)
>   at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:208)
>   at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:631)
>   at 
> org.apache.calcite.tools.Programs.lambda$standard$3(Programs.java:286)
>   at 
> org.apache.calcite.tools.Programs$SequenceProgram.run(Programs.java:346)
>   at org.apache.calcite.prepare.Prepare.optimize(Prepare.java:189)
>   at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:314)
>   at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:231)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:638)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:502)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:472)
>   at 
> org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:231)
>   at 
> org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:550)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:675)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:227)
>   at 
> org.apache.calcite.test.CalciteAssert.assertQuery(CalciteAssert.java:522)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.lambda$returns$1(CalciteAssert.java:1466)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.withConnection(CalciteAssert.java:1398)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.returns(CalciteAssert.java:1464)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.explainMatches(CalciteAssert.java:1561)
>   at 
> org.apache.calcite.test.CalciteAssert$AssertQuery.explainContains(CalciteAssert.java:1556)
>   at 
> org.apache.calcite.test.JdbcTest.testNotExistsCorrelated(JdbcTest.java:4562)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 

[jira] [Commented] (CALCITE-2979) Add a block-based nested loop join algorithm

2019-08-21 Thread Julian Hyde (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912517#comment-16912517
 ] 

Julian Hyde commented on CALCITE-2979:
--

+1

> Add a block-based nested loop join algorithm
> 
>
> Key: CALCITE-2979
> URL: https://issues.apache.org/jira/browse/CALCITE-2979
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Stamatis Zampetakis
>Assignee: Khawla Mouhoubi
>Priority: Major
>  Labels: performance, pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Currently, Calcite provides a tuple-based nested loop join algorithm 
> implemented through EnumerableCorrelate and EnumerableDefaults.correlateJoin. 
> This means that for each tuple of the outer relation we probe (set variables) 
> in the inner relation.
> The goal of this issue is to add a new algorithm (or extend the correlateJoin 
> method) which first gathers blocks (batches) of tuples from the outer 
> relation and then probes the inner relation once per block.
> There are cases (e.g., indexes) where the inner relation can be accessed by 
> more than one value at a time, which can greatly improve performance, in 
> particular when the outer relation is big.
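The batching idea above, sketched over in-memory lists (a hedged illustration of the algorithm, not the EnumerableDefaults implementation; names are made up):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

public class BlockNestedLoopJoin {
  /** Joins outer to inner, probing the inner relation once per block of
   * outer tuples instead of once per outer tuple. */
  static <L, R> List<Object[]> join(List<L> outer, List<R> inner,
      BiPredicate<L, R> condition, int blockSize) {
    final List<Object[]> result = new ArrayList<>();
    for (int start = 0; start < outer.size(); start += blockSize) {
      // Gather the next block (batch) of outer tuples.
      final List<L> block =
          outer.subList(start, Math.min(start + blockSize, outer.size()));
      // A single pass over the inner relation serves the whole block.
      for (R right : inner) {
        for (L left : block) {
          if (condition.test(left, right)) {
            result.add(new Object[] {left, right});
          }
        }
      }
    }
    return result;
  }
}
{code}

With an index on the inner side, the per-block probe could fetch all rows matching any key in the block at once, which is where the speed-up described above comes from.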



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (CALCITE-3258) Upgrade jackson-databind from 2.9.9 to 2.9.9.3, and kafka-clients from 2.0.0 to 2.1.1

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde resolved CALCITE-3258.
--
Fix Version/s: 1.21.0
   Resolution: Fixed

Fixed in 
[a71def51|https://github.com/apache/calcite/commit/a71def51dbf3dc18af2e73418ed30e0f2e6addde].

> Upgrade jackson-databind from 2.9.9 to 2.9.9.3, and kafka-clients from 2.0.0 
> to 2.1.1
> -
>
> Key: CALCITE-3258
> URL: https://issues.apache.org/jira/browse/CALCITE-3258
> Project: Calcite
>  Issue Type: Bug
>Reporter: Julian Hyde
>Priority: Major
> Fix For: 1.21.0
>
>
> Upgrade jackson-databind from 2.9.9 to 2.9.9.2.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3258) Upgrade jackson-databind from 2.9.9 to 2.9.9.3, and kafka-clients from 2.0.0 to 2.1.1

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde updated CALCITE-3258:
-
Summary: Upgrade jackson-databind from 2.9.9 to 2.9.9.3, and kafka-clients 
from 2.0.0 to 2.1.1  (was: Upgrade jackson-databind from 2.9.9 to 2.9.9.2)

> Upgrade jackson-databind from 2.9.9 to 2.9.9.3, and kafka-clients from 2.0.0 
> to 2.1.1
> -
>
> Key: CALCITE-3258
> URL: https://issues.apache.org/jira/browse/CALCITE-3258
> Project: Calcite
>  Issue Type: Bug
>Reporter: Julian Hyde
>Priority: Major
>
> Upgrade jackson-databind from 2.9.9 to 2.9.9.2.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (CALCITE-3252) Add CONVERT_TIMEZONE, TO_DATE, TO_TIMESTAMP functions to Redshift operator library

2019-08-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde resolved CALCITE-3252.
--
Fix Version/s: 1.21.0
   Resolution: Fixed

Fixed in 
[e8b08c49|https://github.com/apache/calcite/commit/e8b08c490ab4270945c35f31846cd36b5788cc23];
 thanks for the PR, [~lindseycat]!

> Add CONVERT_TIMEZONE, TO_DATE, TO_TIMESTAMP functions to Redshift operator 
> library
> --
>
> Key: CALCITE-3252
> URL: https://issues.apache.org/jira/browse/CALCITE-3252
> Project: Calcite
>  Issue Type: Bug
>Reporter: Lindsey Meyer
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Syntax error while parsing CONVERT_TIMEZONE, TO_DATE, TO_TIMESTAMP functions 
> on Redshift. Example queries that error for these functions:
> {code:java}
> SELECT CONVERT_TIMEZONE('UTC', 'America/Los_Angeles', CAST('2019-01-01 
> 01:00:00' AS TIMESTAMP)) FROM mytable {code}
> {code:java}
> SELECT TO_DATE('2019-01-01', '-MM-DD') FROM mytable {code}
> {code:java}
> SELECT TO_TIMESTAMP('2019-01-01 01:00:00', '-MM-DD HH:MM:SS') FROM 
> mytable 
> {code}
> With errors like:
> {code:java}
> No match found for function signature CONVERT_TIMEZONE(, 
> , )
> {code}
> These are valid in Redshift and Postgres, except for CONVERT_TIMEZONE, which 
> I believe is only valid on Redshift.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (CALCITE-3138) RelStructuredTypeFlattener doesn't restructure ROW type fields

2019-08-21 Thread Volodymyr Vysotskyi (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Volodymyr Vysotskyi resolved CALCITE-3138.
--
Resolution: Fixed

Fixed in 
[1e62d3d|https://github.com/apache/calcite/commit/1e62d3d64fc217d14016702237b4f8d56b3683f2].

Thanks, [~IhorHuzenko], for fixing this issue and [~danny0405] for the review!

> RelStructuredTypeFlattener doesn't restructure ROW type fields 
> ---
>
> Key: CALCITE-3138
> URL: https://issues.apache.org/jira/browse/CALCITE-3138
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Haisheng Yuan
>Assignee: Igor Guzenko
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
> Attachments: ROW_repro.patch
>
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> 1) RelStructuredTypeFlattener.restructureFields(structType) doesn't support 
> the ROW type. However, the ROW type is flattened by 
> RelStructuredTypeFlattener just like a struct. So when the user queries one 
> column with the complex type ROW, after flattening and restructuring the 
> top-level project returns a reference to the first inner primitive field of 
> the requested column. 
> 2) Another finding is related to the ITEM expression applied to an 
> array-of-structs column. For example, let's imagine a table with a column of 
> type ARRAY>>. When the user requests 
> array_column[1] in SQL, Calcite generates ITEM($0, 1), where $0 is a ref to 
> the array column from the Scan and 1 is the index literal. The current 
> flattener generates two field access expressions, ITEM($0, 1).a and 
> ITEM($0, 1).b, but doesn't take into account 
> that ITEM($0, 1).b returns a struct which should also be flattened. 
> 3) In some cases applying ITEM after flattening is no longer possible. 
> For example, consider a column with type 
> STRUCT>. The user requests column['b'] in a query and 
> Calcite creates ITEM($0,'b'). 
> After flattening, the Scan is covered by LogicalProject($0.a, $0.b.x, $0.b.y) 
> and the old projection ITEM($0,'b') can't 
> be applied anymore. So now it should be converted to refer to only a subset 
> of fields ($1, $2) from the flattening project.
> UPDATES IN EXPECTED TEST RESULTS:
> --
> TEST CASE: SqlToRelConverterTest.testNestedColumnType()
> {code}
> select empa.home_address.zip from sales.emp_address empa where 
> empa.home_address.city = 'abc'
> {code}
> OLD RESULT: 
> {code}
> LogicalProject(ZIP=[$4])
>   LogicalFilter(condition=[=($3, 'abc':VARCHAR(20))])
> LogicalProject(EMPNO=[$0], STREET=[$1.STREET], CITY=[$1.CITY], 
> ZIP=[$1.ZIP], STATE=[$1.STATE], STREET5=[$2.STREET], CITY6=[$2.CITY], 
> ZIP7=[$2.ZIP], STATE8=[$2.STATE])
>   LogicalTableScan(table=[[CATALOG, SALES, EMP_ADDRESS]])
> {code}
> 1. Above, in the logical filter, the condition references field $3, which is 
> the ZIP=[$1.ZIP] field from the previous project; 
>however in the original query the filtering should be done by the CITY field. 
> 2. Also, the top-level project references field $4, which is 
> the STATE=[$1.STATE] field from the project, but the original 
>query requested the ZIP field.
>  
> UPDATED RESULT:
> {code}
> LogicalProject(ZIP=[$3])
>   LogicalFilter(condition=[=($2, 'abc')])
> LogicalProject(EMPNO=[$0], STREET=[$1.STREET], CITY=[$1.CITY], 
> ZIP=[$1.ZIP], STATE=[$1.STATE], STREET5=[$2.STREET], CITY6=[$2.CITY], 
> ZIP7=[$2.ZIP], STATE8=[$2.STATE])
>   LogicalTableScan(table=[[CATALOG, SALES, EMP_ADDRESS]])
> {code}
> --
> TEST CASE: SqlToRelConverterTest.testStructTypeAlias()
> {code}
> select t.r AS myRow from (select row(row(1)) r from dept) t 
> {code}
> OLD RESULT: 
> {code}
> LogicalProject(MYROW$$0$$0=[1])
>   LogicalTableScan(table=[[CATALOG, SALES, DEPT]])
> {code}
> 1. Inside the subselect the type of the column returned by row(row(1)) is 
> RecordType(RecordType(INTEGER EXPR$0) EXPR$0),
>but the top-level project uses the flattened expression and returns to the user 
> the literal 1 with type RecordType(INTEGER MYROW$$0$$0), 
>although that type doesn't match the type returned by the row(row(1)) expression. 
> 2. Also, it's suspicious that the caller expects the returned column to have 
> the name 'myRow' but gets 'MYROW$$0$$0'. 
>   
> UPDATED RESULT:
> {code}
> LogicalProject(MYROW=[ROW(ROW($0))])
>   LogicalProject(MYROW$$0$$0=[1])
> LogicalTableScan(table=[[CATALOG, SALES, DEPT]])
> {code}
> --
> TEST CASE: SqlToRelConverterTest.testFlattenRecords()
> {code}
> select employees[1] from dept_nested 
> {code}
> OLD RESULT:
> {code}
> LogicalProject(EXPR$0=[$0])
>   LogicalProject(EXPR$0$0=[ITEM($3, 1).EMPNO], EXPR$0$1=[ITEM($3, 1).ENAME], 
> EXPR$0$2=[ITEM($3, 

[jira] [Updated] (CALCITE-3272) TUMBLE Table Value Function

2019-08-21 Thread Rui Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Wang updated CALCITE-3272:
--
Description: 
Define a builtin TVF: Tumble (data , timecol , dur, [ offset ])

The return value of Tumble is a relation that includes all columns of data as 
well as additional event time columns wstart and wend.

Examples of TUMBLE TVF are (from https://s.apache.org/streaming-beam-sql ):

8:21> SELECT * FROM Bid;

--
| bidtime | price | item |
--
| 8:07| $2| A|
| 8:11| $3| B|
| 8:05| $4| C|
| 8:09| $5| D|
| 8:13| $1| E|
| 8:17| $6| F|
--

8:21> SELECT *
  FROM Tumble (
data=> TABLE Bid ,
timecol => DESCRIPTOR ( bidtime ) ,
dur => INTERVAL '10' MINUTES ,
offset  => INTERVAL '0' MINUTES );

--
| wstart | wend | bidtime | price | item |
--
| 8:00   | 8:10 | 8:07| $2| A|
| 8:10   | 8:20 | 8:11| $3| B|
| 8:00   | 8:10 | 8:05| $4| C|
| 8:00   | 8:10 | 8:09| $5| D|
| 8:10   | 8:20 | 8:13| $1| E|
| 8:10   | 8:20 | 8:17| $6| F|
--

8:21> SELECT MAX ( wstart ) , wend , SUM ( price )
  FROM Tumble (
data=> TABLE ( Bid ) ,
timecol => DESCRIPTOR ( bidtime ) ,
dur => INTERVAL '10 ' MINUTES )
  GROUP BY wend;

-
| wstart | wend | price |
-
| 8:00   | 8:10 | $11   |
| 8:10   | 8:20 | $10   |
-


  was:
Define a builtin TVF: Tumble (data , timecol , dur, [ offset ])

The return value of Tumble is a relation that includes all columns of data as 
well as additional event time columns wstart and wend.

Examples of TUMBLE TVF are (from https://s.apache.org/streaming-beam-sql):

8:21> SELECT * FROM Bid;

--
| bidtime | price | item |
--
| 8:07| $2| A|
| 8:11| $3| B|
| 8:05| $4| C|
| 8:09| $5| D|
| 8:13| $1| E|
| 8:17| $6| F|
--

8:21> SELECT *
  FROM Tumble (
data=> TABLE Bid ,
timecol => DESCRIPTOR ( bidtime ) ,
dur => INTERVAL '10' MINUTES ,
offset  => INTERVAL '0' MINUTES );

--
| wstart | wend | bidtime | price | item |
--
| 8:00   | 8:10 | 8:07| $2| A|
| 8:10   | 8:20 | 8:11| $3| B|
| 8:00   | 8:10 | 8:05| $4| C|
| 8:00   | 8:10 | 8:09| $5| D|
| 8:10   | 8:20 | 8:13| $1| E|
| 8:10   | 8:20 | 8:17| $6| F|
--

8:21> SELECT MAX ( wstart ) , wend , SUM ( price )
  FROM Tumble (
data=> TABLE ( Bid ) ,
timecol => DESCRIPTOR ( bidtime ) ,
dur => INTERVAL '10 ' MINUTES )
  GROUP BY wend;

-
| wstart | wend | price |
-
| 8:00   | 8:10 | $11   |
| 8:10   | 8:20 | $10   |
-



> TUMBLE Table Value Function
> ---
>
> Key: CALCITE-3272
> URL: https://issues.apache.org/jira/browse/CALCITE-3272
> Project: Calcite
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>
> Define a builtin TVF: Tumble (data , timecol , dur, [ offset ])
> The return value of Tumble is a relation that includes all columns of data as 
> well as additional event time columns wstart and wend.
> Examples of TUMBLE TVF are (from https://s.apache.org/streaming-beam-sql ):
> 8:21> SELECT * FROM Bid;
> --
> | bidtime | price | item |
> --
> | 8:07| $2| A|
> | 8:11| $3| B|
> | 8:05| $4| C|
> | 8:09| $5| D|
> | 8:13| $1| E|
> | 8:17| $6| F|
> --
> 8:21> SELECT *
>   FROM Tumble (
> data=> TABLE Bid ,
> timecol => DESCRIPTOR ( bidtime ) ,
> dur => INTERVAL '10' MINUTES ,
> offset  => INTERVAL '0' MINUTES );
> --
> | wstart | wend | bidtime | price | item |
> --
> | 8:00   | 8:10 | 8:07| $2| A|
> | 8:10   | 8:20 | 8:11| $3| B|
> | 8:00   | 8:10 | 8:05| $4| C|
> | 8:00   | 8:10 | 8:09| $5| D|
> | 8:10   | 8:20 | 8:13| $1| E|
> | 8:10   | 8:20 | 8:17| $6| F|
> --
> 8:21> SELECT MAX ( wstart ) , wend , 

[jira] [Assigned] (CALCITE-3272) TUMBLE Table Value Function

2019-08-21 Thread Rui Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Wang reassigned CALCITE-3272:
-

Assignee: Rui Wang

> TUMBLE Table Value Function
> ---
>
> Key: CALCITE-3272
> URL: https://issues.apache.org/jira/browse/CALCITE-3272
> Project: Calcite
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>
> Define a builtin TVF: Tumble (data , timecol , dur, [ offset ])
> The return value of Tumble is a relation that includes all columns of data as 
> well as additional event time columns wstart and wend.
> Examples of TUMBLE TVF are (from https://s.apache.org/streaming-beam-sql):
> 8:21> SELECT * FROM Bid;
> --
> | bidtime | price | item |
> --
> | 8:07| $2| A|
> | 8:11| $3| B|
> | 8:05| $4| C|
> | 8:09| $5| D|
> | 8:13| $1| E|
> | 8:17| $6| F|
> --
> 8:21> SELECT *
>   FROM Tumble (
> data=> TABLE Bid ,
> timecol => DESCRIPTOR ( bidtime ) ,
> dur => INTERVAL '10' MINUTES ,
> offset  => INTERVAL '0' MINUTES );
> --
> | wstart | wend | bidtime | price | item |
> --
> | 8:00   | 8:10 | 8:07| $2| A|
> | 8:10   | 8:20 | 8:11| $3| B|
> | 8:00   | 8:10 | 8:05| $4| C|
> | 8:00   | 8:10 | 8:09| $5| D|
> | 8:10   | 8:20 | 8:13| $1| E|
> | 8:10   | 8:20 | 8:17| $6| F|
> --
> 8:21> SELECT MAX ( wstart ) , wend , SUM ( price )
>   FROM Tumble (
> data=> TABLE ( Bid ) ,
> timecol => DESCRIPTOR ( bidtime ) ,
> dur => INTERVAL '10 ' MINUTES )
>   GROUP BY wend;
> -
> | wstart | wend | price |
> -
> | 8:00   | 8:10 | $11   |
> | 8:10   | 8:20 | $10   |
> -



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3281) Support mixed Primitive types for BinaryExpression evaluate method.

2019-08-21 Thread Wang Yanlin (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang Yanlin updated CALCITE-3281:
-
Description: 
Currently, the values of [expression0 and expression1 
|https://github.com/apache/calcite/blob/master/linq4j/src/main/java/org/apache/calcite/linq4j/tree/BinaryExpression.java#L26]
 must be of the same type.

Otherwise, when evaluating, we get a ClassCastException, something like this:
{code:java}
java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long

at 
org.apache.calcite.linq4j.tree.BinaryExpression.evaluate(BinaryExpression.java:75)
at 
org.apache.calcite.linq4j.tree.GotoStatement.evaluate(GotoStatement.java:97)
at 
org.apache.calcite.linq4j.tree.BlockStatement.evaluate(BlockStatement.java:83)
at org.apache.calcite.linq4j.tree.Evaluator.evaluate(Evaluator.java:55)
at 
org.apache.calcite.linq4j.tree.FunctionExpression.lambda$compile$0(FunctionExpression.java:87)
at 
org.apache.calcite.linq4j.test.ExpressionTest.testLambdaCallsBinaryOpMixType(ExpressionTest.java:349)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
{code}

Actually, we can do something to support mixed primitive types in 
BinaryExpression.

  was:
{code:java}
// code placeholder
{code}
Currently, the value of [expression0 |http://www.baidu.com/] and [expression1|] 
should be


>  Support mixed Primitive types for BinaryExpression evaluate method.
> 
>
> Key: CALCITE-3281
> URL: https://issues.apache.org/jira/browse/CALCITE-3281
> Project: Calcite
>  Issue Type: Bug
>Reporter: Wang Yanlin
>Priority: Major
>
> Currently, the values of [expression0 and expression1 
> |https://github.com/apache/calcite/blob/master/linq4j/src/main/java/org/apache/calcite/linq4j/tree/BinaryExpression.java#L26]
>  must be of the same type.
> Otherwise, when evaluating, we get a ClassCastException, something like 
> this:
> {code:java}
> java.lang.ClassCastException: java.lang.Integer cannot be cast to 
> java.lang.Long
>   at 
> org.apache.calcite.linq4j.tree.BinaryExpression.evaluate(BinaryExpression.java:75)
>   at 
> org.apache.calcite.linq4j.tree.GotoStatement.evaluate(GotoStatement.java:97)
>   at 
> org.apache.calcite.linq4j.tree.BlockStatement.evaluate(BlockStatement.java:83)
>   at org.apache.calcite.linq4j.tree.Evaluator.evaluate(Evaluator.java:55)
>   at 
> org.apache.calcite.linq4j.tree.FunctionExpression.lambda$compile$0(FunctionExpression.java:87)
>   at 
> org.apache.calcite.linq4j.test.ExpressionTest.testLambdaCallsBinaryOpMixType(ExpressionTest.java:349)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Commented] (CALCITE-2394) Avatica applies calendar offset to timestamps when they should remain unchanged

2019-08-21 Thread Kenneth Knowles (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912895#comment-16912895
 ] 

Kenneth Knowles commented on CALCITE-2394:
--

FWIW, I've recently realized that Beam SQL is probably backwards. We have 
been mapping an absolute Joda-style instant to the Calcite type TIMESTAMP, which 
is pretty explicitly wrong. We probably need to decide between `TIMESTAMP WITH 
TIMEZONE`, which makes it an absolute time (with extraneous metadata), and 
`TIMESTAMP WITH LOCAL TIMEZONE`, which to be honest I don't really understand.

> Avatica applies calendar offset to timestamps when they should remain 
> unchanged
> ---
>
> Key: CALCITE-2394
> URL: https://issues.apache.org/jira/browse/CALCITE-2394
> Project: Calcite
>  Issue Type: Bug
>  Components: avatica
>Reporter: Kenneth Knowles
>Assignee: Kenneth Knowles
>Priority: Major
>
> This code converts a millis-since-epoch value to a timestamp in three 
> different accessors:
> {code}
> class AbstractCursor {
>   ...
>   static Timestamp longToTimestamp(long v, Calendar calendar) {
> if (calendar != null) {
>   v -= calendar.getTimeZone().getOffset(v);
> }
> return new Timestamp(v);
>   }
> }
> {code}
> But {{new Timestamp(millis)}} always accepts millis-since-epoch in GMT.
> The use in {{DateFromNumberAccessor}} is probably OK: it fabricates 
> millis-since-epoch from a date, so applying the offset is appropriate to hit 
> midnight in that locale.
> But both {{TimeFromNumberAccessor}} and {{TimestampFromNumberAccessor}} 
> should leave the millis absolute.
> This manifests as timestamp actual values being shifted by the current locale 
> (in addition to later display adjustments).
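A hedged, self-contained illustration of the shift described above (the time zone and value are arbitrary; this is not Avatica code):

{code:java}
import java.sql.Timestamp;
import java.util.Calendar;
import java.util.TimeZone;

public class TimestampShiftExample {
  public static void main(String[] args) {
    long v = 0L;  // millis since epoch: 1970-01-01 00:00:00 UTC
    Calendar calendar =
        Calendar.getInstance(TimeZone.getTimeZone("America/Los_Angeles"));
    // What longToTimestamp does when a calendar is supplied:
    long shifted = v - calendar.getTimeZone().getOffset(v);
    System.out.println(new Timestamp(v));        // the original instant
    System.out.println(new Timestamp(shifted));  // the same value, moved 8 hours later
  }
}
{code}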



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (CALCITE-3223) Non-RexInputRef may fail the matching of FilterToProjectUnifyRule during 'invert' by mistake.

2019-08-21 Thread Haisheng Yuan (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haisheng Yuan resolved CALCITE-3223.

Fix Version/s: 1.21.0
   Resolution: Fixed

Fixed in 
https://github.com/apache/calcite/commit/729446005617059c0ce9fef4068087e4ca9ca139,
 thanks for the PR, [~jinxing6...@126.com]!

> Non-RexInputRef may fail the matching of FilterToProjectUnifyRule during 
> 'invert' by mistake.
> --
>
> Key: CALCITE-3223
> URL: https://issues.apache.org/jira/browse/CALCITE-3223
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: jin xing
>Assignee: jin xing
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In the current code of 
> {{FilterToProjectUnifyRule::invert}}(https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/plan/SubstitutionVisitor.java#L1124),
>   the implementation 
> 1. Fails the matching when there is a non-RexInputRef in the projects
> 2. Doesn't check whether all items of {{exprList}} have already been set correctly.
> As a result, the tests below fail.
> {code:java}
>   @Test public void testFilterToProject0() {
> String union =
> "select * from \"emps\" where \"empid\" > 300\n"
> + "union all select * from \"emps\" where \"empid\" < 200";
> String mv = "select *, \"empid\" * 2 from (" + union + ")";
> String query = "select * from (" + union + ") where (\"empid\" * 2) > 3";
> checkMaterialize(mv, query);
>   }
>   @Test public void testFilterToProject1() {
> String agg =
> "select \"deptno\", count(*) as \"c\", sum(\"salary\") as \"s\"\n"
> + "from \"emps\" group by \"deptno\"";
> String mv = "select \"c\", \"s\", \"s\" from (" + agg + ")";
> String query = "select * from (" + agg + ") where (\"s\" * 0.8) > 1";
> checkNoMaterialize(mv, query, HR_FKUK_MODEL);
>   }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-2973) Allow theta joins that have equi conditions to be executed using a hash join algorithm

2019-08-21 Thread Haisheng Yuan (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912893#comment-16912893
 ] 

Haisheng Yuan commented on CALCITE-2973:


Sure, will do.

> Allow theta joins that have equi conditions to be executed using a hash join 
> algorithm
> --
>
> Key: CALCITE-2973
> URL: https://issues.apache.org/jira/browse/CALCITE-2973
> Project: Calcite
>  Issue Type: New Feature
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Lai Zhou
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Now the EnumerableMergeJoinRule only supports an inner equi join.
> If users run a theta-join query for a large dataset (such as 1*1), 
> the nested-loop join process will take dozens of times longer than the 
> sort-merge join process.
> So if we can apply a merge-join or hash-join rule for a theta join, it will 
> improve the performance greatly.
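One way to picture the requested execution strategy: build a hash table on the equi part of the condition, probe it, and evaluate the remaining theta part only on rows whose keys already match. A hedged, stand-alone Java sketch of that split (not the Calcite rule or EnumerableHashJoin itself):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;
import java.util.function.Function;

public class HashJoinWithResidual {
  /** Hash join on an equi key, with the residual (non-equi) condition
   * evaluated only on rows whose keys already match. */
  static <L, R, K> List<Object[]> join(List<L> left, List<R> right,
      Function<L, K> leftKey, Function<R, K> rightKey,
      BiPredicate<L, R> residual) {
    // Build side: hash the right input on its equi key.
    final Map<K, List<R>> buildSide = new HashMap<>();
    for (R r : right) {
      buildSide.computeIfAbsent(rightKey.apply(r), k -> new ArrayList<>()).add(r);
    }
    // Probe side: look up key matches, then apply the theta part.
    final List<Object[]> result = new ArrayList<>();
    for (L l : left) {
      for (R r : buildSide.getOrDefault(leftKey.apply(l),
          Collections.emptyList())) {
        if (residual.test(l, r)) {
          result.add(new Object[] {l, r});
        }
      }
    }
    return result;
  }
}
{code}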



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3115) Cannot add JdbcRules which have different JdbcConvention to same VolcanoPlanner's RuleSet.

2019-08-21 Thread Danny Chan (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912928#comment-16912928
 ] 

Danny Chan commented on CALCITE-3115:
-

Reviewing now...

> Cannot add JdbcRules which have different JdbcConvention to same 
> VolcanoPlanner's RuleSet.
> --
>
> Key: CALCITE-3115
> URL: https://issues.apache.org/jira/browse/CALCITE-3115
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.19.0
>Reporter: TANG Wen-hui
>Assignee: Igor Guzenko
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> When we use Calcite via JDBC to run a SQL statement which involves two 
> different JDBC schemas:
> {code:java}
> select * from (select "a",max("b") as max_b, sum("c") as sum_c from 
> "test"."temp" where "d" > 10 or "b" <> 'hello' group by "a", "e", "f" having 
> "a" > 100 and max("b") < 20 limit 10) t  union select "a", "b","c" from 
> "test2"."temp2" group by "a","b","c" 
> {code}
> the sql get a plan like that:
> {code:java}
> EnumerableUnion(all=[false])
>   JdbcToEnumerableConverter
> JdbcProject(a=[$0], MAX_B=[$3], SUM_C=[$4])
>   JdbcSort(fetch=[10])
> JdbcFilter(condition=[<(CAST($3):BIGINT, 20)])
>   JdbcAggregate(group=[{0, 4, 5}], MAX_B=[MAX($1)], SUM_C=[SUM($2)])
> JdbcFilter(condition=[AND(OR(>($3, 10), <>($1, 'hello')), >($0, 
> 100))])
>   JdbcTableScan(table=[[test, temp]])
>   EnumerableAggregate(group=[{0, 1, 2}])
> JdbcToEnumerableConverter
>   JdbcTableScan(table=[[test2, temp2]])
> {code}
> And the EnumerableAggregate for table test2.temp2 cannot be converted to 
> JdbcAggregate.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (CALCITE-3276) Add MV rules to match Join on compensating Project(s)

2019-08-21 Thread jin xing (Jira)
jin xing created CALCITE-3276:
-

 Summary: Add MV rules to match Join on compensating Project(s)
 Key: CALCITE-3276
 URL: https://issues.apache.org/jira/browse/CALCITE-3276
 Project: Calcite
  Issue Type: Sub-task
  Components: core
Reporter: jin xing






--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-2302) Implicit type cast support

2019-08-21 Thread Danny Chan (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911987#comment-16911987
 ] 

Danny Chan commented on CALCITE-2302:
-

[~julianhyde] Personally I propose to add this policy to SqlConformance, 
because it is very much about the execution behavior of a SqlDialect. The 
RelDataTypeSystem defines some type-deriving behaviors of the entire type 
system in methods like "deriveXXX", but strictly speaking, "what kind of data 
the division of two integers returns" is related not only to "what SQL type we 
should return" but also to "what data we should return".

We can say that "9/2 should return double", then 4.5 and 4.0 can both be seen 
as double values. For an implicit type coercion, what we really change is the 
runtime/execution behavior, not only the type inference.

Another reason is that RelDataTypeSystem is about Calcite type-system 
behaviors, which are not designed to be pluggable for all kinds of SQL 
dialects.
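
To make the distinction concrete, a tiny illustration (the two behaviors shown 
are examples for discussion, not a survey of any particular dialect):

{code:java}
// Under integer-division semantics, 9/2 keeps the INTEGER type and truncates:
int intDivision = 9 / 2;         // 4
// Under "division returns double" semantics, both the returned type and the
// returned data change, which is why this feels like execution behavior:
double doubleDivision = 9 / 2.0; // 4.5
{code}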

> Implicit type cast support
> --
>
> Key: CALCITE-2302
> URL: https://issues.apache.org/jira/browse/CALCITE-2302
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.17.0
>Reporter: Danny Chan
>Assignee: Danny Chan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Now many DBs support implicit type casts, e.g. SqlServer, Oracle, Hive.
> Implicit type casting is a useful feature in many cases, so we should support 
> it.
> I checked out the Calcite code and found that:
>  # Now we use a validator to validate our operand types [through various 
> namespaces and scopes]
>  # Most of the validations will finally go to
> {code:java}
> SqlOperator.validateOperands
> {code}
>  # which will use validation logic defined in the corresponding 
> SqlOperandTypeChecker
> What I'm confused about is where I should put the implicit type cast logic. 
> I figured out 2 ways:
>  # Supply a tool class/rules to add casts into a parsed SqlNode tree, which 
> will then go through the validation logic later on.
>  # Relax the validation logic in the various SqlOperandTypeChecker 
> implementations, then modify the RelNode/RexNode tree converted from a 
> validated SqlNode tree to add in casts through custom RelOptRules.
> So, which of the 2 ways should I go with, or is there a better way to do 
> this?
> I need your help.
>  
> Updated 18-05-30:
> Hi guys, i have made a PR in 
> [CALCITE-2302|https://github.com/apache/calcite/pull/706]
> This is design doc: [Calcite Implicit Type Cast 
> Design|https://docs.google.com/document/d/1g2RUnLXyp_LjUlO-wbblKuP5hqEu3a_2Mt2k4dh6RwU/edit?usp=sharing].
> This is the conversion types mapping: [Conversion Types 
> Mapping|https://docs.google.com/spreadsheets/d/1GhleX5h5W8-kJKh7NMJ4vtoE78pwfaZRJl88ULX_MgU/edit?usp=sharing].
> I really appreciate your suggestions, thx.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3271) TVF windowing and EMIT syntax support in Calcite

2019-08-21 Thread Julian Hyde (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911973#comment-16911973
 ] 

Julian Hyde commented on CALCITE-3271:
--

"TVF" is jargon, and non-standard jargon at that. (I had to look it up, and I 
co-authored the paper.) Please expand/clarify.

> TVF windowing and EMIT syntax support in Calcite
> 
>
> Key: CALCITE-3271
> URL: https://issues.apache.org/jira/browse/CALCITE-3271
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.20.0
>Reporter: Danny Chan
>Assignee: Danny Chan
>Priority: Major
>
> Copied from the mailing list:
> Calcite has not implemented the syntax in that paper. I would support an 
> effort to add it (unsurprising, since I am a co-author of that paper).
> EMIT STREAM is equivalent to the current SELECT STREAM syntax.
> There is no equivalent in current Calcite of the EMIT AFTER WATERMARK, or 
> EMIT STREAM AFTER DELAY.
> HOP, TUMBLE and SESSION are supported in Calcite’s SQL parser, but following 
> the paper would be replaced with a table function call. We would need to add 
> HOP, TUMBLE and SESSION table functions. We would also need to make the 
> system aware of how watermarks flow through these table functions (an area 
> that the paper does not go into).
> Julian
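
For readers unfamiliar with the abbreviation, "TVF" stands for table-valued 
function. The windowing constructs would roughly take the following shape, 
following the paper (the names "Bids"/"bidtime" are illustrative, and the 
argument order and syntax are approximate rather than a committed Calcite 
design):

{code:java}
// Hypothetical query text showing TUMBLE invoked as a table function:
String sql = "select *\n"
    + "from table(tumble(table Bids, descriptor(bidtime), interval '10' minute))";
{code}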



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3256) Add ProjectOnProjectToProjectUnifyRule for materialization matching.

2019-08-21 Thread jin xing (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing updated CALCITE-3256:
--
Priority: Major  (was: Minor)

> Add ProjectOnProjectToProjectUnifyRule for materialization matching.
> 
>
> Key: CALCITE-3256
> URL: https://issues.apache.org/jira/browse/CALCITE-3256
> Project: Calcite
>  Issue Type: Sub-task
>  Components: core
>Reporter: jin xing
>Assignee: jin xing
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In the current code, the matching below fails:
> {code:java}
> MV:
> select deptno, sum(salary) + 2, sum(commission)
> from emps
> group by deptno
> Query:
> select deptno, sum(salary) + 2
> from emps
> group by deptno
> {code}
> The reason is that, after matching of the Aggregates, a compensating 
> Project is added, but the subsequent matching fails to handle it.
> This issue proposes to add a rule that matches when query and target are 
> both Projects and the query has a compensating Project as a child. After 
> this issue, the case below can be handled:
> {code:java}
> Query:
> Project(projects: [$0, +($1, 2)])
>   Project(projects: [$1, $3, $4])
> Rel-A
> Target:  
> Project(projects: [$1, +($3, 2)])
>   Rel-A
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3271) TVF windowing and EMIT syntax support in Calcite

2019-08-21 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911978#comment-16911978
 ] 

Jark Wu commented on CALCITE-3271:
--

EMIT is very helpful for firing windows early and for triggering on late 
records. We have also implemented a dialect of it in Blink SQL [1], and it 
would be great to have EMIT syntax in Calcite. Looking forward to it, and glad 
to help if possible. 

[1]: http://blink.flink-china.org/dev/table/sql.html#emit-strategy

> TVF windowing and EMIT syntax support in Calcite
> 
>
> Key: CALCITE-3271
> URL: https://issues.apache.org/jira/browse/CALCITE-3271
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.20.0
>Reporter: Danny Chan
>Assignee: Danny Chan
>Priority: Major
>
> Copied from the mailing list:
> Calcite has not implemented the syntax in that paper. I would support an 
> effort to add it (unsurprising, since I am a co-author of that paper).
> EMIT STREAM is equivalent to the current SELECT STREAM syntax.
> There is no equivalent in current Calcite of the EMIT AFTER WATERMARK, or 
> EMIT STREAM AFTER DELAY.
> HOP, TUMBLE and SESSION are supported in Calcite’s SQL parser, but following 
> the paper would be replaced with a table function call. We would need to add 
> HOP, TUMBLE and SESSION table functions. We would also need to make the 
> system aware of how watermarks flow through these table functions (an area 
> that the paper does not go into).
> Julian



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (CALCITE-3275) add nil checks to prevent panics in error parsing

2019-08-21 Thread Tino Rusch (Jira)
Tino Rusch created CALCITE-3275:
---

 Summary: add nil checks to prevent panics in error parsing
 Key: CALCITE-3275
 URL: https://issues.apache.org/jira/browse/CALCITE-3275
 Project: Calcite
  Issue Type: Improvement
  Components: avatica-go
Reporter: Tino Rusch
Assignee: Francis Chuang


We need nil checks for the case where the server doesn't include RpcMetadata 
in the error responses.

The PR is already ready; this ticket is for reference:

 

https://github.com/apache/calcite-avatica-go/pull/47



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-2302) Implicit type cast support

2019-08-21 Thread Julian Hyde (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16911967#comment-16911967
 ] 

Julian Hyde commented on CALCITE-2302:
--

[~danny0405] Not sure that the policy for deriving the return type of divide 
should be in conformance. (Maybe RelDataTypeSystem?) Nor whether it should be 
done as part of this change.

> Implicit type cast support
> --
>
> Key: CALCITE-2302
> URL: https://issues.apache.org/jira/browse/CALCITE-2302
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.17.0
>Reporter: Danny Chan
>Assignee: Danny Chan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Now many DBs support implicit type casts, e.g. SqlServer, Oracle, Hive.
> Implicit type casting is a useful feature in many cases, so we should support 
> it.
> I checked out the Calcite code and found that:
>  # Now we use a validator to validate our operand types [through various 
> namespaces and scopes]
>  # Most of the validations will finally go to
> {code:java}
> SqlOperator.validateOperands
> {code}
>  # which will use validation logic defined in the corresponding 
> SqlOperandTypeChecker
> What I'm confused about is where I should put the implicit type cast logic. 
> I figured out 2 ways:
>  # Supply a tool class/rules to add casts into a parsed SqlNode tree, which 
> will then go through the validation logic later on.
>  # Relax the validation logic in the various SqlOperandTypeChecker 
> implementations, then modify the RelNode/RexNode tree converted from a 
> validated SqlNode tree to add in casts through custom RelOptRules.
> So, which of the 2 ways should I go with, or is there a better way to do 
> this?
> I need your help.
>  
> Updated 18-05-30:
> Hi guys, i have made a PR in 
> [CALCITE-2302|https://github.com/apache/calcite/pull/706]
> This is design doc: [Calcite Implicit Type Cast 
> Design|https://docs.google.com/document/d/1g2RUnLXyp_LjUlO-wbblKuP5hqEu3a_2Mt2k4dh6RwU/edit?usp=sharing].
> This is the conversion types mapping: [Conversion Types 
> Mapping|https://docs.google.com/spreadsheets/d/1GhleX5h5W8-kJKh7NMJ4vtoE78pwfaZRJl88ULX_MgU/edit?usp=sharing].
> I really appreciate your suggestions, thx.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3276) Add MV rules to match Join on compensating Project(s)

2019-08-21 Thread jin xing (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing updated CALCITE-3276:
--
Description: This issue proposes to handle cases where query and target are 
{{MutableJoin}}s and query has compensating {{Project}}s as children.  (was: 
This issue proposes to handle cases where query and target are {{MutableJoin}}s 
and query has compensating {{Project}}(s) as children.)

> Add MV rules to match Join on compensating Project(s)
> -
>
> Key: CALCITE-3276
> URL: https://issues.apache.org/jira/browse/CALCITE-3276
> Project: Calcite
>  Issue Type: Sub-task
>  Components: core
>Reporter: jin xing
>Priority: Major
>
> This issue proposes to handle cases where query and target are 
> {{MutableJoin}}s and query has compensating {{Project}}s as children.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3276) Add MV rules to match Join on compensating Project(s)

2019-08-21 Thread jin xing (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing updated CALCITE-3276:
--
Description: This issue proposes to handle cases where query and target are 
{{MutableJoin}}s and query has compensating {{Project}}(s) as children.

> Add MV rules to match Join on compensating Project(s)
> -
>
> Key: CALCITE-3276
> URL: https://issues.apache.org/jira/browse/CALCITE-3276
> Project: Calcite
>  Issue Type: Sub-task
>  Components: core
>Reporter: jin xing
>Priority: Major
>
> This issue proposes to handle cases where query and target are 
> {{MutableJoin}}s and query has compensating {{Project}}(s) as children.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3276) Add MV rules to match Join on compensating Project(s)

2019-08-21 Thread jin xing (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing updated CALCITE-3276:
--
Description: This issue proposes to handle cases where query and target are 
*Join*s and query has compensating *Project*s as children.  (was: This issue 
proposes to handle cases where query and target are {{MutableJoin}}s and query 
has compensating {{Project}}s as children.)

> Add MV rules to match Join on compensating Project(s)
> -
>
> Key: CALCITE-3276
> URL: https://issues.apache.org/jira/browse/CALCITE-3276
> Project: Calcite
>  Issue Type: Sub-task
>  Components: core
>Reporter: jin xing
>Priority: Major
>
> This issue proposes to handle cases where query and target are *Join*s and 
> query has compensating *Project*s as children.
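
Following the format of the example in CALCITE-3256, a hypothetical shape of 
the plans such a rule could match (column indices assume Rel-A has three 
columns; the projections and condition are made up purely for illustration):

{code:java}
Query:
Join(condition: [=($0, $3)])
  Project(projects: [$1, $2])   // compensating Project over Rel-A
    Rel-A
  Rel-B
Target:
Join(condition: [=($1, $4)])
  Rel-A
  Rel-B
{code}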



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-2166) Cumulative cost of RelSubset.best RelNode is increased after calling RelSubset.propagateCostImprovements() for input RelNodes

2019-08-21 Thread Volodymyr Vysotskyi (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912124#comment-16912124
 ] 

Volodymyr Vysotskyi commented on CALCITE-2166:
--

Thanks, Xiening Dai, for moving this issue forward! I've also considered 
similar options, and here are my thoughts regarding them:
1 & 2: Currently we store the best cost value to avoid recalculating it every 
time a new rel node is added to the rel subset. If we have to recalculate its 
cost, we also have to clear the rel metadata cache for the current rel node, 
and, if something has changed, also recalculate the parent rel nodes' costs.

I agree that #3 is too complex, and I don't have other solutions for this issue.

But in either case, it would be good if option 1 or 2 helps to fix this issue 
at least partially without introducing significant performance degradation.

> Cumulative cost of RelSubset.best RelNode is increased after calling 
> RelSubset.propagateCostImprovements() for input RelNodes
> -
>
> Key: CALCITE-2166
> URL: https://issues.apache.org/jira/browse/CALCITE-2166
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.15.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Danny Chan
>Priority: Critical
>
> After calling {{RelSubset.propagateCostImprovements()}} cumulative cost of 
> {{RelSubset.best}} {{RelNode}} may be increased due to the increase of the 
> non-cumulative cost caused by changing of input best {{RelNode}}.
> To observe this issue, add this code:
> {code:java}
>   if (subset.best != null) {
> RelOptCost bestCost = getCost(subset.best, 
> RelMetadataQuery.instance());
> if (!subset.bestCost.equals(bestCost)) {
>   throw new AssertionError(
> "relSubset [" + subset.getDescription()
>   + "] has wrong best cost "
>   + subset.bestCost + ". Correct cost is " + bestCost);
> }
>   }
> {code}
> into {{VolcanoPlanner.validate()}} method (line 907).
> List of unit tests which fail with this check:
> {noformat}
> Failed tests: 
>   
> MaterializationTest.testJoinMaterializationUKFK9:1823->checkMaterialize:198->checkMaterialize:205->checkThatMaterialize:233
>  relSubset [rel#226287:Subset#8.ENUMERABLE.[]] has wrong best cost {221.5 
> rows, 128.25 cpu, 0.0 io}. Correct cost is {233.0 rows, 178.0 cpu, 0.0 io}
>   ScannableTableTest.testPFPushDownProjectFilterAggregateNested:279 relSubset 
> [rel#12950:Subset#5.ENUMERABLE.[]] has wrong best cost {63.8 rows, 62.308 
> cpu, 0.0 io}. Correct cost is {70.4 rows, 60.404 cpu, 0.0 io}
>   ScannableTableTest.testPFTableRefusesFilterCooperative:221 relSubset 
> [rel#13382:Subset#2.ENUMERABLE.[]] has wrong best cost {81.0 rows, 181.01 
> cpu, 0.0 io}. Correct cost is {150.5 rows, 250.505 cpu, 0.0 io}
>   ScannableTableTest.testProjectableFilterableCooperative:148 relSubset 
> [rel#13611:Subset#2.ENUMERABLE.[]] has wrong best cost {81.0 rows, 181.01 
> cpu, 0.0 io}. Correct cost is {150.5 rows, 250.505 cpu, 0.0 io}
>   ScannableTableTest.testProjectableFilterableNonCooperative:165 relSubset 
> [rel#13754:Subset#2.ENUMERABLE.[]] has wrong best cost {81.0 rows, 181.01 
> cpu, 0.0 io}. Correct cost is {150.5 rows, 250.505 cpu, 0.0 io}
>   FrameworksTest.testUpdate:336->executeQuery:367 relSubset 
> [rel#22533:Subset#2.ENUMERABLE.any] has wrong best cost {19.5 rows, 37.75 
> cpu, 0.0 io}. Correct cost is {22.575 rows, 52.58 cpu, 0.0 io}
> {noformat}
> For the test {{MaterializationTest.testJoinMaterializationUKFK9}} initial 
> best plan was:
> {noformat}
> EnumerableProject(empid0=[$5], empid00=[$5], deptno0=[$7]): rowcount = 15.0, 
> cumulative cost = {15.0 rows, 45.0 cpu, 0.0 io}, id = 3989
>   EnumerableJoin(subset=[rel#3988:Subset#34.ENUMERABLE.[]], condition=[=($1, 
> $7)], joinType=[inner]): rowcount = 15.0, cumulative cost = {116.0 rows, 0.0 
> cpu, 0.0 io}, id = 4797
> EnumerableFilter(subset=[rel#4274:Subset#47.ENUMERABLE.[0]], 
> condition=[=(CAST($2):VARCHAR CHARACTER SET "ISO-8859-1" COLLATE 
> "ISO-8859-1$en_US$primary", 'Bill')]): rowcount = 1.0, cumulative cost = {1.0 
> rows, 1.0 cpu, 0.0 io}, id = 16522
>   EnumerableTableScan(subset=[rel#158:Subset#11.ENUMERABLE.[0]], 
> table=[[hr, m0]]): rowcount = 1.0, cumulative cost = {0.0 rows, 1.0 cpu, 0.0 
> io}, id = 79
> EnumerableTableScan(subset=[rel#115:Subset#5.ENUMERABLE.[]], table=[[hr, 
> depts]]): rowcount = 100.0, cumulative cost = {100.0 rows, 101.0 cpu, 0.0 
> io}, id = 62
> {noformat}
> Its cumulative cost is \{221.5 rows, 123.75 cpu, 0.0 io}
> After applying some rules it became:
> {noformat}
> EnumerableProject(empid0=[$3], empid00=[$3], 

[jira] [Commented] (CALCITE-3246) RelJsonReader throws NullPointerException while deserializing from JSON a call to a user-defined function (UDF)

2019-08-21 Thread Wang Yanlin (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912199#comment-16912199
 ] 

Wang Yanlin commented on CALCITE-3246:
--

OK. And, any suggestion for update?

> RelJsonReader throws NullPointerException while deserializing from JSON a 
> call to a user-defined function (UDF)
> ---
>
> Key: CALCITE-3246
> URL: https://issues.apache.org/jira/browse/CALCITE-3246
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Wang Yanlin
>Assignee: Chunwei Lei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When deserializing a logical rel with a UDF operator, an NPE occurs.
> The exception stacktrace is as follows.
> {code:java}
> java.lang.RuntimeException: java.lang.NullPointerException
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:181)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:125)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:143)
>   at org.apache.calcite.plan.RelWriterTest.testUdf(RelWriterTest.java:598)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: java.lang.NullPointerException
>   at java.util.Objects.requireNonNull(Objects.java:203)
>   at org.apache.calcite.rex.RexCall.(RexCall.java:83)
>   at org.apache.calcite.rex.RexBuilder.makeCall(RexBuilder.java:237)
>   at org.apache.calcite.rel.externalize.RelJson.toRex(RelJson.java:485)
>   at 
> org.apache.calcite.rel.externalize.RelJsonReader$2.getExpressionList(RelJsonReader.java:204)
>   at org.apache.calcite.rel.core.Project.(Project.java:100)
>   at 
> org.apache.calcite.rel.logical.LogicalProject.(LogicalProject.java:88)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.rel.externalize.RelJsonReader.readRel(RelJsonReader.java:261)
>   at 
> org.apache.calcite.rel.externalize.RelJsonReader.readRels(RelJsonReader.java:91)
>   at 
> org.apache.calcite.rel.externalize.RelJsonReader.read(RelJsonReader.java:85)
>   at 
> org.apache.calcite.plan.RelWriterTest.lambda$testUdf$7(RelWriterTest.java:603)
>   at 
> org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:130)
>   at 
> org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:915)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:179)
>   ... 25 more
> {code}



--
This message was sent by 

[jira] [Updated] (CALCITE-3276) Add MV rules to match Join on compensating Project(s)

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated CALCITE-3276:

Labels: pull-request-available  (was: )

> Add MV rules to match Join on compensating Project(s)
> -
>
> Key: CALCITE-3276
> URL: https://issues.apache.org/jira/browse/CALCITE-3276
> Project: Calcite
>  Issue Type: Sub-task
>  Components: core
>Reporter: jin xing
>Priority: Major
>  Labels: pull-request-available
>
> This issue proposes to handle cases where query and target are *Join*s and 
> query has compensating *Project*s as children.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (CALCITE-3276) Add MV rules to match Join on compensating Project(s)

2019-08-21 Thread jin xing (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing reassigned CALCITE-3276:
-

Assignee: jin xing

> Add MV rules to match Join on compensating Project(s)
> -
>
> Key: CALCITE-3276
> URL: https://issues.apache.org/jira/browse/CALCITE-3276
> Project: Calcite
>  Issue Type: Sub-task
>  Components: core
>Reporter: jin xing
>Assignee: jin xing
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This issue proposes to handle cases where query and target are *Join*s and 
> query has compensating *Project*s as children.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3260) Add support of evaluate method with default Evaluator.

2019-08-21 Thread Wang Yanlin (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912194#comment-16912194
 ] 

Wang Yanlin commented on CALCITE-3260:
--

Yes, that's better. I will update the PR.

> Add support of evaluate method with default Evaluator.
> --
>
> Key: CALCITE-3260
> URL: https://issues.apache.org/jira/browse/CALCITE-3260
> Project: Calcite
>  Issue Type: New Feature
>Reporter: Wang Yanlin
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the public method *evaluate* of AbstractNode needs an Evaluator 
> object as a parameter, but the Evaluator class has default access control. 
> This limits the access to the *evaluate* method. So maybe we can add an 
> overloaded *evaluate* method with a default Evaluator, thus allowing an 
> Expression to be evaluated directly.
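
A minimal sketch of what such an overload might look like. This is an 
assumption about the shape of the change, not the actual patch, and it 
presumes that Evaluator has a no-arg constructor reachable from within its own 
package:

{code:java}
// Hypothetical addition inside org.apache.calcite.linq4j.tree.AbstractNode:
// Evaluator is package-private, so the default instance is created here,
// letting callers outside the package evaluate an Expression directly.
public Object evaluate() {
  return evaluate(new Evaluator());
}
{code}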



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3277) calcite-avatica-go: panic: proto: can't skip unknown wire type 4

2019-08-21 Thread Shurmin Evgeniy (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shurmin Evgeniy updated CALCITE-3277:
-
Description: 
I can't perform a simple query to Druid using 
{{github.com/apache/calcite-avatica-go}}.

Code:


{code:java}
package main

import (
    "database/sql"
    "fmt"

    _ "github.com/apache/calcite-avatica-go/v4"
)

func main() {
    db, err := sql.Open("avatica", "http://:/druid/v2/sql/avatica/")
    if err != nil { panic(err) }
    rows, err := db.Query(`SELECT * FROM sys.servers`)
    if err != nil { panic(err) }
    defer func() {
        if err := rows.Close(); err != nil { panic(err) }
    }()
    for rows.Next() {
        var server, host float64
        err = rows.Scan(&server, &host)
        if err != nil { panic(err) }
        fmt.Printf("server: %v, host: %v\n", server, host)
    }
}
{code}



Console:

{{panic: proto: can't skip unknown wire type 4}}
 {{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
 {{Process finished with exit code 2}}

Golang:

{{go version go1.12.7 darwin/amd64}}

  was:
I can't perform simple query to druid using 
{{github.com/apache/calcite-avatica-go. }}

Code:

{{package main}}

{{import (}}
{{ "database/sql"}}
{{ "fmt"}}
{{ _ "github.com/apache/calcite-avatica-go/v4"}}
{{ )}}

{{func main() {}}
{{ db, err := sql.Open("avatica", 
"http://:/druid/v2/sql/avatica/;)}}
{{ if err != nil}}

{{{ panic(err) }}}
{{ rows, err := db.Query(`SELECT * FROM sys.servers`)}}
{{ if err != nil \{ panic(err) }}}

{{defer func() {}}
{{ if err := rows.Close(); err != nil}}

{{{ panic(err) }}}
{{ }()}}
{{ for rows.Next() {}}
{{ var server, host float64}}
{{ err = rows.Scan(, )}}
{{ if err != nil \{ panic(err) }}}

{{fmt.Printf("server: %v, host: %v\n", server, host)}}
{{ }}}
{{ }}}

Console:

{{panic: proto: can't skip unknown wire type 4}}
 {{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
 {{Process finished with exit code 2}}

Golang:

{{go version go1.12.7 darwin/amd64}}


> calcite-avatica-go: panic: proto: can't skip unknown wire type 4
> 
>
> Key: CALCITE-3277
> URL: https://issues.apache.org/jira/browse/CALCITE-3277
> Project: Calcite
>  Issue Type: Bug
>  Components: avatica-go
>Reporter: Shurmin Evgeniy
>Assignee: Francis Chuang
>Priority: Critical
>
> I can't perform simple query to druid using 
> {{github.com/apache/calcite-avatica-go. }}
> Code:
> {code:java}
> package main
> import (
>   "database/sql"
>   "fmt"
>   _ "github.com/apache/calcite-avatica-go/v4"
> )
> func main() {
>   db, err := sql.Open("avatica", 
> "http://:/druid/v2/sql/avatica/;)
>   if err != nil { panic(err) }
>   rows, err := db.Query(`SELECT * FROM sys.servers`)
>   if err != nil { panic(err) }
>   defer func() {
>   if err := rows.Close(); err != nil { panic(err) }
>   }()
>   for rows.Next() {
>   var server, host float64
>   err = rows.Scan(&server, &host)
>   if err != nil { panic(err) }
>   fmt.Printf("server: %v, host: %v\n", server, host)
>   }
> }
> {code}
> Console:
> {{panic: proto: can't skip unknown wire type 4}}
>  {{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
>  {{Process finished with exit code 2}}
> Golang:
> {{go version go1.12.7 darwin/amd64}}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-2979) Add a block-based nested loop join algorithm

2019-08-21 Thread Ruben Quesada Lopez (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912315#comment-16912315
 ] 

Ruben Quesada Lopez commented on CALCITE-2979:
--

Hi everyone,
I think the PR is in pretty good shape; if anyone has the opportunity to take 
a look at it, that would be very helpful.
In any case, I believe we can safely push this into 1.21; even though this is a 
"brand new" join implementation, the risk is very limited since the new rule 
that generates the batch nested loop operator is not part of the "default" 
Calcite rule set, so this change should not break anything.

> Add a block-based nested loop join algorithm
> 
>
> Key: CALCITE-2979
> URL: https://issues.apache.org/jira/browse/CALCITE-2979
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.19.0
>Reporter: Stamatis Zampetakis
>Assignee: Khawla Mouhoubi
>Priority: Major
>  Labels: performance, pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Currently, Calcite provides a tuple-based nested loop join algorithm 
> implemented through EnumerableCorrelate and EnumerableDefaults.correlateJoin. 
> This means that for each tuple of the outer relation we probe (set variables) 
> in the inner relation.
> The goal of this issue is to add a new algorithm (or extend the correlateJoin 
> method) which first gathers blocks (batches) of tuples from the outer 
> relation and then probes the inner relation once per block.
> There are cases (e.g., indexes) where the inner relation can be accessed by 
> more than one value at a time, which can greatly improve performance, in 
> particular when the outer relation is big.
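
As a rough illustration of the block-based idea, a simplified, generic sketch 
(this is not the EnumerableDefaults implementation under review; names and 
types are made up):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

public final class BatchNestedLoopSketch {
  /** Gathers outer rows into batches and probes the inner relation once per batch,
   * instead of once per outer row as in a tuple-at-a-time nested loop. */
  public static <L, R> void join(Iterable<L> outer, Iterable<R> inner, int batchSize,
      BiConsumer<L, R> emitIfMatch) {
    List<L> batch = new ArrayList<>(batchSize);
    for (L left : outer) {
      batch.add(left);
      if (batch.size() == batchSize) {
        probe(batch, inner, emitIfMatch);
        batch.clear();
      }
    }
    if (!batch.isEmpty()) {
      probe(batch, inner, emitIfMatch); // last, partially filled batch
    }
  }

  private static <L, R> void probe(List<L> batch, Iterable<R> inner,
      BiConsumer<L, R> emitIfMatch) {
    // One pass over the inner relation serves the whole batch; emitIfMatch is
    // expected to apply the join condition and emit the joined row when it holds.
    for (R right : inner) {
      for (L left : batch) {
        emitIfMatch.accept(left, right);
      }
    }
  }
}
{code}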



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (CALCITE-3278) Simplify the use to translate RexNode to Expression for evaluating

2019-08-21 Thread Wang Yanlin (Jira)
Wang Yanlin created CALCITE-3278:


 Summary: Simplify the use to translate RexNode to Expression for 
evaluating
 Key: CALCITE-3278
 URL: https://issues.apache.org/jira/browse/CALCITE-3278
 Project: Calcite
  Issue Type: Improvement
  Components: core
Reporter: Wang Yanlin


The method *forAggregation* of *RexToLixTranslator* is designed for 
translating aggregate functions, and takes some parameters that we do not 
actually need if we just want to translate a single RexNode.
We lack a more general-purpose way to get an instance of RexToLixTranslator.
Also, the translated expression is a *ParameterExpression*, which is not 
suitable for evaluation. When evaluating it, we get an exception like this:

{code:java}
java.lang.RuntimeException: parameter v not on stack

at org.apache.calcite.linq4j.tree.Evaluator.peek(Evaluator.java:51)
at 
org.apache.calcite.linq4j.tree.ParameterExpression.evaluate(ParameterExpression.java:55)
at 
org.apache.calcite.linq4j.tree.GotoStatement.evaluate(GotoStatement.java:97)
at 
org.apache.calcite.linq4j.tree.BlockStatement.evaluate(BlockStatement.java:83)
at org.apache.calcite.linq4j.tree.Evaluator.evaluate(Evaluator.java:55)
at 
org.apache.calcite.linq4j.tree.FunctionExpression.lambda$compile$0(FunctionExpression.java:87)
at 
org.apache.calcite.adapter.enumerable.RexToLixTranslatorTest.testRawTranslateRexNode(RexToLixTranslatorTest.java:57)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at 
com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
{code}




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3278) Simplify the use to translate RexNode to Expression for evaluating

2019-08-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated CALCITE-3278:

Labels: pull-request-available  (was: )

> Simplify the use to translate RexNode to Expression for evaluating
> --
>
> Key: CALCITE-3278
> URL: https://issues.apache.org/jira/browse/CALCITE-3278
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Reporter: Wang Yanlin
>Priority: Minor
>  Labels: pull-request-available
>
>  The method *forAggregation* of *RexToLixTranslator* is designed for 
> translating aggregate functions, and takes some parameters that we do not 
> actually need if we just want to translate a single RexNode.
> We lack a more general-purpose way to get an instance of 
> RexToLixTranslator.
> Also, the translated expression is a *ParameterExpression*, which is not 
> suitable for evaluation. When evaluating it, we get an exception like this:
> {code:java}
> java.lang.RuntimeException: parameter v not on stack
>   at org.apache.calcite.linq4j.tree.Evaluator.peek(Evaluator.java:51)
>   at 
> org.apache.calcite.linq4j.tree.ParameterExpression.evaluate(ParameterExpression.java:55)
>   at 
> org.apache.calcite.linq4j.tree.GotoStatement.evaluate(GotoStatement.java:97)
>   at 
> org.apache.calcite.linq4j.tree.BlockStatement.evaluate(BlockStatement.java:83)
>   at org.apache.calcite.linq4j.tree.Evaluator.evaluate(Evaluator.java:55)
>   at 
> org.apache.calcite.linq4j.tree.FunctionExpression.lambda$compile$0(FunctionExpression.java:87)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslatorTest.testRawTranslateRexNode(RexToLixTranslatorTest.java:57)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3277) calcite-avatica-go: panic: proto: can't skip unknown wire type 4

2019-08-21 Thread Shurmin Evgeniy (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shurmin Evgeniy updated CALCITE-3277:
-
Fix Version/s: avatica-go-4.0.0

> calcite-avatica-go: panic: proto: can't skip unknown wire type 4
> 
>
> Key: CALCITE-3277
> URL: https://issues.apache.org/jira/browse/CALCITE-3277
> Project: Calcite
>  Issue Type: Bug
>  Components: avatica-go
>Reporter: Shurmin Evgeniy
>Assignee: Francis Chuang
>Priority: Critical
> Fix For: avatica-go-4.0.0
>
>
> I can't perform simple query to druid using 
> {{github.com/apache/calcite-avatica-go. }}
> Code:
> {code:java}
> package main
> import (
>   "database/sql"
>   "fmt"
>   _ "github.com/apache/calcite-avatica-go/v4"
> )
> func main() {
>   db, err := sql.Open("avatica", 
> "http://:/druid/v2/sql/avatica/;)
>   if err != nil { panic(err) }
>   rows, err := db.Query(`SELECT * FROM sys.servers`)
>   if err != nil { panic(err) }
>   defer func() {
>   if err := rows.Close(); err != nil { panic(err) }
>   }()
>   for rows.Next() {
>   var server, host float64
>   err = rows.Scan(&server, &host)
>   if err != nil { panic(err) }
>   fmt.Printf("server: %v, host: %v\n", server, host)
>   }
> }
> {code}
> Console:
> {{panic: proto: can't skip unknown wire type 4}}
>  {{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
>  {{Process finished with exit code 2}}
> Golang:
> {{go version go1.12.7 darwin/amd64}}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (CALCITE-3277) calcite-avatica-go: panic: proto: can't skip unknown wire type 4

2019-08-21 Thread Shurmin Evgeniy (Jira)
Shurmin Evgeniy created CALCITE-3277:


 Summary: calcite-avatica-go: panic: proto: can't skip unknown wire 
type 4
 Key: CALCITE-3277
 URL: https://issues.apache.org/jira/browse/CALCITE-3277
 Project: Calcite
  Issue Type: Bug
  Components: avatica-go
Reporter: Shurmin Evgeniy
Assignee: Francis Chuang


I can't perform a simple query to Druid using 
{{github.com/apache/calcite-avatica-go}}.

Code:

{{package main}}

{{import (}}
{{ "database/sql"}}
{{ "fmt"}}
{{ _ "github.com/apache/calcite-avatica-go/v4"}}
{{)}}

{{func main() {}}
{{ db, err := sql.Open("avatica", 
"http://:/druid/v2/sql/avatica/;)}}
{{ if err != nil \{ panic(err) }}}
{{ rows, err := db.Query(`SELECT * FROM sys.servers`)}}
{{ if err != nil \{ panic(err) }}}
{{ defer func() {}}
{{ if err := rows.Close(); err != nil \{ panic(err) }}}
{{ }()}}
{{ for rows.Next() {}}
{{ var server, host float64}}
{{ err = rows.Scan(, )}}
{{ if err != nil \{ panic(err) }}}
{{ fmt.Printf("server: %v, host: %v\n", server, host)}}
{{ }}}
{{}}}

Console:

{{panic: proto: can't skip unknown wire type 4}}
{{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
{{Process finished with exit code 2}}

Golang:

{{go version go1.12.7 darwin/amd64}}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (CALCITE-2166) Cumulative cost of RelSubset.best RelNode is increased after calling RelSubset.propagateCostImprovements() for input RelNodes

2019-08-21 Thread Volodymyr Vysotskyi (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-2166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912124#comment-16912124
 ] 

Volodymyr Vysotskyi edited comment on CALCITE-2166 at 8/21/19 11:58 AM:


Thanks, Xiening Dai, for moving this issue forward! I've also considered 
similar options some time ago, and here are my thoughts regarding them:
1 & 2: Currently we store the best cost value to avoid recalculating it every 
time a new rel node is added to the rel subset. If we have to recalculate its 
cost, we also have to clear the rel metadata cache for the current rel node, 
and, if something has changed, also recalculate the parent rel nodes' costs.

I agree that #3 is too complex, and I don't have other solutions for this issue.

But in either case, it would be good if option 1 or 2 helps to fix this issue 
at least partially without introducing significant performance degradation.


was (Author: vvysotskyi):
Thanks, Xiening Dai, for moving this issue forward! I've also considered 
similar options, and here are my thoughts regarding them:
1 & 2: Currently we store the best cost value to avoid recalculating it every 
time when a new rel node is added to the rel subset. For the case when we have 
to recalculate its cost, we also have to clear rel metadata cache for current 
rel node, and for the case when something is changed, also recalculate parent 
rel nodes costs.

I agree that #3 is too complex, and I don't have other solutions for this issue.

But in either case, it would be good if 1 or 2 option will help to fix this 
issue partially without introducing significant performance degradation.

> Cumulative cost of RelSubset.best RelNode is increased after calling 
> RelSubset.propagateCostImprovements() for input RelNodes
> -
>
> Key: CALCITE-2166
> URL: https://issues.apache.org/jira/browse/CALCITE-2166
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.15.0
>Reporter: Volodymyr Vysotskyi
>Assignee: Danny Chan
>Priority: Critical
>
> After calling {{RelSubset.propagateCostImprovements()}} cumulative cost of 
> {{RelSubset.best}} {{RelNode}} may be increased due to the increase of the 
> non-cumulative cost caused by changing of input best {{RelNode}}.
> To observe this issue, add this code:
> {code:java}
>   if (subset.best != null) {
> RelOptCost bestCost = getCost(subset.best, 
> RelMetadataQuery.instance());
> if (!subset.bestCost.equals(bestCost)) {
>   throw new AssertionError(
> "relSubset [" + subset.getDescription()
>   + "] has wrong best cost "
>   + subset.bestCost + ". Correct cost is " + bestCost);
> }
>   }
> {code}
> into {{VolcanoPlanner.validate()}} method (line 907).
> List of unit tests which fail with this check:
> {noformat}
> Failed tests: 
>   
> MaterializationTest.testJoinMaterializationUKFK9:1823->checkMaterialize:198->checkMaterialize:205->checkThatMaterialize:233
>  relSubset [rel#226287:Subset#8.ENUMERABLE.[]] has wrong best cost {221.5 
> rows, 128.25 cpu, 0.0 io}. Correct cost is {233.0 rows, 178.0 cpu, 0.0 io}
>   ScannableTableTest.testPFPushDownProjectFilterAggregateNested:279 relSubset 
> [rel#12950:Subset#5.ENUMERABLE.[]] has wrong best cost {63.8 rows, 62.308 
> cpu, 0.0 io}. Correct cost is {70.4 rows, 60.404 cpu, 0.0 io}
>   ScannableTableTest.testPFTableRefusesFilterCooperative:221 relSubset 
> [rel#13382:Subset#2.ENUMERABLE.[]] has wrong best cost {81.0 rows, 181.01 
> cpu, 0.0 io}. Correct cost is {150.5 rows, 250.505 cpu, 0.0 io}
>   ScannableTableTest.testProjectableFilterableCooperative:148 relSubset 
> [rel#13611:Subset#2.ENUMERABLE.[]] has wrong best cost {81.0 rows, 181.01 
> cpu, 0.0 io}. Correct cost is {150.5 rows, 250.505 cpu, 0.0 io}
>   ScannableTableTest.testProjectableFilterableNonCooperative:165 relSubset 
> [rel#13754:Subset#2.ENUMERABLE.[]] has wrong best cost {81.0 rows, 181.01 
> cpu, 0.0 io}. Correct cost is {150.5 rows, 250.505 cpu, 0.0 io}
>   FrameworksTest.testUpdate:336->executeQuery:367 relSubset 
> [rel#22533:Subset#2.ENUMERABLE.any] has wrong best cost {19.5 rows, 37.75 
> cpu, 0.0 io}. Correct cost is {22.575 rows, 52.58 cpu, 0.0 io}
> {noformat}
> For the test {{MaterializationTest.testJoinMaterializationUKFK9}} initial 
> best plan was:
> {noformat}
> EnumerableProject(empid0=[$5], empid00=[$5], deptno0=[$7]): rowcount = 15.0, 
> cumulative cost = {15.0 rows, 45.0 cpu, 0.0 io}, id = 3989
>   EnumerableJoin(subset=[rel#3988:Subset#34.ENUMERABLE.[]], condition=[=($1, 
> $7)], joinType=[inner]): rowcount = 15.0, cumulative cost = {116.0 rows, 0.0 
> cpu, 

[jira] [Updated] (CALCITE-3277) calcite-avatica-go: panic: proto: can't skip unknown wire type 4

2019-08-21 Thread Shurmin Evgeniy (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shurmin Evgeniy updated CALCITE-3277:
-
Description: 
I can't perform simple query to druid using 
{{github.com/apache/calcite-avatica-go. }}

Code:

{{package main}}

{{import (}}
{{ "database/sql"}}
{{ "fmt"}}
{{ _ "github.com/apache/calcite-avatica-go/v4"}}
{{ )}}

{{func main() {}}
{{ db, err := sql.Open("avatica", 
"http://:/druid/v2/sql/avatica/;)}}
{{ if err != nil}}

{{{ panic(err) }}}
{{ rows, err := db.Query(`SELECT * FROM sys.servers`)}}
{{ if err != nil \{ panic(err) }}}

{{defer func() {}}
{{ if err := rows.Close(); err != nil}}

{{{ panic(err) }}}
{{ }()}}
{{ for rows.Next() {}}
{{ var server, host float64}}
{{ err = rows.Scan(, )}}
{{ if err != nil \{ panic(err) }}}

{{fmt.Printf("server: %v, host: %v\n", server, host)}}
{{ }}}
{{ }}}

Console:

{{panic: proto: can't skip unknown wire type 4}}
 {{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
 {{Process finished with exit code 2}}

Golang:

{{go version go1.12.7 darwin/amd64}}

  was:
I can't perform simple query to druid using 
{{github.com/apache/calcite-avatica-go. }}

Code:

{{package main}}

{{import (}}
{{ "database/sql"}}
{{ "fmt"}}
{{ _ "github.com/apache/calcite-avatica-go/v4"}}
{{)}}

{{func main() {}}
{{ db, err := sql.Open("avatica", 
"http://:/druid/v2/sql/avatica/;)}}
{{ if err != nil \{ panic(err) }}}
{{ rows, err := db.Query(`SELECT * FROM sys.servers`)}}
{{ if err != nil \{ panic(err) }}}
{{ defer func() {}}
{{ if err := rows.Close(); err != nil \{ panic(err) }}}
{{ }()}}
{{ for rows.Next() {}}
{{ var server, host float64}}
{{ err = rows.Scan(, )}}
{{ if err != nil \{ panic(err) }}}
{{ fmt.Printf("server: %v, host: %v\n", server, host)}}
{{ }}}
{{}}}

Console:

{{panic: proto: can't skip unknown wire type 4}}
{{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
{{Process finished with exit code 2}}

Golang:

{{go version go1.12.7 darwin/amd64}}


> calcite-avatica-go: panic: proto: can't skip unknown wire type 4
> 
>
> Key: CALCITE-3277
> URL: https://issues.apache.org/jira/browse/CALCITE-3277
> Project: Calcite
>  Issue Type: Bug
>  Components: avatica-go
>Reporter: Shurmin Evgeniy
>Assignee: Francis Chuang
>Priority: Critical
>
> I can't perform simple query to druid using 
> {{github.com/apache/calcite-avatica-go. }}
> Code:
> {{package main}}
> {{import (}}
> {{ "database/sql"}}
> {{ "fmt"}}
> {{ _ "github.com/apache/calcite-avatica-go/v4"}}
> {{ )}}
> {{func main() {}}
> {{ db, err := sql.Open("avatica", 
> "http://:/druid/v2/sql/avatica/;)}}
> {{ if err != nil}}
> {{{ panic(err) }}}
> {{ rows, err := db.Query(`SELECT * FROM sys.servers`)}}
> {{ if err != nil \{ panic(err) }}}
> {{defer func() {}}
> {{ if err := rows.Close(); err != nil}}
> {{{ panic(err) }}}
> {{ }()}}
> {{ for rows.Next() {}}
> {{ var server, host float64}}
> {{ err = rows.Scan(, )}}
> {{ if err != nil \{ panic(err) }}}
> {{fmt.Printf("server: %v, host: %v\n", server, host)}}
> {{ }}}
> {{ }}}
> Console:
> {{panic: proto: can't skip unknown wire type 4}}
>  {{goroutine 1 [running]:main.main() main.go:17 +0x30d}}
>  {{Process finished with exit code 2}}
> Golang:
> {{go version go1.12.7 darwin/amd64}}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3278) Simplify the use to translate RexNode to Expression for evaluating

2019-08-21 Thread Ruben Quesada Lopez (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912347#comment-16912347
 ] 

Ruben Quesada Lopez commented on CALCITE-3278:
--

Please note that there is another (broader) ticket regarding 
RexNode-to-Expression translation: CALCITE-3224. I'm not sure whether it might 
interfere with or supersede this one.

> Simplify the use to translate RexNode to Expression for evaluating
> --
>
> Key: CALCITE-3278
> URL: https://issues.apache.org/jira/browse/CALCITE-3278
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Reporter: Wang Yanlin
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  The method *forAggregation* of *RexToLixTranslator* is designed for 
> translating aggregate functions, and takes some parameters that we do not 
> actually need if we just want to translate a single RexNode.
> We lack a more general-purpose way to get an instance of 
> RexToLixTranslator.
> Also, the translated expression is a *ParameterExpression*, which is not 
> suitable for evaluation. When evaluating it, we get an exception like this:
> {code:java}
> java.lang.RuntimeException: parameter v not on stack
>   at org.apache.calcite.linq4j.tree.Evaluator.peek(Evaluator.java:51)
>   at 
> org.apache.calcite.linq4j.tree.ParameterExpression.evaluate(ParameterExpression.java:55)
>   at 
> org.apache.calcite.linq4j.tree.GotoStatement.evaluate(GotoStatement.java:97)
>   at 
> org.apache.calcite.linq4j.tree.BlockStatement.evaluate(BlockStatement.java:83)
>   at org.apache.calcite.linq4j.tree.Evaluator.evaluate(Evaluator.java:55)
>   at 
> org.apache.calcite.linq4j.tree.FunctionExpression.lambda$compile$0(FunctionExpression.java:87)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslatorTest.testRawTranslateRexNode(RexToLixTranslatorTest.java:57)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Comment Edited] (CALCITE-3246) RelJsonReader throws NullPointerException while deserializing from JSON a call to a user-defined function (UDF)

2019-08-21 Thread Wang Yanlin (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912199#comment-16912199
 ] 

Wang Yanlin edited comment on CALCITE-3246 at 8/21/19 2:39 PM:
---

OK. And, any suggestion for updating the PR? [~julianhyde]


was (Author: yanlin-lynn):
OK. And, any suggestion for update?

> RelJsonReader throws NullPointerException while deserializing from JSON a 
> call to a user-defined function (UDF)
> ---
>
> Key: CALCITE-3246
> URL: https://issues.apache.org/jira/browse/CALCITE-3246
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Wang Yanlin
>Assignee: Chunwei Lei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.21.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When deserializing a logical rel with a UDF operator, an NPE occurs.
> The exception stacktrace is as follows.
> {code:java}
> java.lang.RuntimeException: java.lang.NullPointerException
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:181)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:125)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:143)
>   at org.apache.calcite.plan.RelWriterTest.testUdf(RelWriterTest.java:598)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>   at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
>   at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
>   at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: java.lang.NullPointerException
>   at java.util.Objects.requireNonNull(Objects.java:203)
>   at org.apache.calcite.rex.RexCall.<init>(RexCall.java:83)
>   at org.apache.calcite.rex.RexBuilder.makeCall(RexBuilder.java:237)
>   at org.apache.calcite.rel.externalize.RelJson.toRex(RelJson.java:485)
>   at org.apache.calcite.rel.externalize.RelJsonReader$2.getExpressionList(RelJsonReader.java:204)
>   at org.apache.calcite.rel.core.Project.<init>(Project.java:100)
>   at org.apache.calcite.rel.logical.LogicalProject.<init>(LogicalProject.java:88)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.calcite.rel.externalize.RelJsonReader.readRel(RelJsonReader.java:261)
>   at org.apache.calcite.rel.externalize.RelJsonReader.readRels(RelJsonReader.java:91)
>   at org.apache.calcite.rel.externalize.RelJsonReader.read(RelJsonReader.java:85)
>   at org.apache.calcite.plan.RelWriterTest.lambda$testUdf$7(RelWriterTest.java:603)
>   at org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:130)
>   at org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:915)
> {code}
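For reference, a minimal sketch (not the ticket's test code) of the round-trip that hits this NPE: writing a plan that contains a UDF call to JSON succeeds, but on the way back RelJson.toRex cannot resolve the UDF to a SqlOperator and hands a null operator to RexBuilder.makeCall. The planner/cluster wiring is elided; class names are from org.apache.calcite.rel.externalize, and exact signatures may differ between Calcite versions.

{code:java}
import java.io.IOException;

import org.apache.calcite.plan.RelOptCluster;
import org.apache.calcite.plan.RelOptSchema;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.externalize.RelJsonReader;
import org.apache.calcite.rel.externalize.RelJsonWriter;
import org.apache.calcite.schema.SchemaPlus;

/** Sketch of the JSON round-trip; the cluster/schema arguments come from
 * the usual Frameworks/planner setup, which is elided here. */
public final class UdfJsonRoundTrip {
  private UdfJsonRoundTrip() {}

  static RelNode roundTrip(RelNode relWithUdf, RelOptCluster cluster,
      RelOptSchema relOptSchema, SchemaPlus rootSchema) throws IOException {
    // Writing the plan (including the UDF call) to JSON succeeds.
    final RelJsonWriter writer = new RelJsonWriter();
    relWithUdf.explain(writer);
    final String json = writer.asString();

    // Reading it back fails: RelJson.toRex cannot resolve the UDF to a
    // SqlOperator and passes a null operator to RexBuilder.makeCall,
    // which is where Objects.requireNonNull throws.
    final RelJsonReader reader =
        new RelJsonReader(cluster, relOptSchema, rootSchema);
    return reader.read(json);
  }
}
{code}

Presumably any fix has to make the JSON carry, or the reader resolve, enough information to reconstruct the UDF's SqlOperator instead of assuming a standard operator.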

[jira] [Updated] (CALCITE-3279) java.lang.ExceptionInInitializerError

2019-08-21 Thread xzh_dz (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xzh_dz updated CALCITE-3279:

Description: 
When I run the SparkRules main method, I get the exception below.

!image-2019-08-21-23-17-02-049.png|width=653,height=154!

  was:
When I run the SparkRules main method, I get the exception below.

!image-2019-08-21-23-17-02-049.png!


> java.lang.ExceptionInInitializerError
> -
>
> Key: CALCITE-3279
> URL: https://issues.apache.org/jira/browse/CALCITE-3279
> Project: Calcite
>  Issue Type: Bug
>Reporter: xzh_dz
>Priority: Minor
> Attachments: image-2019-08-21-23-17-02-049.png
>
>
> When I run the SparkRules main method, I get the exception below.
> !image-2019-08-21-23-17-02-049.png|width=653,height=154!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (CALCITE-3279) java.lang.ExceptionInInitializerError

2019-08-21 Thread xzh_dz (Jira)
xzh_dz created CALCITE-3279:
---

 Summary: java.lang.ExceptionInInitializerError
 Key: CALCITE-3279
 URL: https://issues.apache.org/jira/browse/CALCITE-3279
 Project: Calcite
  Issue Type: Bug
Reporter: xzh_dz
 Attachments: image-2019-08-21-23-17-02-049.png

When I run the SparkRules main method, I get the exception below.

!image-2019-08-21-23-17-02-049.png!
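A hedged diagnosis sketch (not from the report): ExceptionInInitializerError only wraps the failure thrown by a static initializer, so its getCause() is what actually identifies the problem, often a NoClassDefFoundError or NoSuchMethodError caused by a dependency conflict. The wrapper class below is hypothetical; only SparkRules.main is taken from the report, assumed to be org.apache.calcite.adapter.spark.SparkRules in the calcite-spark module.

{code:java}
import org.apache.calcite.adapter.spark.SparkRules;

/** Hypothetical wrapper that surfaces the root cause of the error. */
public final class SparkRulesRepro {
  public static void main(String[] args) throws Exception {
    try {
      SparkRules.main(args);
    } catch (ExceptionInInitializerError e) {
      // The wrapped cause is the part worth attaching to the ticket.
      final Throwable cause = e.getCause();
      System.err.println("Static initializer failed, cause: " + cause);
      (cause != null ? cause : e).printStackTrace();
    }
  }
}
{code}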



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3279) java.lang.ExceptionInInitializerError

2019-08-21 Thread xzh_dz (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xzh_dz updated CALCITE-3279:

Attachment: image-2019-08-22-00-01-34-306.png

> java.lang.ExceptionInInitializerError
> -
>
> Key: CALCITE-3279
> URL: https://issues.apache.org/jira/browse/CALCITE-3279
> Project: Calcite
>  Issue Type: Bug
>Reporter: xzh_dz
>Priority: Minor
> Attachments: image-2019-08-21-23-17-02-049.png, 
> image-2019-08-22-00-01-34-306.png
>
>
> When I run the SparkRules main method, I get the exception below.
> !image-2019-08-21-23-17-02-049.png|width=653,height=154!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (CALCITE-3279) java.lang.ExceptionInInitializerError

2019-08-21 Thread xzh_dz (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xzh_dz updated CALCITE-3279:

Attachment: image-2019-08-22-00-02-56-686.png

> java.lang.ExceptionInInitializerError
> -
>
> Key: CALCITE-3279
> URL: https://issues.apache.org/jira/browse/CALCITE-3279
> Project: Calcite
>  Issue Type: Bug
>Reporter: xzh_dz
>Priority: Minor
> Attachments: image-2019-08-21-23-17-02-049.png, 
> image-2019-08-22-00-01-34-306.png, image-2019-08-22-00-02-56-686.png
>
>
> When I run the SparkRules main method, I get the exception below.
> !image-2019-08-21-23-17-02-049.png|width=653,height=154!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (CALCITE-3279) java.lang.ExceptionInInitializerError

2019-08-21 Thread xzh_dz (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16912451#comment-16912451
 ] 

xzh_dz commented on CALCITE-3279:
-

!image-2019-08-22-00-02-56-686.png|width=489,height=101!

The dependency:tree output is shown above.
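One further hedged suggestion, in addition to the dependency tree: printing where the JVM actually loads a suspect class from shows which jar wins on the classpath. The class in the sketch below (Guava's ImmutableList) is only a hypothetical example; substitute whichever class the failing initializer refers to.

{code:java}
/** Hypothetical probe; Guava's ImmutableList is only an example class. */
public final class WhichJar {
  public static void main(String[] args) {
    final Class<?> suspect = com.google.common.collect.ImmutableList.class;
    // Prints the jar (or directory) this class was actually loaded from.
    System.out.println(
        suspect.getProtectionDomain().getCodeSource().getLocation());
  }
}
{code}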

> java.lang.ExceptionInInitializerError
> -
>
> Key: CALCITE-3279
> URL: https://issues.apache.org/jira/browse/CALCITE-3279
> Project: Calcite
>  Issue Type: Bug
>Reporter: xzh_dz
>Priority: Minor
> Attachments: image-2019-08-21-23-17-02-049.png, 
> image-2019-08-22-00-01-34-306.png, image-2019-08-22-00-02-56-686.png
>
>
> When I run the SparkRules main method, I get the exception below.
> !image-2019-08-21-23-17-02-049.png|width=653,height=154!



--
This message was sent by Atlassian Jira
(v8.3.2#803003)