[jira] [Commented] (CALCITE-3511) Failed to serialize SqlNode to json string using jackson

2019-11-21 Thread groobyming (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979942#comment-16979942
 ] 

groobyming commented on CALCITE-3511:
-

Thanks for all your replies, [~danny0405] [~julianhyde]. I will try to create a 
tool that converts a SqlNode to a JSON string.
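
(For reference, a minimal workaround sketch until such a tool exists: SqlNode has 
no built-in JSON form, but it can be unparsed to SQL text, which avoids Jackson 
reflecting over operator getters that throw. The query string here is just an 
example.)

{code:java}
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.dialect.AnsiSqlDialect;
import org.apache.calcite.sql.parser.SqlParseException;
import org.apache.calcite.sql.parser.SqlParser;

public class SqlNodeToString {
  public static void main(String[] args) throws SqlParseException {
    SqlNode node = SqlParser.create("select empno from emp where deptno = 10")
        .parseQuery();
    // Unparse back to SQL text instead of serializing the object graph.
    String sql = node.toSqlString(AnsiSqlDialect.DEFAULT).getSql();
    System.out.println(sql);
  }
}
{code}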

> Failed to serialize SqlNode to json string using jackson
> 
>
> Key: CALCITE-3511
> URL: https://issues.apache.org/jira/browse/CALCITE-3511
> Project: Calcite
>  Issue Type: New Feature
>  Components: core
>Affects Versions: 1.21.0
>Reporter: groobyming
>Priority: Major
>  Labels: SqlNode, calcite, calcite-core, jackson, json
> Attachments: screenshot-1.png
>
>
> Failed to serialize SqlNode to json string using jackson, Exception as 
> follows:
> Caused by: java.lang.UnsupportedOperationException: class 
> org.apache.calcite.sql.SqlSelectOperator: SELECT
>   at org.apache.calcite.util.Util.needToImplement(Util.java:967)
>   at 
> org.apache.calcite.sql.SqlOperator.getOperandCountRange(SqlOperator.java:201)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:664)
>   at 
> com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:689)
>   ... 32 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (CALCITE-3431) SemiJoinRule doesn't work when right-hand side is not Aggregate but distinct on join keys

2019-11-21 Thread Jin Xing (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jin Xing reopened CALCITE-3431:
---

> SemiJoinRule doesn't work when right-hand side is not Aggregate but distinct 
> on join keys
> -
>
> Key: CALCITE-3431
> URL: https://issues.apache.org/jira/browse/CALCITE-3431
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Reporter: Jin Xing
>Assignee: Jin Xing
>Priority: Major
>
> Calcite generates semi-joins via SemiJoinRule, which only matches the pattern 
> Join(RelNode, Aggregate).
> Take the SQL below as an example:
> {code:java}
>  select * from dept 
>  where exists (
>   select * from emp
>   where emp.empno = dept.deptno)
> {code}
> If empno is a key column of emp, no Aggregate is generated for the subquery, 
> and the original plan is as below:
> {code:java}
> LogicalProject(DEPTNO=[$0], NAME=[$1])
>   LogicalJoin(condition=[=($0, $2)], joinType=[inner])
>     LogicalTableScan(table=[[CATALOG, SALES, DEPT]])
>     LogicalProject(EMPNO=[$0])
>       LogicalTableScan(table=[[CATALOG, SALES, EMP]])
> {code}
> Thus SemiJoinRule cannot match such a plan, even though the right-hand side is 
> distinct on the join keys.
>  
>  
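
(For illustration, a minimal RelBuilder sketch of the shape SemiJoinRule does 
match: an Aggregate, distinct on the join key, on the right-hand side. It 
assumes a FrameworkConfig whose default schema contains the DEPT and EMP 
tables, e.g. the test SALES schema.)

{code:java}
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.core.JoinRelType;
import org.apache.calcite.sql.fun.SqlStdOperatorTable;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.RelBuilder;

public class SemiJoinShapeSketch {
  /** Sketch only; frameworkConfig is assumed to expose DEPT and EMP. */
  static RelNode semiJoinShape(FrameworkConfig frameworkConfig) {
    RelBuilder b = RelBuilder.create(frameworkConfig);
    return b.scan("DEPT")
        .scan("EMP")
        .project(b.field("EMPNO"))
        .aggregate(b.groupKey("EMPNO"))   // distinct on the join key => Aggregate
        .join(JoinRelType.INNER,
            b.call(SqlStdOperatorTable.EQUALS,
                b.field(2, 0, "DEPTNO"),
                b.field(2, 1, "EMPNO")))
        .build();
  }
}
{code}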



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (CALCITE-3431) SemiJoinRule doesn't work when right-hand side is not Aggregate but distinct on join keys

2019-11-21 Thread Jin Xing (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jin Xing closed CALCITE-3431.
-
Resolution: Won't Fix

> SemiJoinRule doesn't work when right-hand side is not Aggregate but distinct 
> on join keys
> -
>
> Key: CALCITE-3431
> URL: https://issues.apache.org/jira/browse/CALCITE-3431
> Project: Calcite
>  Issue Type: Improvement
>  Components: core
>Reporter: Jin Xing
>Assignee: Jin Xing
>Priority: Major
>
> Calcite generates semi-joins via SemiJoinRule, which only matches the pattern 
> Join(RelNode, Aggregate).
> Take the SQL below as an example:
> {code:java}
>  select * from dept 
>  where exists (
>   select * from emp
>   where emp.empno = dept.deptno)
> {code}
> If empno is a key column of emp, no Aggregate is generated for the subquery, 
> and the original plan is as below:
> {code:java}
> LogicalProject(DEPTNO=[$0], NAME=[$1])
>   LogicalJoin(condition=[=($0, $2)], joinType=[inner])
>     LogicalTableScan(table=[[CATALOG, SALES, DEPT]])
>     LogicalProject(EMPNO=[$0])
>       LogicalTableScan(table=[[CATALOG, SALES, EMP]])
> {code}
> Thus SemiJoinRule cannot match such a plan, even though the right-hand side is 
> distinct on the join keys.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (CALCITE-3505) Infinite matching of FilterProjectTransposeRule causes stackoverflow

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979934#comment-16979934
 ] 

Jin Xing edited comment on CALCITE-3505 at 11/22/19 7:36 AM:
-

I strongly think we should resolve the infinite rule matching caused by cycles 
at the engine level, because when a new Calcite user creates a RelOptRule and 
infinite matching happens, it is really hard to debug the root cause and work 
out a quick solution like [this PR|https://github.com/apache/calcite/pull/1292].


was (Author: jinxing6...@126.com):
I strongly think we should resolve the infinite rule matching caused by cycle 
by engine level. Because when a new Calcite user create a RelOptRule, if 
infinite matching happens, it's really hard to debug the root cause and work 
out a quick solution like [link PR|https://github.com/apache/calcite/pull/1292]

> Infinite matching of FilterProjectTransposeRule causes stackoverflow
> 
>
> Key: CALCITE-3505
> URL: https://issues.apache.org/jira/browse/CALCITE-3505
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Jin Xing
>Priority: Major
> Attachments: graphviz.svg
>
>
> Run ScannableTableTest#testProjectableFilterableTableJoin with a minor change 
> to reproduce:
> {code:java}
> @Test public void testProjectableFilterableTableJoin() throws Exception {
>   final StringBuilder buf = new StringBuilder();
>   final String explain = "PLAN="
>       + "EnumerableHashJoin(condition=[=($0, $3)], joinType=[inner])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b1]], filters=[[=($0, 10)]])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b2]], filters=[[=($0, 10)]])";
>   CalciteAssert.that()
>       .with(newSchema("s",
>           Pair.of("b1", new BeatlesProjectableFilterableTable(buf, true)),
>           Pair.of("b2", new BeatlesProjectableFilterableTable(buf, true))))
>       .query("select * from \"s\".\"b1\", \"s\".\"b2\" "
>           + "where \"s\".\"b1\".\"i\" = 10 and \"s\".\"b2\".\"i\" = 10 "
>           + "and \"s\".\"b1\".\"i\" = \"s\".\"b2\".\"i\"")
>       .withHook(Hook.PLANNER, (Consumer<RelOptPlanner>) planner -> {
>         planner.removeRule(EnumerableRules.ENUMERABLE_MERGE_JOIN_RULE);
>       })
>       .explainContains(explain);
> }
> {code}
> This test has nothing to do with ENUMERABLE_MERGE_JOIN_RULE, but if we disable 
> that rule with the planner hook, a stack overflow happens.
> I debugged and found that FilterProjectTransposeRule is matched infinitely, 
> but I am not sure of the root cause.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (CALCITE-3505) Infinite matching of FilterProjectTransposeRule causes stackoverflow

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979934#comment-16979934
 ] 

Jin Xing edited comment on CALCITE-3505 at 11/22/19 7:35 AM:
-

I strongly think we should resolve the infinite rule matching caused by cycles 
at the engine level, because when a new Calcite user creates a RelOptRule and 
infinite matching happens, it is really hard to debug the root cause and work 
out a quick solution like [this PR|https://github.com/apache/calcite/pull/1292].


was (Author: jinxing6...@126.com):
I strongly think we should resolve the infinite rule matching caused by cycle 
by engine level. Because when a new Calcite user create a RelOptRule, if 
infinite matching happens, it's really hard to debug the root cause and work 
out a quick solution like [PR|[https://github.com/apache/calcite/pull/1292]]

> Infinite matching of FilterProjectTransposeRule causes stackoverflow
> 
>
> Key: CALCITE-3505
> URL: https://issues.apache.org/jira/browse/CALCITE-3505
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Jin Xing
>Priority: Major
> Attachments: graphviz.svg
>
>
> Run ScannableTableTest#testProjectableFilterableTableJoin with a minor change 
> to reproduce:
> {code:java}
> @Test public void testProjectableFilterableTableJoin() throws Exception {
>   final StringBuilder buf = new StringBuilder();
>   final String explain = "PLAN="
>       + "EnumerableHashJoin(condition=[=($0, $3)], joinType=[inner])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b1]], filters=[[=($0, 10)]])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b2]], filters=[[=($0, 10)]])";
>   CalciteAssert.that()
>       .with(newSchema("s",
>           Pair.of("b1", new BeatlesProjectableFilterableTable(buf, true)),
>           Pair.of("b2", new BeatlesProjectableFilterableTable(buf, true))))
>       .query("select * from \"s\".\"b1\", \"s\".\"b2\" "
>           + "where \"s\".\"b1\".\"i\" = 10 and \"s\".\"b2\".\"i\" = 10 "
>           + "and \"s\".\"b1\".\"i\" = \"s\".\"b2\".\"i\"")
>       .withHook(Hook.PLANNER, (Consumer<RelOptPlanner>) planner -> {
>         planner.removeRule(EnumerableRules.ENUMERABLE_MERGE_JOIN_RULE);
>       })
>       .explainContains(explain);
> }
> {code}
> This test has nothing to do with ENUMERABLE_MERGE_JOIN_RULE, but if we disable 
> that rule with the planner hook, a stack overflow happens.
> I debugged and found that FilterProjectTransposeRule is matched infinitely, 
> but I am not sure of the root cause.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3505) Infinite matching of FilterProjectTransposeRule causes stackoverflow

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979934#comment-16979934
 ] 

Jin Xing commented on CALCITE-3505:
---

I strongly think we should resolve the infinite rule matching caused by cycles 
at the engine level, because when a new Calcite user creates a RelOptRule and 
infinite matching happens, it is really hard to debug the root cause and work 
out a quick solution like [this PR|https://github.com/apache/calcite/pull/1292].

> Infinite matching of FilterProjectTransposeRule causes stackoverflow
> 
>
> Key: CALCITE-3505
> URL: https://issues.apache.org/jira/browse/CALCITE-3505
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Jin Xing
>Priority: Major
> Attachments: graphviz.svg
>
>
> Run ScannableTableTest#testProjectableFilterableTableJoin with a minor change 
> to reproduce:
> {code:java}
> @Test public void testProjectableFilterableTableJoin() throws Exception {
>   final StringBuilder buf = new StringBuilder();
>   final String explain = "PLAN="
>       + "EnumerableHashJoin(condition=[=($0, $3)], joinType=[inner])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b1]], filters=[[=($0, 10)]])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b2]], filters=[[=($0, 10)]])";
>   CalciteAssert.that()
>       .with(newSchema("s",
>           Pair.of("b1", new BeatlesProjectableFilterableTable(buf, true)),
>           Pair.of("b2", new BeatlesProjectableFilterableTable(buf, true))))
>       .query("select * from \"s\".\"b1\", \"s\".\"b2\" "
>           + "where \"s\".\"b1\".\"i\" = 10 and \"s\".\"b2\".\"i\" = 10 "
>           + "and \"s\".\"b1\".\"i\" = \"s\".\"b2\".\"i\"")
>       .withHook(Hook.PLANNER, (Consumer<RelOptPlanner>) planner -> {
>         planner.removeRule(EnumerableRules.ENUMERABLE_MERGE_JOIN_RULE);
>       })
>       .explainContains(explain);
> }
> {code}
> This test has nothing to do with ENUMERABLE_MERGE_JOIN_RULE, but if we disable 
> that rule with the planner hook, a stack overflow happens.
> I debugged and found that FilterProjectTransposeRule is matched infinitely, 
> but I am not sure of the root cause.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (CALCITE-3505) Infinite matching of FilterProjectTransposeRule causes stackoverflow

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979934#comment-16979934
 ] 

Jin Xing edited comment on CALCITE-3505 at 11/22/19 7:34 AM:
-

I strongly think we should resolve the infinite rule matching caused by cycles 
at the engine level, because when a new Calcite user creates a RelOptRule and 
infinite matching happens, it is really hard to debug the root cause and work 
out a quick solution like [this PR|https://github.com/apache/calcite/pull/1292].


was (Author: jinxing6...@126.com):
I strongly think we should resolve the infinite rule matching caused by cycle 
by engine level. Because when a new Calcite user create a RelOptRule, if 
infinite matching happens, it's really hard to debug the root cause and work 
out a quick solution like [PR|[https://github.com/apache/calcite/pull/1292]]

> Infinite matching of FilterProjectTransposeRule causes stackoverflow
> 
>
> Key: CALCITE-3505
> URL: https://issues.apache.org/jira/browse/CALCITE-3505
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Jin Xing
>Priority: Major
> Attachments: graphviz.svg
>
>
> Run ScannableTableTest#testProjectableFilterableTableJoin with a minor change 
> to reproduce:
> {code:java}
> @Test public void testProjectableFilterableTableJoin() throws Exception {
>   final StringBuilder buf = new StringBuilder();
>   final String explain = "PLAN="
>       + "EnumerableHashJoin(condition=[=($0, $3)], joinType=[inner])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b1]], filters=[[=($0, 10)]])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b2]], filters=[[=($0, 10)]])";
>   CalciteAssert.that()
>       .with(newSchema("s",
>           Pair.of("b1", new BeatlesProjectableFilterableTable(buf, true)),
>           Pair.of("b2", new BeatlesProjectableFilterableTable(buf, true))))
>       .query("select * from \"s\".\"b1\", \"s\".\"b2\" "
>           + "where \"s\".\"b1\".\"i\" = 10 and \"s\".\"b2\".\"i\" = 10 "
>           + "and \"s\".\"b1\".\"i\" = \"s\".\"b2\".\"i\"")
>       .withHook(Hook.PLANNER, (Consumer<RelOptPlanner>) planner -> {
>         planner.removeRule(EnumerableRules.ENUMERABLE_MERGE_JOIN_RULE);
>       })
>       .explainContains(explain);
> }
> {code}
> This test has nothing to do with ENUMERABLE_MERGE_JOIN_RULE, but if we disable 
> that rule with the planner hook, a stack overflow happens.
> I debugged and found that FilterProjectTransposeRule is matched infinitely, 
> but I am not sure of the root cause.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (CALCITE-3505) Infinite matching of FilterProjectTransposeRule causes stackoverflow

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979925#comment-16979925
 ] 

Jin Xing edited comment on CALCITE-3505 at 11/22/19 7:18 AM:
-

Currently, Calcite checks whether to skip a rule match in 
[RuleQueue|https://github.com/apache/calcite/blob/295ab13e8338bdd0e0c29e051907371c9b2929aa/core/src/main/java/org/apache/calcite/plan/volcano/RuleQueue.java#L520],
 but it only checks for a cycle along the path of the rule operands. In this 
case a cycle exists, but it is longer than the path of the operands.

[https://github.com/apache/calcite/pull/855] refines the check to also cover the 
inputs of the operands, but I think that might still not be enough: consider a 
cycle that contains three RelSubsets while the operand path only has length 2.

Intuitively I see two approaches to resolve this issue; a sketch of the second 
one follows below.
 # Ban the cycles. I agree with [~julianhyde] in CALCITE-2223 that we would lose 
a lot with this approach. A cycle can actually arise very easily – a RelSubset 
that contains an identity Project will reference itself.
 # Remember the matching history – if a group of nodes was already matched by a 
certain rule, there is no need to do the matching again.
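
A minimal sketch of what approach #2 could look like (a hypothetical helper, 
not how VolcanoPlanner is actually implemented):

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.rel.RelNode;

/** Hypothetical memo of fired rule matches, keyed by rule + operand node ids. */
class RuleMatchMemo {
  private final Set<String> fired = new HashSet<>();

  /** Returns true only the first time this rule sees this exact row of operands. */
  boolean shouldFire(RelOptRule rule, List<RelNode> operands) {
    StringBuilder key = new StringBuilder(rule.toString());
    for (RelNode r : operands) {
      key.append('|').append(r.getId());
    }
    return fired.add(key.toString());
  }
}
{code}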


was (Author: jinxing6...@126.com):
Currently, Calcite checks whether to skip a rule match by 
[link|[https://github.com/apache/calcite/blob/295ab13e8338bdd0e0c29e051907371c9b2929aa/core/src/main/java/org/apache/calcite/plan/volcano/RuleQueue.java#L520]].
 But it only checks the cycle along the path of rule operands. In this case the 
cycle exists but longer than the path of operands.

[https://github.com/apache/calcite/pull/855] refines and checks cycle to 
include input of the operands. But I think it might be not enough. Think that 
there are three RelSubsets in the cycle  and the operands length is 2. 

Intuitively I think there are two approaches to resolve this issue.
 # Ban the cycles. I will go with [~julianhyde] in CALCITE-2223 that we will 
lose a lot by this approach. Actually cycle can happen very easily – RelSubset 
which contains an identity Project will reference itself.
 # Remember the matching history – – if a group of nodes was matched before by 
a certain rule, there's no need to do the matching again.

> Infinite matching of FilterProjectTransposeRule causes stackoverflow
> 
>
> Key: CALCITE-3505
> URL: https://issues.apache.org/jira/browse/CALCITE-3505
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Jin Xing
>Priority: Major
> Attachments: graphviz.svg
>
>
> Run ScannableTableTest#testProjectableFilterableTableJoin with a minor change 
> to reproduce:
> {code:java}
> @Test public void testProjectableFilterableTableJoin() throws Exception {
>   final StringBuilder buf = new StringBuilder();
>   final String explain = "PLAN="
>       + "EnumerableHashJoin(condition=[=($0, $3)], joinType=[inner])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b1]], filters=[[=($0, 10)]])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b2]], filters=[[=($0, 10)]])";
>   CalciteAssert.that()
>       .with(newSchema("s",
>           Pair.of("b1", new BeatlesProjectableFilterableTable(buf, true)),
>           Pair.of("b2", new BeatlesProjectableFilterableTable(buf, true))))
>       .query("select * from \"s\".\"b1\", \"s\".\"b2\" "
>           + "where \"s\".\"b1\".\"i\" = 10 and \"s\".\"b2\".\"i\" = 10 "
>           + "and \"s\".\"b1\".\"i\" = \"s\".\"b2\".\"i\"")
>       .withHook(Hook.PLANNER, (Consumer<RelOptPlanner>) planner -> {
>         planner.removeRule(EnumerableRules.ENUMERABLE_MERGE_JOIN_RULE);
>       })
>       .explainContains(explain);
> }
> {code}
> This test has nothing to do with ENUMERABLE_MERGE_JOIN_RULE, but if we disable 
> that rule with the planner hook, a stack overflow happens.
> I debugged and found that FilterProjectTransposeRule is matched infinitely, 
> but I am not sure of the root cause.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3505) Infinite matching of FilterProjectTransposeRule causes stackoverflow

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979925#comment-16979925
 ] 

Jin Xing commented on CALCITE-3505:
---

Currently, Calcite checks whether to skip a rule match in 
[RuleQueue|https://github.com/apache/calcite/blob/295ab13e8338bdd0e0c29e051907371c9b2929aa/core/src/main/java/org/apache/calcite/plan/volcano/RuleQueue.java#L520],
 but it only checks for a cycle along the path of the rule operands. In this 
case a cycle exists, but it is longer than the path of the operands.

[https://github.com/apache/calcite/pull/855] refines the check to also cover the 
inputs of the operands, but I think that might still not be enough: consider a 
cycle that contains three RelSubsets while the operand path only has length 2.

Intuitively I think there are two approaches to resolve this issue.
 # Ban the cycles. I agree with [~julianhyde] in CALCITE-2223 that we would lose 
a lot with this approach. A cycle can actually arise very easily – a RelSubset 
that contains an identity Project will reference itself.
 # Remember the matching history – if a group of nodes was already matched by a 
certain rule, there is no need to do the matching again.

> Infinite matching of FilterProjectTransposeRule causes stackoverflow
> 
>
> Key: CALCITE-3505
> URL: https://issues.apache.org/jira/browse/CALCITE-3505
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Jin Xing
>Priority: Major
> Attachments: graphviz.svg
>
>
> Run ScannableTableTest#testProjectableFilterableTableJoin with a minor change 
> to reproduce:
> {code:java}
> @Test public void testProjectableFilterableTableJoin() throws Exception {
>   final StringBuilder buf = new StringBuilder();
>   final String explain = "PLAN="
>       + "EnumerableHashJoin(condition=[=($0, $3)], joinType=[inner])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b1]], filters=[[=($0, 10)]])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b2]], filters=[[=($0, 10)]])";
>   CalciteAssert.that()
>       .with(newSchema("s",
>           Pair.of("b1", new BeatlesProjectableFilterableTable(buf, true)),
>           Pair.of("b2", new BeatlesProjectableFilterableTable(buf, true))))
>       .query("select * from \"s\".\"b1\", \"s\".\"b2\" "
>           + "where \"s\".\"b1\".\"i\" = 10 and \"s\".\"b2\".\"i\" = 10 "
>           + "and \"s\".\"b1\".\"i\" = \"s\".\"b2\".\"i\"")
>       .withHook(Hook.PLANNER, (Consumer<RelOptPlanner>) planner -> {
>         planner.removeRule(EnumerableRules.ENUMERABLE_MERGE_JOIN_RULE);
>       })
>       .explainContains(explain);
> }
> {code}
> This test has nothing to do with ENUMERABLE_MERGE_JOIN_RULE, but if we disable 
> that rule with the planner hook, a stack overflow happens.
> I debugged and found that FilterProjectTransposeRule is matched infinitely, 
> but I am not sure of the root cause.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (CALCITE-3505) Infinite matching of FilterProjectTransposeRule causes stackoverflow

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979910#comment-16979910
 ] 

Jin Xing edited comment on CALCITE-3505 at 11/22/19 6:51 AM:
-

[~hyuan] I commented in CALCITE-3142 and gave my understanding of the root 
cause.

The root cause of this issue is different from CALCITE-3142 but similar to 
CALCITE-2223.

Yes, [~zabetak], it is caused by a cycle between RelSubsets.

The infinite loop happens through the steps below.

Step-1:

The SQL and original plan are as below:
{code:java}
select * from b1, b2
where b1.i = 10 and b2.i = 10
and b1.i = b2.i

LogicalProject(i=[$0], j=[$1], k=[$2], i0=[$3], j0=[$4], k0=[$5])
  LogicalFilter(condition=[AND(=($0, 10), =($3, 10), =($0, $3))])
    LogicalJoin(condition=[true], joinType=[inner])
      LogicalTableScan(table=[[s, b1]])
      LogicalTableScan(table=[[s, b2]])
{code}
 

Step-2:

Thanks to FilterJoinRule, ProjectRemoveRule and JoinCommuteRule, the original 
plan can be transformed into equivalents, collected as RelSubset-M:
{code:java}
RelSubset-M:
LogicalProject(i=[$0], j=[$1], k=[$2], i0=[$3], j0=[$4], k0=[$5])   --> OriginalPlan
  LogicalFilter(condition=[AND(=($0, 10), =($3, 10), =($0, $3))])
    LogicalJoin(condition=[true], joinType=[inner])
      LogicalTableScan(table=[[s, b1]])
      LogicalTableScan(table=[[s, b2]])

LogicalFilter(condition=[AND(=($0, 10), =($3, 10), =($0, $3))])     --> OriginalPlan-equiv0
  LogicalJoin(condition=[true], joinType=[inner])
    LogicalTableScan(table=[[s, b1]])
    LogicalTableScan(table=[[s, b2]])

LogicalFilter(condition=[=($0, $3)])                                --> OriginalPlan-equiv1
  LogicalJoin(condition=[true], joinType=[inner])
    LogicalFilter(condition=[=($0, 10)])
      LogicalTableScan(table=[[s, b1]])
    LogicalFilter(condition=[=($0, 10)])
      LogicalTableScan(table=[[s, b2]])

LogicalProject(i=[$3], j=[$4], k=[$5], i0=[$0], j0=[$1], k0=[$2])   --> OriginalPlan-equiv2
  RelSubset-N
{code}
 

Step-3:

During the transformation in Step-2, we will get a RelSubset-N as below:
{code:java}
RelSubset-N:
LogicalFilter(condition=[=($3, $0)])                                --> Plan0
  LogicalJoin(condition=[true], joinType=[inner])
    LogicalFilter(condition=[=($0, 10)])
      LogicalTableScan(table=[[s, b2]])
    LogicalFilter(condition=[=($0, 10)])
      LogicalTableScan(table=[[s, b1]])

LogicalProject(i=[$3], j=[$4], k=[$5], i0=[$0], j0=[$1], k0=[$2])   --> Plan1
  RelSubset-M
{code}
 

From here, we notice that RelSubset-M and RelSubset-N reference each other and 
form a cycle.
 # The LogicalProject in OriginalPlan-equiv2 and the LogicalFilter(condition=[=($3, 
$0)]) in Plan0 are matched by ProjectFilterTransposeRule, which generates a new 
Filter in RelSubset-M.
 # The LogicalProject in Plan1 and the newly generated Filter in RelSubset-M are 
matched by ProjectFilterTransposeRule again, generating another new Filter.
 # The two steps above alternate infinitely.

 


was (Author: jinxing6...@126.com):
[~hyuan] I commented in CALCITE-3142 and gave my understanding about the root 
cause.

The root cause of this issue is different from CALCITE-3142 but similar with 
CALCITE-2223.

Yes, [~zabetak] . It's caused by cycle between RelSubsets.

The infinite loop happens by below steps.

Step-1:

The Sql and original plan is as below

 
{code:java}
select * from b1, b2
where b1.i = 10 and b2.i = 10
and b1.i = b2.i

LogicalProject(i=[$0], j=[$1], k=[$2], i0=[$3], j0=[$4], k0=[$5])
  LogicalFilter(condition=[AND(=($0, 10), =($3, 10), =($0, $3))])
LogicalJoin(condition=[true], joinType=[inner])
  LogicalTableScan(table=[[s, b1]])
  LogicalTableScan(table=[[s, b2]])
{code}
 

Step-2:

Thanks to FilterJoinRule, ProjectRemoveRule & JoinCommuteRule, the original 
plan can be transformed to be equivalents and collect as RelSubset-M:

 
{code:java}
RelSubset-M:
LogicalProject(i=[$0], j=[$1], k=[$2], i0=[$3], j0=[$4], k0=[$5]) 
--> OriginalPlan
 LogicalFilter(condition=[AND(=($0, 10), =($3, 10), =($0, $3))])
   LogicalJoin(condition=[true], joinType=[inner])
 LogicalTableScan(table=[[s, b1]])
 LogicalTableScan(table=[[s, b2]])

LogicalFilter(condition=[AND(=($0, 10), =($3, 10), =($0, $3))])   
--> OriginalPlan-equiv0
  LogicalJoin(condition=[true], joinType=[inner])
LogicalTableScan(table=[[s, b1]])
LogicalTableScan(table=[[s, b2]])

LogicalFilter(condition=[=($0, $3))]) 
--> OriginalPlan-equiv1
  LogicalJoin(condition=[true], joinType=[inner])
LogicalFilter(condition=[=($0, 10)])
  LogicalTableScan(table=[[s, b1]])
LogicalFilter(condition=[=($0, 10)])
  

[jira] [Commented] (CALCITE-3505) Infinite matching of FilterProjectTransposeRule causes stackoverflow

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979910#comment-16979910
 ] 

Jin Xing commented on CALCITE-3505:
---

[~hyuan] I commented in CALCITE-3142 and gave my understanding of the root 
cause.

The root cause of this issue is different from CALCITE-3142 but similar to 
CALCITE-2223.

Yes, [~zabetak], it is caused by a cycle between RelSubsets.

The infinite loop happens through the steps below.

Step-1:

The SQL and original plan are as below:

 
{code:java}
select * from b1, b2
where b1.i = 10 and b2.i = 10
and b1.i = b2.i

LogicalProject(i=[$0], j=[$1], k=[$2], i0=[$3], j0=[$4], k0=[$5])
  LogicalFilter(condition=[AND(=($0, 10), =($3, 10), =($0, $3))])
    LogicalJoin(condition=[true], joinType=[inner])
      LogicalTableScan(table=[[s, b1]])
      LogicalTableScan(table=[[s, b2]])
{code}
 

Step-2:

Thanks to FilterJoinRule, ProjectRemoveRule and JoinCommuteRule, the original 
plan can be transformed into equivalents, collected as RelSubset-M:

 
{code:java}
RelSubset-M:
LogicalProject(i=[$0], j=[$1], k=[$2], i0=[$3], j0=[$4], k0=[$5])   --> OriginalPlan
  LogicalFilter(condition=[AND(=($0, 10), =($3, 10), =($0, $3))])
    LogicalJoin(condition=[true], joinType=[inner])
      LogicalTableScan(table=[[s, b1]])
      LogicalTableScan(table=[[s, b2]])

LogicalFilter(condition=[AND(=($0, 10), =($3, 10), =($0, $3))])     --> OriginalPlan-equiv0
  LogicalJoin(condition=[true], joinType=[inner])
    LogicalTableScan(table=[[s, b1]])
    LogicalTableScan(table=[[s, b2]])

LogicalFilter(condition=[=($0, $3)])                                --> OriginalPlan-equiv1
  LogicalJoin(condition=[true], joinType=[inner])
    LogicalFilter(condition=[=($0, 10)])
      LogicalTableScan(table=[[s, b1]])
    LogicalFilter(condition=[=($0, 10)])
      LogicalTableScan(table=[[s, b2]])

LogicalProject(i=[$3], j=[$4], k=[$5], i0=[$0], j0=[$1], k0=[$2])   --> OriginalPlan-equiv2
  RelSubset-N
{code}
 

Step-3:

During the transformation in Step-2, we will get a RelSubset-N as below:

 
{code:java}
RelSubset-N:
LogicalFilter(condition=[=($3, $0)])                                --> Plan0
  LogicalJoin(condition=[true], joinType=[inner])
    LogicalFilter(condition=[=($0, 10)])
      LogicalTableScan(table=[[s, b2]])
    LogicalFilter(condition=[=($0, 10)])
      LogicalTableScan(table=[[s, b1]])

LogicalProject(i=[$3], j=[$4], k=[$5], i0=[$0], j0=[$1], k0=[$2])   --> Plan1
  RelSubset-M
{code}
 

From here, we notice that RelSubset-M and RelSubset-N reference each other and 
form a cycle.
 # The LogicalProject in OriginalPlan-equiv2 and the LogicalFilter(condition=[=($3, 
$0)]) in Plan0 are matched by ProjectFilterTransposeRule, which generates a new 
Filter in RelSubset-M.
 # The LogicalProject in Plan1 and the newly generated Filter in RelSubset-M are 
matched by ProjectFilterTransposeRule again, generating another new Filter.
 # The two steps above alternate infinitely.

 

 

 

> Infinite matching of FilterProjectTransposeRule causes stackoverflow
> 
>
> Key: CALCITE-3505
> URL: https://issues.apache.org/jira/browse/CALCITE-3505
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Jin Xing
>Priority: Major
> Attachments: graphviz.svg
>
>
> Run ScannableTableTest#testProjectableFilterableTableJoin with a minor change 
> to reproduce:
> {code:java}
> @Test public void testProjectableFilterableTableJoin() throws Exception {
>   final StringBuilder buf = new StringBuilder();
>   final String explain = "PLAN="
>       + "EnumerableHashJoin(condition=[=($0, $3)], joinType=[inner])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b1]], filters=[[=($0, 10)]])\n"
>       + "  EnumerableInterpreter\n"
>       + "    BindableTableScan(table=[[s, b2]], filters=[[=($0, 10)]])";
>   CalciteAssert.that()
>       .with(newSchema("s",
>           Pair.of("b1", new BeatlesProjectableFilterableTable(buf, true)),
>           Pair.of("b2", new BeatlesProjectableFilterableTable(buf, true))))
>       .query("select * from \"s\".\"b1\", \"s\".\"b2\" "
>           + "where \"s\".\"b1\".\"i\" = 10 and \"s\".\"b2\".\"i\" = 10 "
>           + "and \"s\".\"b1\".\"i\" = \"s\".\"b2\".\"i\"")
>       .withHook(Hook.PLANNER, (Consumer<RelOptPlanner>) planner -> {
>         planner.removeRule(EnumerableRules.ENUMERABLE_MERGE_JOIN_RULE);
>       })
>       .explainContains(explain);
> }
> {code}
> This test has nothing to do with ENUMERABLE_MERGE_JOIN_RULE, but if we disable 
> that rule with the planner hook, a stack overflow happens;

[jira] [Resolved] (CALCITE-3527) Enrich tests for SQL hints in SqlHintsConverterTest

2019-11-21 Thread Danny Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chen resolved CALCITE-3527.
-
  Assignee: Danny Chen
Resolution: Fixed

Fixed in 
[cba6235|https://github.com/apache/calcite/commit/cba623512f675001dd07d1b4a22fad202a053394],
 thanks for your PR, [~icshuo] !

> Enrich tests for SQL hints in SqlHintsConverterTest
> ---
>
> Key: CALCITE-3527
> URL: https://issues.apache.org/jira/browse/CALCITE-3527
> Project: Calcite
>  Issue Type: Sub-task
>Affects Versions: 1.21.0
>Reporter: Shuo Cheng
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.22.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> * add more tests for sql hints 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (CALCITE-3523) Strengthen the java doc and code of class RelHint and SqlHint

2019-11-21 Thread Danny Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chen updated CALCITE-3523:

Priority: Minor  (was: Major)

> Strengthen the java doc and code of class RelHint and SqlHint
> -
>
> Key: CALCITE-3523
> URL: https://issues.apache.org/jira/browse/CALCITE-3523
> Project: Calcite
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 1.21.0
>Reporter: Danny Chen
>Assignee: Danny Chen
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.22.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> * Add more documentation for RelHint and SqlHint
> * Add validation for SqlHint



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (CALCITE-3527) Enrich tests for SQL hints in SqlHintsConverterTest

2019-11-21 Thread Danny Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chen updated CALCITE-3527:

Priority: Minor  (was: Major)

> Enrich tests for SQL hints in SqlHintsConverterTest
> ---
>
> Key: CALCITE-3527
> URL: https://issues.apache.org/jira/browse/CALCITE-3527
> Project: Calcite
>  Issue Type: Sub-task
>Affects Versions: 1.21.0
>Reporter: Shuo Cheng
>Assignee: Danny Chen
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.22.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> * add more tests for sql hints 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3477) Decimal type should not be assigned from other types

2019-11-21 Thread Danny Chen (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979746#comment-16979746
 ] 

Danny Chen commented on CALCITE-3477:
-

Thanks [~donnyzone], I still think it is very much about how each engine 
implements the assignment of DECIMAL to other numeric data types. I didn't find 
any evidence that a decimal cannot be assigned to other numeric types, so I'm 
inclined to solve this problem in the query execution.

In Apache Flink, we did allow decimal-to-int when we generate the code, and it 
is not that hacky, because only the decimals need to be handled specially.
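
For what it is worth, a minimal sketch of the kind of narrowing such code 
generation can emit for a DECIMAL assigned to an INTEGER (illustrative only, 
not Calcite's or Flink's actual generated code; the rounding mode is an 
assumption):

{code:java}
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalToIntSketch {
  public static void main(String[] args) {
    BigDecimal d = new BigDecimal("1.0");
    // Narrow explicitly; intValueExact() throws if the rounded value
    // does not fit into an int.
    int i = d.setScale(0, RoundingMode.HALF_UP).intValueExact();
    System.out.println(i);  // prints 1
  }
}
{code}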

> Decimal type should not be assigned from other types
> 
>
> Key: CALCITE-3477
> URL: https://issues.apache.org/jira/browse/CALCITE-3477
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Kirils Mensikovs
>Assignee: Feng Zhu
>Priority: Major
>  Labels: geospatial, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A geospatial function with an integer parameter fails. The expected behavior is 
> to automatically cast all numeric values to BigDecimal.
> {{Example: 'select  ST_MAKEPOINT(1.0, 1)'}}
> Return:
> {code:java}
> Error: Error while executing SQL "select  ST_MAKEPOINT(1.0, 1)": Error while 
> compiling generated Java code:
> public org.apache.calcite.linq4j.Enumerable bind(final 
> org.apache.calcite.DataContext root) {
>   final org.apache.calcite.linq4j.Enumerable _inputEnumerable = 
> org.apache.calcite.linq4j.Linq4j.asEnumerable(new Integer[] {
>     0});
>   return new org.apache.calcite.linq4j.AbstractEnumerable(){
>       public org.apache.calcite.linq4j.Enumerator enumerator() {
>         return new org.apache.calcite.linq4j.Enumerator(){
>             public final org.apache.calcite.linq4j.Enumerator inputEnumerator 
> = _inputEnumerable.enumerator();
>             public void reset() {
>               inputEnumerator.reset();
>             }
>             public boolean moveNext() {
>               return inputEnumerator.moveNext();
>             }
>             public void close() {
>               inputEnumerator.close();
>             }
>             public Object current() {
>               final java.math.BigDecimal v = 
> $L4J$C$new_java_math_BigDecimal_1_0_;
>               return org.apache.calcite.runtime.GeoFunctions.ST_MakePoint(v, 
> 1);
>             }
>             static final java.math.BigDecimal 
> $L4J$C$new_java_math_BigDecimal_1_0_ = new java.math.BigDecimal(
>               "1.0");
>           };
>       }
>     };
> }
> public Class getElementType() {
>   return org.apache.calcite.runtime.GeoFunctions.Geom.class;
> } (state=,code=0)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3525) RexSimplify: eliminate redundant rex calls in OR

2019-11-21 Thread Julian Hyde (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979634#comment-16979634
 ] 

Julian Hyde commented on CALCITE-3525:
--

Checkstyle won't bite you. But our coding standard is not defined by checkstyle.

> RexSimplify: eliminate redundant rex calls in OR
> 
>
> Key: CALCITE-3525
> URL: https://issues.apache.org/jira/browse/CALCITE-3525
> Project: Calcite
>  Issue Type: Improvement
>Reporter: Igor Guzenko
>Assignee: Igor Guzenko
>Priority: Major
>
> Sample case to reproduce in {code}RexProgramTest.simplifyOrTerms{code}: 
> {code:java}
> // (a=1 or a=2 or (arr[1]>4 and arr[1]<3 and a=3)) => a=1 or a=2
> final RelDataType intArrayType = typeFactory.createArrayType(intType, -1);
> final RexInputRef ref0 = rexBuilder.makeInputRef(intType, 0);
> final RexInputRef ref3 = rexBuilder.makeInputRef(intArrayType, 3);
> final RexCall itm1 = (RexCall) rexBuilder.makeCall(intType,
>     SqlStdOperatorTable.ITEM, ImmutableList.of(ref3, literal1));
> simplify = this.simplify.withParanoid(false);
> checkSimplifyFilter(
>     or(
>         eq(ref0, literal1),
>         eq(ref0, literal2),
>         and(
>             gt(itm1, literal4),
>             lt(itm1, literal3),
>             eq(ref0, literal3))),
>     "OR(=($0, 1), =($0, 2))");
> simplify = simplify.withParanoid(true);
> simplify = simplify.withParanoid(true);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (CALCITE-3510) Redis adapter

2019-11-21 Thread Julian Hyde (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Hyde updated CALCITE-3510:
-
Summary: Redis adapter  (was: Implement  Redis adapter)

> Redis adapter
> -
>
> Key: CALCITE-3510
> URL: https://issues.apache.org/jira/browse/CALCITE-3510
> Project: Calcite
>  Issue Type: New Feature
>Reporter: Forward Xu
>Assignee: Forward Xu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The Redis adapter allows querying of live data stored in Redis. Each Redis 
> key/value pair is presented as a single row. Rows can be broken down into 
> cells by using table definition files.
>  Redis `string`, `hash`, `sets`, `zsets` and `list` value types are supported.
> CSV format data
> {code:java}
> Set hello_world_1 1, james, 10
> Set hello_world_2 2,bond,20
> Set hello_world_3 3,lily,30
> Set hello_world_4 4,lucy,20
> {code}
> JSON format data
> {code:java}
> Set hello_foo_1 '{"id":1,"name":"james","age":110}'
> Set hello_foo_2 '{"id": 2, "name": "bond", "age": 210}'
> Set hello_foo_3 '{"id": 3, "name": "lily", "age": 310}'
> Set hello_foo_4 '{"id": 3, "name": "lucy", "age": 210}'
> {code}
> RAW format data
> {code:java}
> Set hello_raw_1 1, james, 10
> Set hello_raw_2 2,bond,20
> Set hello_raw_3 3,lily,30
> Set hello_raw_4 4, lucy, 20
> {code}
> We inserted data in three formats: one in CSV format, one in JSON format, and 
> one in RAW format. These are the three formats we currently support, and each 
> will be demonstrated separately.
> Then you can define the corresponding mapping table in the JSON file:
> {code:java}
> {
>   "version": "1.0",
>   "defaultSchema": "foodmart",
>   "schemas": [
>     {
>       "type": "custom",
>       "name": "foodmart",
>       "factory": "org.apache.calcite.adapter.redis.RedisSchemaFactory",
>       "operand": {
>         "host": "localhost",
>         "port": 6379,
>         "database": 0,
>         "password": ""
>       },
>       "tables": [
>         {
>           "name": "raw_01",
>           "factory": "org.apache.calcite.adapter.redis.RedisTableFactory",
>           "operand": {
>             "dataFormat": "csv",
>             "keyDelimiter": ":",
>             "fields": [
>               {
>                 "name": "id",
>                 "type": "varchar",
>                 "mapping": "id"
>               }
>             ]
>           }
>         }
>       ]
>     }
>   ]
> }
> {code}
> Here are a few details about the fields:
> keyDelimiter is used to split the value; the default is a colon, and the split 
> values are mapped to the field columns. It only works for the CSV format.
> dataFormat specifies the format of the data in Redis. Currently it supports 
> CSV, JSON and RAW. The RAW format keeps the original Redis key and value 
> intact, and only a single key field is used for the query; its details are not 
> described further below.
> COLUMN_MAPPING maps the columns of the table to the underlying data. Since 
> Redis has no concept of a column, the specific mapping method varies with the 
> format. For CSV, for example, the parsed data becomes a string array, and the 
> column mapping maps each column to an index (subscript) of that array; for 
> example, map id to subscript 2, map name to subscript 1, and so on.
> You can query the data in the Redis database:
> Mysql> select * from dla_person;
> ||name||id||age||
> |bond|20|2|
> |lily|30|3|
> |lucy|20|4|
> |james|10|1|
> 4 rows in set (0.18 sec)
>  If you are familiar with SQL, this should feel very natural: you can use the 
> familiar SQL syntax to operate on the Redis database.
> JSON
>  The above demonstrates the data in CSV format. Let's try the data in JSON 
> format. Let's create a new table:
> {code:java}
> {
>   "version": "1.0",
>   "defaultSchema": "foodmart",
>   "schemas": [
>     {
>       "type": "custom",
>       "name": "foodmart",
>       "factory": "org.apache.calcite.adapter.redis.RedisSchemaFactory",
>       "operand": {
>         "host": "localhost",
>         "port": 6379,
>         "database": 0,
>         "password": ""
>       },
>       "tables": [
>         {
>           "name": "raw_01",
>           "factory": "org.apache.calcite.adapter.redis.RedisTableFactory",
>           "operand": {
>             "dataFormat": "json",
>             "fields": [
>               {
>                 "name": "id",
>                 "type": "varchar",
>                 "mapping": "id"
>               }
>             ]
>           }
>         }
>       ]
>     }
>   ]
> }
> {code}
> Note that we have specified 
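
(For completeness, a minimal sketch of how such a model file could be queried 
through Calcite's JDBC driver; the file path is hypothetical and the table name 
comes from the model above.)

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class RedisModelQuery {
  public static void main(String[] args) throws Exception {
    Properties info = new Properties();
    info.setProperty("model", "/path/to/redis-model.json");  // hypothetical path
    try (Connection connection = DriverManager.getConnection("jdbc:calcite:", info);
         Statement statement = connection.createStatement();
         ResultSet rs = statement.executeQuery("select * from \"raw_01\"")) {
      while (rs.next()) {
        System.out.println(rs.getString("id"));
      }
    }
  }
}
{code}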

[jira] [Commented] (CALCITE-3339) DESCRIPTOR as a SQL operator in SqlStdOperatorTable

2019-11-21 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979607#comment-16979607
 ] 

Rui Wang commented on CALCITE-3339:
---

Thanks [~julianhyde] 

I will carry on the DESCRIPTOR change in the TUMBLE 
branch ([https://github.com/apache/calcite/pull/1587]) and address your comments 
there. It will make #1587 larger, but we can have all reviews in the same place.

> DESCRIPTOR as a SQL operator in SqlStdOperatorTable
> ---
>
> Key: CALCITE-3339
> URL: https://issues.apache.org/jira/browse/CALCITE-3339
> Project: Calcite
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For query:
> SELECT * 
> FROM TABLE(TUMBLE_TVF(
> TABLE ORDERS,
> DESCRIPTOR(ROWTIME), 
> INTERVAL '10' MINUTE))
> TABLE ORDERS is converted to SqlPrefixOperator, but DESCRIPTOR(ROWTIME) has 
> no mapping in SqlStdOperatorTable. 
> There are two options:
> 1. There is a SqlColumnListConstructor which serves the same (or a similar) 
> purpose of specifying a list of columns.
> 2. We create a new operator for DESCRIPTOR.
> Reusing existing code is always good, so we can start from option one and see 
> if it works.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (CALCITE-3339) DESCRIPTOR as a SQL operator in SqlStdOperatorTable

2019-11-21 Thread Julian Hyde (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979561#comment-16979561
 ] 

Julian Hyde edited comment on CALCITE-3339 at 11/21/19 7:37 PM:


Reviewing your [PR 1599|https://github.com/apache/calcite/pull/1599]:
 * It would be good to isolate 'Add DESCRIPTOR' in its own change, including 
the DESCRIPTOR operator in SqlStdOperatorTable, but not linked to the TUMBLE 
function.
 * Add a test to SqlParserTest (yes, I know that you accomplished this without 
changing the parser, and that's great, but it impinges on the parser's area of 
functionality).
 * Needs more (user and developer) documentation about what a descriptor is.
 * The return type of the DESCRIPTOR function should be something special. I 
think SqlTypeName.COLUMN_LIST will do.
 * Also add a validator test that calls a UDF with a DESCRIPTOR argument. You 
may need a special type that identifies an argument of a UDF as a descriptor; 
if so, add class org.apache.calcite.runtime.Descriptor.
 * It's sufficient for this change to do parsing and validation. Don't need to 
do SqlToRel or execution.

Also, it's fine if you carry on developing this in the same branch as TUMBLE, 
if you wish. We can re-order and re-organize the commits later. 
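
For concreteness, one possible shape of such an operator-table entry (a sketch 
following the notes above, not the committed implementation):

{code:java}
import org.apache.calcite.sql.SqlFunction;
import org.apache.calcite.sql.SqlFunctionCategory;
import org.apache.calcite.sql.SqlKind;
import org.apache.calcite.sql.type.OperandTypes;
import org.apache.calcite.sql.type.ReturnTypes;
import org.apache.calcite.sql.type.SqlTypeName;

public class DescriptorSketch {
  /** Sketch: DESCRIPTOR(col1, col2, ...) returning COLUMN_LIST, as suggested above. */
  public static final SqlFunction DESCRIPTOR =
      new SqlFunction("DESCRIPTOR", SqlKind.OTHER_FUNCTION,
          ReturnTypes.explicit(SqlTypeName.COLUMN_LIST), null,
          OperandTypes.VARIADIC, SqlFunctionCategory.SYSTEM);
}
{code}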

 


was (Author: julianhyde):
Reviewing your [PR 1599|https://github.com/apache/calcite/pull/1599]:
 * It would be good to isolate 'Add DESCRIPTOR' in its own change, including 
the DESCRIPTOR operator in SqlStdOperatorTable, but not linked to the TUMBLE 
function.
 * Add a test to SqlParserTest (yes, I know that you accomplished this without 
changing the parser, and that's great, but it impinges on the parser's area of 
functionality).
 * Needs more (user and developer) documentation about what a descriptor is.
 * The return type of the DESCRIPTOR function should be something special. I 
think SqlTypeName.COLUMN_LIST will do.
 * Also add a validator test that calls a UDF with a DESCRIPTOR argument. You 
may need a special type that identifies an argument of a UDF as a descriptor; 
if so, add class org.apache.calcite.runtime.Descriptor.
 * It's sufficient for this change to do parsing and validation. Don't need to 
do SqlToRel or execution.

 

 

> DESCRIPTOR as a SQL operator in SqlStdOperatorTable
> ---
>
> Key: CALCITE-3339
> URL: https://issues.apache.org/jira/browse/CALCITE-3339
> Project: Calcite
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For query:
> SELECT * 
> FROM TABLE(TUMBLE_TVF(
> TABLE ORDERS,
> DESCRIPTOR(ROWTIME), 
> INTERVAL '10' MINUTE))
> TABLE ORDERS is converted to SqlPrefixOperator, but DESCRIPTOR(ROWTIME) has 
> no mapping in SqlStdOperatorTable. 
> There are two options:
> 1. There is a SqlColumnListConstructor which serves the same (or a similar) 
> purpose of specifying a list of columns.
> 2. We create a new operator for DESCRIPTOR.
> Reusing existing code is always good, so we can start from option one and see 
> if it works.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3339) DESCRIPTOR as a SQL operator in SqlStdOperatorTable

2019-11-21 Thread Julian Hyde (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979561#comment-16979561
 ] 

Julian Hyde commented on CALCITE-3339:
--

Reviewing your [PR 1599|https://github.com/apache/calcite/pull/1599]:
 * It would be good to isolate 'Add DESCRIPTOR' in its own change, including 
the DESCRIPTOR operator in SqlStdOperatorTable, but not linked to the TUMBLE 
function.
 * Add a test to SqlParserTest (yes, I know that you accomplished this without 
changing the parser, and that's great, but it impinges on the parser's area of 
functionality).
 * Needs more (user and developer) documentation about what a descriptor is.
 * The return type of the DESCRIPTOR function should be something special. I 
think SqlTypeName.COLUMN_LIST will do.
 * Also add a validator test that calls a UDF with a DESCRIPTOR argument. You 
may need a special type that identifies an argument of a UDF as a descriptor; 
if so, add class org.apache.calcite.runtime.Descriptor.
 * It's sufficient for this change to do parsing and validation. Don't need to 
do SqlToRel or execution.

 

 

> DESCRIPTOR as a SQL operator in SqlStdOperatorTable
> ---
>
> Key: CALCITE-3339
> URL: https://issues.apache.org/jira/browse/CALCITE-3339
> Project: Calcite
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For query:
> SELECT * 
> FROM TABLE(TUMBLE_TVF(
> TABLE ORDERS,
> DESCRIPTOR(ROWTIME), 
> INTERVAL '10' MINUTE))
> TABLE ORDERS is converted to SqlPrefixOperator, but DESCRIPTOR(ROWTIME) has 
> no mapping in SqlStdOperatorTable. 
> There are two options:
> 1. There is a SqlColumnListConstructor which serves the same (or a similar) 
> purpose of specifying a list of columns.
> 2. We create a new operator for DESCRIPTOR.
> Reusing existing code is always good, so we can start from option one and see 
> if it works.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3339) DESCRIPTOR as a SQL operator in SqlStdOperatorTable

2019-11-21 Thread Julian Hyde (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979533#comment-16979533
 ] 

Julian Hyde commented on CALCITE-3339:
--

I've given it a little more thought. We can't possibly know the lexical 
convention of the back-end (e.g. whether it uses back-tick or brackets to quote 
identifiers) and so we can't replicate the behavior of the back-end parser. 
Therefore the identifier should be in the lexical convention of the rest of the 
enclosing SQL statement. And that includes the case-sensitivity of unquoted 
identifiers.

That said, it doesn't matter very much. In v1 we should just do what is easiest 
(with the fewest lines of code). We can revise later.

> DESCRIPTOR as a SQL operator in SqlStdOperatorTable
> ---
>
> Key: CALCITE-3339
> URL: https://issues.apache.org/jira/browse/CALCITE-3339
> Project: Calcite
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For query:
> SELECT * 
> FROM TABLE(TUMBLE_TVF(
> TABLE ORDERS,
> DESCRIPTOR(ROWTIME), 
> INTERVAL '10' MINUTE))
> TABLE ORDERS is converted to SqlPrefixOperator, but DESCRIPTOR(ROWTIME) has 
> no mapping in SqlStdOperatorTable. 
> There are two options:
> 1. There is a SqlColumnListConstructor which serves the same (or a similar) 
> purpose of specifying a list of columns.
> 2. We create a new operator for DESCRIPTOR.
> Reusing existing code is always good, so we can start from option one and see 
> if it works.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3339) DESCRIPTOR as a SQL operator in SqlStdOperatorTable

2019-11-21 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979515#comment-16979515
 ] 

Rui Wang commented on CALCITE-3339:
---

I replied to the thread on dev@, but I am copying that response to this JIRA so 
that [~julianhyde] can continue the discussion in whichever way he prefers. I 
monitor both email and JIRA, so I am OK with either way to continue our discussion.

Copied response from the email:


{code:java}
I didn't find such a discussion in the SQL standard (maybe I have missed 
something).

My current thought is that not converting "rowtime" to upper-case is best:
1. For databases whose column names are not case-sensitive, it works.
2. For case-sensitive databases, assuming users are aware that their sources 
are case-sensitive, I think their intention in using DESCRIPTOR will take that 
factor into account. Converting column names to upper-case causes confusion.


-Rui
{code}
   

> DESCRIPTOR as a SQL operator in SqlStdOperatorTable
> ---
>
> Key: CALCITE-3339
> URL: https://issues.apache.org/jira/browse/CALCITE-3339
> Project: Calcite
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For query:
> SELECT * 
> FROM TABLE(TUMBLE_TVF(
> TABLE ORDERS,
> DESCRIPTOR(ROWTIME), 
> INTERVAL '10' MINUTE))
> TABLE ORDERS is converted to SqlPrefixOperator, but DESCRIPTOR(ROWTIME) has 
> no mapping in SqlStdOperatorTable. 
> There are two options:
> 1. There is a SqlColumnListConstructor which serves the same (or a similar) 
> purpose of specifying a list of columns.
> 2. We create a new operator for DESCRIPTOR.
> Reusing existing code is always good, so we can start from option one and see 
> if it works.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3528) TIMESTAMPADD/TIMESTAMPDIFF can't handle unit of MILLISECOND

2019-11-21 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979479#comment-16979479
 ] 

Rui Wang commented on CALCITE-3528:
---

I think it was just an implementation decision (i.e. it simply wasn't 
implemented). Some other systems support a MILLISECOND unit. For example, see 
BigQuery's doc: 
https://cloud.google.com/bigquery/docs/reference/standard-sql/timestamp_functions#timestamp_diff

> TIMESTAMPADD/TIMESTAMPDIFF can't handle unit of MILLISECOND
> ---
>
> Key: CALCITE-3528
> URL: https://issues.apache.org/jira/browse/CALCITE-3528
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.21.0
>Reporter: Zhenghua Gao
>Priority: Minor
>
> Calcite's SQL parser can't handle the MILLISECOND unit for the 
> TIMESTAMPADD/TIMESTAMPDIFF functions. I also checked MySQL [1], which can't 
> handle the MILLISECOND unit either.
> Is there any reason not to support the MILLISECOND unit for these two 
> functions?
>  [1] 
> [https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_timestampadd]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3530) TIMESTAMPADD/TIMESTAMPDIFF with microsecond/nanosecond unit lost precision

2019-11-21 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979477#comment-16979477
 ] 

Rui Wang commented on CALCITE-3530:
---

If we need Calcite SqlFunctions to support precision beyond milliseconds, we 
will have to discuss how to pass around values with micros or nanos. A long 
won't be enough; it might have to be a (long, int) pair. Supporting this format 
for function parameters is relatively easy, but I haven't figured out how to 
return a (long, int) from a function. A sketch of such a pair is below.
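
(Hypothetical sketch; NanoTimestamp is not an existing Calcite class.)

{code:java}
// Hypothetical sketch only: the "(long, int)" timestamp value discussed above.
// epochMillis is what Calcite passes around today; nanosOfMilli (0..999_999)
// is the part that is currently lost.
final class NanoTimestamp {
  final long epochMillis;
  final int nanosOfMilli;

  NanoTimestamp(long epochMillis, int nanosOfMilli) {
    this.epochMillis = epochMillis;
    this.nanosOfMilli = nanosOfMilli;
  }
}
{code}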

> TIMESTAMPADD/TIMESTAMPDIFF with microsecond/nanosecond unit lost precision
> --
>
> Key: CALCITE-3530
> URL: https://issues.apache.org/jira/browse/CALCITE-3530
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Reporter: Zhenghua Gao
>Priority: Minor
>
> Since TimestampAddConvertlet and TimestampDiffConvertlet treat TIMESTAMP 
> as a long (with millisecond precision), they lose precision even if the 
> downstream can support microseconds or nanoseconds.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3529) TIMESTAMPDIFF function may overflow when handle unit of MICROSECOND/NANOSECOND

2019-11-21 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979476#comment-16979476
 ] 

Rui Wang commented on CALCITE-3529:
---

I had a similar problem in BeamSQL, and we had several discussions about it in 
the Beam community. In the Java world, due to the widely used Joda library, a 
"timestamp" usually only supports up to millisecond precision. In order to go 
beyond millis, we essentially need extra storage for the sub-millisecond part, 
which might be just another int.

Back to the Calcite SQL function implementation: I believe the relevant 
implementation currently converts a "timestamp" to a long and assumes it is 
milliseconds.

> TIMESTAMPDIFF function may overflow when handle unit of MICROSECOND/NANOSECOND
> --
>
> Key: CALCITE-3529
> URL: https://issues.apache.org/jira/browse/CALCITE-3529
> Project: Calcite
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.21.0
>Reporter: Zhenghua Gao
>Priority: Major
>
> For the NANOSECOND unit, the TIMESTAMPDIFF function returns a BIGINT, which 
> may overflow since BIGINT can't cover the huge range of nanoseconds.
> For the MICROSECOND unit, the TIMESTAMPDIFF function returns an INTEGER, which 
> may overflow too.
> TIMESTAMPDIFF(MICROSECOND, TIMESTAMP '-12-31 23:59:59.999', TIMESTAMP 
> '0001-01-01 00:00:00.000') should return '31553789759000' [1]; Calcite 
> returns '879764032'.
>  
> TIMESTAMPDIFF(NANOSECOND, TIMESTAMP '-12-31 23:59:59.999', TIMESTAMP 
> '0001-01-01 00:00:00.000') should return '31553789759000*000*', which is 
> bigger than LONG.MAX_VALUE; Calcite returns '-1943248345937622528'.
>  
> My suggestion is:
>  # Change the return type to BIGINT for the MICROSECOND unit
>  # Disable support for the NANOSECOND unit, or add a RISK warning to the 
> documentation of TIMESTAMPDIFF
> [1] the value is calculated by MySQL 5.6 from [http://sqlfiddle.com/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (CALCITE-3530) TIMESTAMPADD/TIMESTAMPDIFF with microsecond/nanosecond unit lost precision

2019-11-21 Thread Zhenghua Gao (Jira)
Zhenghua Gao created CALCITE-3530:
-

 Summary: TIMESTAMPADD/TIMESTAMPDIFF with microsecond/nanosecond 
unit lost precision
 Key: CALCITE-3530
 URL: https://issues.apache.org/jira/browse/CALCITE-3530
 Project: Calcite
  Issue Type: Bug
  Components: core
Reporter: Zhenghua Gao


Since TimestampAddConvertlet and TimestampDiffConvertlet treat TIMESTAMP 
as a long (with millisecond precision), they lose precision even if the 
downstream can support microseconds or nanoseconds. 
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (CALCITE-3527) Enrich tests for SQL hints in SqlHintsConverterTest

2019-11-21 Thread Danny Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chen updated CALCITE-3527:

Summary: Enrich tests for SQL hints in SqlHintsConverterTest  (was: enrich 
tests for sql hints.)

> Enrich tests for SQL hints in SqlHintsConverterTest
> ---
>
> Key: CALCITE-3527
> URL: https://issues.apache.org/jira/browse/CALCITE-3527
> Project: Calcite
>  Issue Type: Sub-task
>Affects Versions: 1.21.0
>Reporter: Shuo Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.22.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * add more tests for sql hints 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3527) enrich tests for sql hints.

2019-11-21 Thread Danny Chen (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979157#comment-16979157
 ] 

Danny Chen commented on CALCITE-3527:
-

Thanks for the contribution [~icshuo], I will take some time to review it soon ~

> enrich tests for sql hints.
> ---
>
> Key: CALCITE-3527
> URL: https://issues.apache.org/jira/browse/CALCITE-3527
> Project: Calcite
>  Issue Type: Sub-task
>Affects Versions: 1.21.0
>Reporter: Shuo Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.22.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * add more tests for sql hints 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (CALCITE-3527) enrich tests for sql hints.

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/CALCITE-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated CALCITE-3527:

Labels: pull-request-available  (was: )

> enrich tests for sql hints.
> ---
>
> Key: CALCITE-3527
> URL: https://issues.apache.org/jira/browse/CALCITE-3527
> Project: Calcite
>  Issue Type: Sub-task
>Affects Versions: 1.21.0
>Reporter: Shuo Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.22.0
>
>
> * add more tests for sql hints 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3124) Infinite rule matching when AggregateRemoveRule is enabled for SUM0

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979126#comment-16979126
 ] 

Jin Xing commented on CALCITE-3124:
---

If we look at the above process, we will find that there is no RelSubset cycle. 
The root cause is that FilterProjectTransposeRule and ProjectFilterTransposeRule 
are not exact inverses of each other: applying FilterProjectTransposeRule first 
and then ProjectFilterTransposeRule does not give back exactly the same plan.

Applying FilterProjectTransposeRule may increase the complexity of the filtering 
condition, but applying ProjectFilterTransposeRule will not reduce that complexity 
again.

An intuitive fix in ProjectFilterTransposeRule might be as below (see the sketch 
after this list):
 # Create a RexShuttle from the Project's expressions;
 # Rewrite the filtering condition with that shuttle, so that it refers to the 
Project's outputs rather than to expanded expressions;
 # Push the Project down under the Filter
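
A rough sketch of one possible reading of steps 1 and 2 (illustrative only; the 
class name and the digest-based matching are assumptions, not the actual rule code):
{code:java}
import java.util.List;

import org.apache.calcite.rex.RexBuilder;
import org.apache.calcite.rex.RexCall;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.rex.RexShuttle;

// The shuttle collapses any sub-expression of the filter condition that is
// structurally equal to one of the Project's expressions back into a reference
// to that Project output, so pushing the Project below the Filter does not
// re-expand the condition.
class ProjectExprToRefShuttle extends RexShuttle {
  private final List<RexNode> projectExprs; // the Project's expressions, in output order
  private final RexBuilder rexBuilder;

  ProjectExprToRefShuttle(List<RexNode> projectExprs, RexBuilder rexBuilder) {
    this.projectExprs = projectExprs;
    this.rexBuilder = rexBuilder;
  }

  @Override public RexNode visitCall(RexCall call) {
    // Crude digest-based structural match, good enough for a sketch.
    for (int i = 0; i < projectExprs.size(); i++) {
      if (projectExprs.get(i).toString().equals(call.toString())) {
        return rexBuilder.makeInputRef(call.getType(), i);
      }
    }
    return super.visitCall(call);
  }
}
{code}
With the condition rewritten in terms of the Project outputs (for example 
{{>=($1, 2)}} instead of the expanded CASE expression), the Project can be pushed 
under the Filter without the condition growing on every iteration.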

> Infinite rule matching when AggregateRemoveRule is enabled for SUM0
> ---
>
> Key: CALCITE-3124
> URL: https://issues.apache.org/jira/browse/CALCITE-3124
> Project: Calcite
>  Issue Type: Bug
>Reporter: Haisheng Yuan
>Assignee: Forward Xu
>Priority: Major
>
> Make the following change (comment out the return clause) to AggregateRemoveRule, 
> and the test case {{JdbcTest.testHavingNot2}} won't complete, due to infinite rule 
> matching.
> {noformat}
> --- a/core/src/main/java/org/apache/calcite/rel/rules/AggregateRemoveRule.java
> +++ b/core/src/main/java/org/apache/calcite/rel/rules/AggregateRemoveRule.java
> @@ -102,7 +102,7 @@ public void onMatch(RelOptRuleCall call) {
>  if (aggregation.getKind() == SqlKind.SUM0) {
>  // Bail out for SUM0 to avoid potential infinite rule matching,
>  // because it may be generated by transforming SUM aggregate
>  // function to SUM0 and COUNT.
> -return;
> +//return;
>}
>final SqlSplittableAggFunction splitter =
>Objects.requireNonNull(
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (CALCITE-3124) Infinite rule matching when AggregateRemoveRule is enabled for SUM0

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979110#comment-16979110
 ] 

Jin Xing edited comment on CALCITE-3124 at 11/21/19 9:17 AM:
-

The SQL in testHavingNot2 and its corresponding plan are as below:
{code:java}
LogicalProject(EXPR$0=[1])
  LogicalFilter(condition=[>=($1, 2)])  
> Filter0
LogicalAggregate(group=[{0}], agg#0=[SUM($1)])  
> Aggregate0
  LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
EnumerableTableScan(table=[[foodmart2, store]])
 {code}
The infinite loop happens through the steps below:

Step-1

Aggregate0 can be transformed into the equivalent plans below, which are collected 
into RelSubset-A 
{code:java}
RelSubset-A:

1. Aggregate0
LogicalAggregate(group=[{0}], agg#0=[SUM($1)])
  LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
EnumerableTableScan(table=[[foodmart2, store]])

2. Aggregate0-equiv0 (transformed by AggregateReduceFunctionsRule, 
AggregateRemoveRule, ProjectMergeRule)
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
  EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]])

3. Aggregate0-equiv1 (transformed by AggregateProjectMergeRule, 
AggregateRemoveRule)
LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
  EnumerableTableScan(table=[[foodmart2, store]])

{code}
 

Step-2

RelSubset-A is the child of Filter0, so FilterProjectTransposeRule will 
transform Filter0 with Aggregate0-equiv0 as below:
{code:java}
From:
LogicalFilter(condition=[>=($1, 2)])
  LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
EnumerableTableScan(table=[[foodmart2, store]])
To:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))]) 
-->Filter0-equiv
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($16), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($16, 0)), 2)])
EnumerableTableScan(table=[[foodmart2, store]]){code}
 

Step-3

ProjectFilterTransposeRule will transform Filter0-equiv as below: 
{code:java}
From:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))]) 
-->Filter0-equiv
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($16), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($16, 0)), 2)])
EnumerableTableScan(table=[[foodmart2, store]])
To:
LogicalProject(store_street_address=[$0], $f1=[CASE(=(CASE(IS NOT NULL($1), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($1, 0))])
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($1), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($1, 0)), 2)])   
-> FilterX
LogicalProject([$5], [$16])  ---> ProjectX
  EnumerableTableScan(table=[[foodmart2, store]]){code}
We will find that ProjectX already exists in RelSubset-A as Aggregate0-equiv1, so 
FilterProjectTransposeRule will transform FilterX with Aggregate0-equiv0 as 
below:
{code:java}
From:
LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($1), 1:BIGINT, 0:BIGINT), 
0), null:INTEGER, COALESCE($1, 0)), 2)])
  LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]])

To:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL(CASE(=(CASE(IS NOT 
NULL($16), 1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($1, 0)), 2)])
EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]]){code}
As we can see, an even more complex LogicalFilter is generated and step-3 will 
happen again; step-2 and step-3 then alternate infinitely.


was (Author: jinxing6...@126.com):
Sql in testHavingNot2 and its corresponding plan are as below:
{code:java}
LogicalProject(EXPR$0=[1])
  LogicalFilter(condition=[>=($1, 2)])  
> Filter0
LogicalAggregate(group=[{0}], agg#0=[SUM($1)])  
> Aggregate0
  LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
EnumerableTableScan(table=[[foodmart2, store]])
 {code}
The infinite loop happens by below steps:

Step-1

Aggregate0 can be 

[jira] [Comment Edited] (CALCITE-3124) Infinite rule matching when AggregateRemoveRule is enabled for SUM0

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979110#comment-16979110
 ] 

Jin Xing edited comment on CALCITE-3124 at 11/21/19 9:17 AM:
-

The SQL in testHavingNot2 and its corresponding plan are as below:
{code:java}
LogicalProject(EXPR$0=[1])
  LogicalFilter(condition=[>=($1, 2)])  
> Filter0
LogicalAggregate(group=[{0}], agg#0=[SUM($1)])  
> Aggregate0
  LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
EnumerableTableScan(table=[[foodmart2, store]])
 {code}
The infinite loop happens through the steps below:

Step-1

Aggregate0 can be transformed into the equivalent plans below, which are collected 
into RelSubset-A 
{code:java}
RelSubset-A:

1. Aggregate0
LogicalAggregate(group=[{0}], agg#0=[SUM($1)])
  LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
EnumerableTableScan(table=[[foodmart2, store]])

2. Aggregate0-equiv0 (transformed by AggregateReduceFunctionsRule, 
AggregateRemoveRule, ProjectMergeRule)
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
  EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]])

3. Aggregate0-equiv1 (transformed by AggregateProjectMergeRule, 
AggregateRemoveRule)
LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
  EnumerableTableScan(table=[[foodmart2, store]])

{code}
 

Step-2

RelSubset-A is the child of Filter0, so FilterProjectTransposeRule will 
transform Filter0 with Aggregate0-equiv0 as below:
{code:java}
From:
LogicalFilter(condition=[>=($1, 2)])
  LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
EnumerableTableScan(table=[[foodmart2, store]])
To:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))]) 
-->Filter0-equiv
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($16), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($16, 0)), 2)])
EnumerableTableScan(table=[[foodmart2, store]]){code}
 

Step-3

ProjectFilterTransposeRule will transform Filter0-equiv as below: 
{code:java}
From:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))]) 
-->Filter0-equiv
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($16), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($16, 0)), 2)])
EnumerableTableScan(table=[[foodmart2, store]])
To:
LogicalProject(store_street_address=[$0], $f1=[CASE(=(CASE(IS NOT NULL($1), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($1, 0))])
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($1), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($1, 0)), 2)])   
-> FilterX
LogicalProject([$5], [$16])  ---> ProjectX
  EnumerableTableScan(table=[[foodmart2, store]]){code}
We will find that ProjectX already exists in RelSubset-A as Aggregate0-equiv1, so 
FilterProjectTransposeRule will transform FilterX with Aggregate0-equiv0 as 
below:
{code:java}
From:
LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($1), 1:BIGINT, 0:BIGINT), 
0), null:INTEGER, COALESCE($1, 0)), 2)])
  LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]])

To:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL(CASE(=(CASE(IS NOT 
NULL($16), 1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($1, 0)), 2)])
EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]]){code}
As we can see, an even more complex LogicalFilter is generated and step-3 will 
happen again; step-2 and step-3 then alternate infinitely.


was (Author: jinxing6...@126.com):
Sql in testHavingNot2 and its corresponding plan are as below:
{code:java}
LogicalProject(EXPR$0=[1])
  LogicalFilter(condition=[>=($1, 2)])  
> Filter0
LogicalAggregate(group=[{0}], agg#0=[SUM($1)])  
> Aggregate0
  LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
EnumerableTableScan(table=[[foodmart2, store]])
 {code}
The infinite loop happens by below steps:

Step-1

Aggregate0 can be transformed  

[jira] [Created] (CALCITE-3529) TIMESTAMPDIFF function may overflow when handle unit of MICROSECOND/NANOSECOND

2019-11-21 Thread Zhenghua Gao (Jira)
Zhenghua Gao created CALCITE-3529:
-

 Summary: TIMESTAMPDIFF function may overflow when handle unit of 
MICROSECOND/NANOSECOND
 Key: CALCITE-3529
 URL: https://issues.apache.org/jira/browse/CALCITE-3529
 Project: Calcite
  Issue Type: Bug
  Components: core
Affects Versions: 1.21.0
Reporter: Zhenghua Gao


For a unit of NANOSECOND, the TIMESTAMPDIFF function returns a BIGINT, which may 
overflow since BIGINT can't cover the huge range of nanoseconds. 

For a unit of MICROSECOND, the TIMESTAMPDIFF function returns an INTEGER, which 
may overflow too.

TIMESTAMPDIFF(MICROSECOND, TIMESTAMP '-12-31 23:59:59.999', TIMESTAMP 
'0001-01-01 00:00:00.000') should return '31553789759000' [1], but Calcite 
returns '879764032'. 

TIMESTAMPDIFF(NANOSECOND, TIMESTAMP '-12-31 23:59:59.999', TIMESTAMP 
'0001-01-01 00:00:00.000') should return '31553789759000*000*', which is 
bigger than Long.MAX_VALUE, but Calcite returns '-1943248345937622528'.

My suggestion is:
 # Change the return type to BIGINT for the unit of MICROSECOND
 # Disable support for the unit of NANOSECOND, or add a RISK warning to the 
documentation of TIMESTAMPDIFF

[1] the value is calculated by MySQL 5.6 from [http://sqlfiddle.com/]
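
A quick back-of-the-envelope check (plain Java, not Calcite code) of why those 
return types matter: roughly 10,000 years in microseconds overflows an INTEGER 
but fits in a BIGINT, while the same span in nanoseconds overflows even a BIGINT.
{code:java}
import java.math.BigInteger;

public class TimestampDiffRangeCheck {
  public static void main(String[] args) {
    // ~10,000 years in seconds, ignoring leap days: about 3.15e11
    BigInteger seconds = BigInteger.valueOf(10_000L * 365 * 24 * 60 * 60);
    BigInteger micros = seconds.multiply(BigInteger.valueOf(1_000_000L));     // ~3.15e17
    BigInteger nanos = seconds.multiply(BigInteger.valueOf(1_000_000_000L));  // ~3.15e20

    System.out.println("micros fits in INTEGER: "
        + (micros.compareTo(BigInteger.valueOf(Integer.MAX_VALUE)) <= 0));    // false
    System.out.println("micros fits in BIGINT:  "
        + (micros.compareTo(BigInteger.valueOf(Long.MAX_VALUE)) <= 0));       // true
    System.out.println("nanos fits in BIGINT:   "
        + (nanos.compareTo(BigInteger.valueOf(Long.MAX_VALUE)) <= 0));        // false
  }
}
{code}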



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (CALCITE-3124) Infinite rule matching when AggregateRemoveRule is enabled for SUM0

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979110#comment-16979110
 ] 

Jin Xing edited comment on CALCITE-3124 at 11/21/19 9:09 AM:
-

The SQL in testHavingNot2 and its corresponding plan are as below:
{code:java}
LogicalProject(EXPR$0=[1])
  LogicalFilter(condition=[>=($1, 2)])  
> Filter0
LogicalAggregate(group=[{0}], agg#0=[SUM($1)])  
> Aggregate0
  LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
EnumerableTableScan(table=[[foodmart2, store]])
 {code}
The infinite loop happens through the steps below:

Step-1

Aggregate0 can be transformed into the equivalent plans below, which are collected 
into RelSubset-A 
{code:java}
RelSubset-A:

1. Aggregate0
LogicalAggregate(group=[{0}], agg#0=[SUM($1)])
  LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
EnumerableTableScan(table=[[foodmart2, store]])

2. Aggregate0-equiv0 (transformed by AggregateReduceFunctionsRule, 
AggregateRemoveRule, ProjectMergeRule)
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
  EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]])

3. Aggregate0-equiv1 (transformed by AggregateProjectMergeRule, 
AggregateRemoveRule)
LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
  EnumerableTableScan(table=[[foodmart2, store]])

{code}
 

Step-2

RelSubset-A is the child of Filter0, so FilterProjectTransposeRule will 
transform Filter0 with Aggregate0-equiv0 as below:
{code:java}
From:
LogicalFilter(condition=[>=($1, 2)])
  LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
EnumerableTableScan(table=[[foodmart2, store]])
To:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))]) 
-->Filter0-equiv
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($16), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($16, 0)), 2)])
EnumerableTableScan(table=[[foodmart2, store]]){code}
 

Step-3

ProjectFilterTransposeRule will transform Filter0-equiv as below: 
{code:java}
From:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))]) 
-->Filter0-equiv
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($16), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($16, 0)), 2)])
EnumerableTableScan(table=[[foodmart2, store]])
To:
LogicalProject(store_street_address=[$0], $f1=[CASE(=(CASE(IS NOT NULL($1), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($1, 0))])
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($1), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($1, 0)), 2)])   
-> FilterX
LogicalProject([$5], [$16])  ---> ProjectX
  EnumerableTableScan(table=[[foodmart2, store]]){code}
We will find that ProjectX already exists in RelSubset-A as Aggregate0-equiv1, so 
FilterProjectTransposeRule will transform FilterX with Aggregate0-equiv0 as 
below:
{code:java}
From:
LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($1), 1:BIGINT, 0:BIGINT), 
0), null:INTEGER, COALESCE($1, 0)), 2)])
  LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]])

To:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL(CASE(=(CASE(IS NOT 
NULL($16), 1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($1, 0)), 2)])
EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]]){code}
As we can see, an even more complex LogicalFilter is generated, and step-2 and 
step-3 alternate infinitely.


was (Author: jinxing6...@126.com):
Sql in testHavingNot2 and its corresponding plan are as below:

 
{code:java}
LogicalProject(EXPR$0=[1])
  LogicalFilter(condition=[>=($1, 2)])  
> Filter0
LogicalAggregate(group=[{0}], agg#0=[SUM($1)])  
> Aggregate0
  LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
EnumerableTableScan(table=[[foodmart2, store]])
{code}
 

 

The infinite loop happens by below steps:

Step-1

Aggregate0 can be transformed  to below equivalent plans and 

[jira] [Commented] (CALCITE-3124) Infinite rule matching when AggregateRemoveRule is enabled for SUM0

2019-11-21 Thread Jin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979110#comment-16979110
 ] 

Jin Xing commented on CALCITE-3124:
---

The SQL in testHavingNot2 and its corresponding plan are as below:

 
{code:java}
LogicalProject(EXPR$0=[1])
  LogicalFilter(condition=[>=($1, 2)])  
> Filter0
LogicalAggregate(group=[{0}], agg#0=[SUM($1)])  
> Aggregate0
  LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
EnumerableTableScan(table=[[foodmart2, store]])
{code}
 

 

The infinite loop happens through the steps below:

Step-1

Aggregate0 can be transformed into the equivalent plans below, which are collected 
into RelSubset-A

 
{code:java}
RelSubset-A:

1. Aggregate0
LogicalAggregate(group=[{0}], agg#0=[SUM($1)])
  LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
EnumerableTableScan(table=[[foodmart2, store]])

2. Aggregate0-equiv0 (transformed by AggregateReduceFunctionsRule, 
AggregateRemoveRule, ProjectMergeRule)
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
  EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]])

3. Aggregate0-equiv1 (transformed by AggregateProjectMergeRule, 
AggregateRemoveRule)
LogicalProject(store_street_address=[$5], grocery_sqft=[$16])
  EnumerableTableScan(table=[[foodmart2, store]])

{code}
 

 

Step-2

RelSubset-A is the child of Filter0, so FilterProjectTransposeRule will 
transform Filter0 with Aggregate0-equiv0 as below:

 
{code:java}
From:
LogicalFilter(condition=[>=($1, 2)])
  LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
EnumerableTableScan(table=[[foodmart2, store]])
To:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))]) 
-->Filter0-equiv
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($16), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($16, 0)), 2)])
EnumerableTableScan(table=[[foodmart2, store]]){code}
 

 

Step-3

ProjectFilterTransposeRule will transform Filter0-equiv as below:

 
{code:java}
From:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))]) 
-->Filter0-equiv
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($16), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($16, 0)), 2)])
EnumerableTableScan(table=[[foodmart2, store]])
To:
LogicalProject(store_street_address=[$0], $f1=[CASE(=(CASE(IS NOT NULL($1), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($1, 0))])
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($1), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($1, 0)), 2)])   
-> FilterX
LogicalProject([$5], [$16])  ---> ProjectX
  EnumerableTableScan(table=[[foodmart2, store]]){code}
We will find that ProjectX already exists in RelSubset-A as Aggregate0-equiv1, so 
FilterProjectTransposeRule will transform FilterX with Aggregate0-equiv0 as 
below:

 
{code:java}
From:
LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL($1), 1:BIGINT, 0:BIGINT), 
0), null:INTEGER, COALESCE($1, 0)), 2)])
  LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]])

To:
LogicalProject(store_street_address=[$5], $f1=[CASE(=(CASE(IS NOT NULL($16), 
1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))])
  LogicalFilter(condition=[>=($CASE(=(CASE(IS NOT NULL(CASE(=(CASE(IS NOT 
NULL($16), 1:BIGINT, 0:BIGINT), 0), null:INTEGER, COALESCE($16, 0))), 1:BIGINT, 
0:BIGINT), 0), null:INTEGER, COALESCE($1, 0)), 2)])
EnumerableTableScan(subset=[rel#62:Subset#0.NONE.[]], table=[[foodmart2, 
store]]){code}
 

As we can see, an even more complex LogicalFilter is generated, and step-2 and 
step-3 alternate infinitely.

> Infinite rule matching when AggregateRemoveRule is enabled for SUM0
> ---
>
> Key: CALCITE-3124
> URL: https://issues.apache.org/jira/browse/CALCITE-3124
> Project: Calcite
>  Issue Type: Bug
>Reporter: Haisheng Yuan
>Assignee: Forward Xu
>Priority: Major
>
> Make the following change (comment out the return clause) to AggregateRemoveRule, 
> and the test case {{JdbcTest.testHavingNot2}} won't complete, due to infinite rule 
> matching.
> {noformat}
> --- 

[jira] [Created] (CALCITE-3528) TIMESTAMPADD/TIMESTAMPDIFF can't handle unit of MILLISECOND

2019-11-21 Thread Zhenghua Gao (Jira)
Zhenghua Gao created CALCITE-3528:
-

 Summary: TIMESTAMPADD/TIMESTAMPDIFF can't handle unit of 
MILLISECOND
 Key: CALCITE-3528
 URL: https://issues.apache.org/jira/browse/CALCITE-3528
 Project: Calcite
  Issue Type: Bug
  Components: core
Affects Versions: 1.21.0
Reporter: Zhenghua Gao


Calcite's SQL parser can't handle a unit of MILLISECOND for the 
TIMESTAMPADD/TIMESTAMPDIFF functions. I also checked MySQL [1], which can't 
handle a unit of MILLISECOND either. 

Is there any reason not to support a unit of MILLISECOND for these two functions?

 [1] 
[https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_timestampadd]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CALCITE-3525) RexSimplify: eliminate redundant rex calls in OR

2019-11-21 Thread Igor Guzenko (Jira)


[ 
https://issues.apache.org/jira/browse/CALCITE-3525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979061#comment-16979061
 ] 

Igor Guzenko commented on CALCITE-3525:
---

Hello [~julianhyde], 

1. To be concrete, the issue is mostly about ITEM, though I believe it applies to 
other function calls as well. The simplifier works fine for {code}(b > 4 and b < 3 
and a = 3){code} where *b* is an input ref, but in the case from the description 
*b* is a RexCall to the ITEM operator. I think such a term should be evaluated to 
false for deterministic functions called with the same arguments (see the sketch 
below). 

2. Regarding the parentheses, I'll definitely align them, otherwise checkstyle 
will bite me. 
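
A rough sketch of that guard (names are illustrative, not the actual RexSimplify 
code): two operands would count as the same term for range-based simplification 
only if they are deterministic and have the same digest, e.g. two identical ITEM 
calls over the same array column.
{code:java}
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.rex.RexUtil;

// Illustrative helper, not part of RexSimplify.
final class SimplifyTermCheck {
  private SimplifyTermCheck() {}

  // True when both expressions are deterministic and structurally identical,
  // so ranges collected for one can be applied to the other.
  static boolean sameDeterministicTerm(RexNode a, RexNode b) {
    return RexUtil.isDeterministic(a)
        && RexUtil.isDeterministic(b)
        && a.toString().equals(b.toString());
  }
}
{code}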

> RexSimplify: eliminate redundant rex calls in OR
> 
>
> Key: CALCITE-3525
> URL: https://issues.apache.org/jira/browse/CALCITE-3525
> Project: Calcite
>  Issue Type: Improvement
>Reporter: Igor Guzenko
>Assignee: Igor Guzenko
>Priority: Major
>
> Sample case to reproduce in {code}RexProgramTest.simplifyOrTerms{code}: 
> {code:java}
> // (a=1 or a=2 or (arr[1]>4 and arr[1]<3 and a=3)) => a=1 or a=2
> final RelDataType intArrayType = typeFactory.createArrayType(intType, -1);
> final RexInputRef ref0 = rexBuilder.makeInputRef(intType, 0);
> final RexInputRef ref3 = rexBuilder.makeInputRef(intArrayType, 3);
> final RexCall itm1 = (RexCall) rexBuilder.makeCall(intType, 
> SqlStdOperatorTable.ITEM,
> ImmutableList.of(ref3, literal1));
> simplify = this.simplify.withParanoid(false);
> checkSimplifyFilter(
> or(
> eq(ref0, literal1),
> eq(ref0, literal2),
> and(
> gt(itm1, literal4),
> lt(itm1, literal3),
> eq(ref0, literal3)
> )
> ),
> "OR(=($0, 1), =($0, 2))"
> );
> simplify = simplify.withParanoid(true);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)