[ https://issues.apache.org/jira/browse/CALCITE-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17606223#comment-17606223 ]

Jiajun Xie edited comment on CALCITE-5286 at 9/18/22 1:44 AM:
--------------------------------------------------------------

[~kramerul], are you using EnumerableLimit? Your error is thrown by 
VolcanoPlanner, but offset/fetch should belong to EnumerableLimit, not Sort. 
Maybe you can use EnumerableLimit to solve your problem?
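
In the Enumerable convention, EnumerableLimitRule splits a Sort that carries offset/fetch into an EnumerableLimit on top of a Sort that keeps only the collation, so the limit no longer lives on the Sort itself. An illustrative plan shape (field index and input elided, only the shape matters):
{code}
EnumerableLimit(fetch=[?0])
  EnumerableSort(sort0=[$5], dir0=[ASC])
    ...
{code}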

 

Then I have a new problem: when I use only HepPlanner, a StackOverflowError is 
thrown.

Here is a UT in RelMetadataTest. Am I using it the wrong way, or is this a bug?
{code:java}
@Test void testRowCountStackOverFlow() {
  final String sql = "select * from sales.emp e left join (\n"
      + " select * from sales.dept d) d on e.deptno = d.deptno\n"
      + " order by sal limit ?";
  sql(sql).withRelTransform(rel -> {
            // First pass: push the Sort below the Project.
            HepProgramBuilder builder = HepProgram.builder();
            builder.addRuleInstance(CoreRules.SORT_PROJECT_TRANSPOSE);
            HepPlanner prePlanner = new HepPlanner(builder.build());
            prePlanner.setRoot(rel);
            final RelNode r1 = prePlanner.findBestExp();
            // Second pass: push the Sort below the Join.
            builder = HepProgram.builder();
            builder.addRuleInstance(CoreRules.SORT_JOIN_TRANSPOSE);
            HepPlanner planner = new HepPlanner(builder.build());
            planner.setRoot(r1);
            return planner.findBestExp(); // throws StackOverflowError
          })
      .assertThatRowCount(is(EMP_SIZE), is(0D), is(Double.POSITIVE_INFINITY));
} {code}
 


> Join with parameterized LIMIT throws AssertionError "not a literal"
> -------------------------------------------------------------------
>
>                 Key: CALCITE-5286
>                 URL: https://issues.apache.org/jira/browse/CALCITE-5286
>             Project: Calcite
>          Issue Type: Bug
>            Reporter: Ulrich Kramer
>            Priority: Major
>
> A query like the following one
> {code:java}
> select T."name", T."valueLeverId", T."type", T."ID", T."parentId" 
> from (
>   SELECT VD."id" as ID, VD."name", VD."typeId", VD."type", VD."valueLeverId", VD."valueLever", VD."parentId", VDtoSC."VDtoSC_List"
>       FROM VD 
>       LEFT JOIN VDtoSC 
>       ON VD."id" = VDtoSC."Value_Driver_ID"
> ) AS T
> where T."ID" = ? limit ?
> {code}
> fails with
> {code:java}
> findValue:1208, RexLiteral (org.apache.calcite.rex)
> intValue:1183, RexLiteral (org.apache.calcite.rex)
> getMaxRowCount:207, RelMdMaxRowCount (org.apache.calcite.rel.metadata)
> getMaxRowCount_$:-1, GeneratedMetadata_MaxRowCountHandler (org.apache.calcite.rel.metadata.janino)
> getMaxRowCount:-1, GeneratedMetadata_MaxRowCountHandler (org.apache.calcite.rel.metadata.janino)
> getMaxRowCount:277, RelMetadataQuery (org.apache.calcite.rel.metadata)
> alreadySmaller:914, RelMdUtil (org.apache.calcite.rel.metadata)
> checkInputForCollationAndLimit:887, RelMdUtil (org.apache.calcite.rel.metadata)
> onMatch:138, SortJoinTransposeRule (org.apache.calcite.rel.rules)
> onMatch:223, VolcanoRuleCall (org.apache.calcite.plan.volcano)
> drive:59, IterativeRuleDriver (org.apache.calcite.plan.volcano)
> findBestExp:523, VolcanoPlanner (org.apache.calcite.plan.volcano)
> lambda$standard$3:276, Programs (org.apache.calcite.tools)
> run:-1, Programs$$Lambda$2787/0x000000080121f9c0 (org.apache.calcite.tools)
> run:336, Programs$SequenceProgram (org.apache.calcite.tools)
> transform:373, PlannerImpl (org.apache.calcite.prepare)
> {code}
> The 2 tables are located in a schema where joins can't be pushed down.
> See also CALCITE-5048 and CALCITE-2061.
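
For context on the stack trace above: frame getMaxRowCount:207 in RelMdMaxRowCount reaches RexLiteral.intValue, which asserts its argument is a literal; a parameterized LIMIT produces a dynamic parameter instead of a literal, hence "not a literal". Here is a minimal self-contained sketch of the guard pattern (plain Java, not Calcite's real classes — the type names are borrowed only for illustration):

```java
// Sketch of the failing pattern: the MaxRowCount handler narrows Sort.fetch
// as if it were always a RexLiteral. With "limit ?" the fetch is a dynamic
// parameter, so a guard has to return "unknown" instead of asserting.
interface RexNode {}
record RexLiteral(int value) implements RexNode {}      // literal, e.g. LIMIT 10
record RexDynamicParam(int index) implements RexNode {} // parameter, e.g. LIMIT ?

public class MaxRowCountSketch {
  // Hypothetical guard: only narrow when fetch really is a literal.
  static Integer maxRowCount(RexNode fetch) {
    if (fetch instanceof RexLiteral lit) {
      return lit.value(); // a literal LIMIT bounds the row count
    }
    return null;          // dynamic parameter: no static bound available
  }

  public static void main(String[] args) {
    System.out.println(maxRowCount(new RexLiteral(10)));     // prints 10
    System.out.println(maxRowCount(new RexDynamicParam(0))); // prints null
  }
}
```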



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
