[ https://issues.apache.org/jira/browse/FLINK-24716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17436901#comment-17436901 ]

Wenlong Lyu commented on FLINK-24716:
-------------------------------------

I think the plan is correct; the swallowed predicate is already reflected in the 
empty list of remaining partitions: partitions=[]
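
To make that argument concrete, here is a minimal, self-contained sketch (plain Java, not Flink planner code) of the pruning reasoning: assuming some set of partition values for {{d}}, evaluating {{d > '2012'}} against each of them can leave an empty remaining-partition list, in which case the scan produces no rows and dropping the predicate from the Calc cannot change the result. The partition values below are made up for illustration.

{code}
import java.util.List;
import java.util.stream.Collectors;

public class PartitionPruningSketch {
    public static void main(String[] args) {
        // Hypothetical partition values of column 'd'; not taken from the issue.
        List<String> partitions = List.of("2010", "2011", "2012");

        // Apply the partition predicate d > '2012' during pruning.
        List<String> remaining = partitions.stream()
                .filter(d -> d.compareTo("2012") > 0)
                .collect(Collectors.toList());

        // Prints [], matching partitions=[] in the optimized plan: an empty
        // scan returns no rows, so the dropped predicate cannot affect results.
        System.out.println(remaining);
    }
}
{code}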

> Non-equality predicates on partition columns lead to incorrect plans
> --------------------------------------------------------------------
>
>                 Key: FLINK-24716
>                 URL: https://issues.apache.org/jira/browse/FLINK-24716
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Table SQL / Planner
>            Reporter: Timo Walther
>            Priority: Major
>
> Queries such as
> {code}
> SELECT d FROM T1 WHERE c = 100 AND d > '2012'
> {code}
> where {{d}} is a partition column lead to incorrect plans:
> {code}
> == Abstract Syntax Tree ==
> LogicalProject(d=[$2])
> +- LogicalFilter(condition=[AND(=($0, 100), >($2, _UTF-16LE'2012'))])
>    +- LogicalTableScan(table=[[default_catalog, default_database, T1]])
> == Optimized Physical Plan ==
> Calc(select=[d], where=[=(c, 100)])
> +- TableSourceScan(table=[[default_catalog, default_database, T1, filter=[], partitions=[], project=[c, d]]], fields=[c, d])
> == Optimized Execution Plan ==
> Calc(select=[d], where=[(c = 100)])
> +- TableSourceScan(table=[[default_catalog, default_database, T1, filter=[], partitions=[], project=[c, d]]], fields=[c, d])
> {code}
> It seems that in many cases (both with and without {{SupportsFilterPushDown}}) the 
> {{>}} predicate is swallowed and no longer appears in the final execution plan.
> A reproducible example using the new testing infrastructure can be found 
> [here|https://github.com/twalthr/flink/blob/e5a2cc9bcc9b38cf2b94c9ea7c7296ce94434343/flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/plan/rules/logical/TestClass.java].
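
For readers who want to inspect the plan without the linked test, a minimal sketch using the public Table API is shown below. The table definition (filesystem connector, schema, path) is an assumption for illustration and does not reproduce the schema of {{T1}} in the linked {{TestClass}}; any partitioned connector should do.

{code}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class Flink24716Repro {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inBatchMode().build());

        // Hypothetical partitioned table; schema and connector are assumptions.
        tEnv.executeSql(
                "CREATE TABLE T1 (c INT, b STRING, d STRING) "
                        + "PARTITIONED BY (d) WITH ("
                        + " 'connector' = 'filesystem',"
                        + " 'path' = '/tmp/T1',"
                        + " 'format' = 'csv')");

        // The explain output shows whether 'd > 2012' remains as a filter in
        // the Calc node or is only reflected in the pruned partition list.
        System.out.println(
                tEnv.explainSql("SELECT d FROM T1 WHERE c = 100 AND d > '2012'"));
    }
}
{code}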


