[ https://issues.apache.org/jira/browse/FLINK-20809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256920#comment-17256920 ]

Shengkai Fang commented on FLINK-20809:
---------------------------------------

Hi all. I don't think it's a bug.

Currently, the rule {{PushLimitIntoTableSourceScanRule}} only matches the 
pattern in which the {{FlinkLogicalSort}} node is the direct parent of the 
{{FlinkLogicalTableSourceScan}} node.
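
For reference, the pattern looks roughly like the following. This is a minimal 
sketch using Calcite's {{RelOptRule}} operand API with a hypothetical class 
name; the real rule also checks that the sort is a pure limit and that the 
source supports limit push-down:

{code:java}
import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.flink.table.planner.plan.nodes.logical.FlinkLogicalSort;
import org.apache.flink.table.planner.plan.nodes.logical.FlinkLogicalTableSourceScan;

// Sketch of the operand pattern only: the rule fires only when the Sort's
// direct input is the TableSourceScan, with nothing in between.
public class LimitPushDownPatternSketch extends RelOptRule {

    public LimitPushDownPatternSketch() {
        super(operand(FlinkLogicalSort.class,
                operand(FlinkLogicalTableSourceScan.class, none())),
            "LimitPushDownPatternSketch");
    }

    @Override
    public void onMatch(RelOptRuleCall call) {
        // The real rule rewrites the scan here to apply the limit;
        // the rewrite logic is omitted in this sketch.
    }
}
{code}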

In this case, there is a Calc(filter) node between the Sort node and the Scan 
node, which prevents the rule from applying.
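
For illustration, the two shapes look roughly like this (hand-simplified 
plans, not actual EXPLAIN output):

{code}
Pattern the rule matches:          Plan for the reported query:

FlinkLogicalSort(fetch=[1])        FlinkLogicalSort(fetch=[1])
+- FlinkLogicalTableSourceScan     +- FlinkLogicalCalc(where=[=($0, 1)])
                                      +- FlinkLogicalTableSourceScan
{code}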

Semantically, we can only push down the filter first and then push down the 
limit. If we pushed only the limit into the source, the output rows might not 
satisfy the condition.
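
A small example with hypothetical data shows why the order matters:

{code:sql}
-- Suppose hive_table stores rows with id = 5, 1, 7, in that order.
select * from hive_table where id = 1 limit 1;

-- If only the limit were pushed into the scan, the scan would emit just the
-- first stored row (id = 5); the filter above it would drop that row and the
-- query would incorrectly return an empty result.
-- With the filter pushed down first, the scan emits the row with id = 1 and
-- the limit correctly keeps it.
{code}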

Currently, we have rules to push down filters, projections, and limits. The 
main remaining problem is supporting filter push-down in the Hive connector.
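
Concretely, the Hive source would need to implement the 
{{SupportsFilterPushDown}} ability interface. A minimal sketch follows; the 
class name is hypothetical, a real source must also implement the usual 
{{ScanTableSource}} methods, and this version conservatively accepts no 
filters:

{code:java}
import java.util.Collections;
import java.util.List;

import org.apache.flink.table.connector.source.abilities.SupportsFilterPushDown;
import org.apache.flink.table.expressions.ResolvedExpression;

// Hypothetical sketch: filter push-down ability for the Hive source.
public class FilterableHiveSourceSketch implements SupportsFilterPushDown {

    @Override
    public Result applyFilters(List<ResolvedExpression> filters) {
        // A real implementation would translate the predicates it understands
        // into Hive-side filtering (e.g. partition pruning) and report the
        // rest back to the planner as remaining filters. Accepting none and
        // returning all filters as remaining is always correct, just slower.
        return Result.of(Collections.emptyList(), filters);
    }
}
{code}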

> Limit push down with Hive table doesn't work when using with filter
> -------------------------------------------------------------------
>
>                 Key: FLINK-20809
>                 URL: https://issues.apache.org/jira/browse/FLINK-20809
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Hive
>    Affects Versions: 1.12.0
>            Reporter: Jun Zhang
>            Priority: Major
>             Fix For: 1.13.0
>
>
> When I use Flink SQL to query a Hive table, like this:
> {code:sql}
> select * from hive_table where id = 1 limit 1
> {code}
>  
> When the SQL contains conditions in the WHERE clause, I found that the limit 
> push-down does not take effect.
> Looking at the comment in the source code, I think the limit should be 
> pushed down. Is this a bug?
> [the comment|https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushLimitIntoTableSourceScanRule.java#L64]


