[ https://issues.apache.org/jira/browse/FLINK-22118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Flink Jira Bot updated FLINK-22118:
-----------------------------------
      Labels: auto-deprioritized-major auto-deprioritized-minor  (was: auto-deprioritized-major stale-minor)
    Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue, or revive the 
public discussion.


> Always apply projection push down in blink planner
> --------------------------------------------------
>
>                 Key: FLINK-22118
>                 URL: https://issues.apache.org/jira/browse/FLINK-22118
>             Project: Flink
>          Issue Type: Improvement
>          Components: Table SQL / Planner
>    Affects Versions: 1.13.0
>            Reporter: Shengkai Fang
>            Priority: Not a Priority
>              Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Please add the following case to `TableSourceTest`.
> {code:scala}
>     val ddl =
>       s"""
>          |CREATE TABLE NestedItemTable (
>          |  `id` INT,
>          |  `result` ROW<
>          |     `data_arr` ROW<`value` BIGINT> ARRAY,
>          |     `data_map` MAP<STRING, ROW<`value` BIGINT>>>
>          |  ) WITH (
>          |    'connector' = 'values',
>          |    'nested-projection-supported' = 'true',
>          |    'bounded' = 'true'
>          |  )
>          |""".stripMargin
>     util.tableEnv.executeSql(ddl)
>     util.verifyExecPlan(
>       s"""
>          |SELECT
>          |  `result`.`data_arr`[`id`].`value`,
>          |  `result`.`data_map`['item'].`value`
>          |FROM NestedItemTable
>          |""".stripMargin
>     )
> {code}
> With this case, we currently get the following optimized plan:
> {code:java}
> Calc(select=[ITEM(result.data_arr, id).value AS EXPR$0, ITEM(result.data_map, _UTF-16LE'item').value AS EXPR$1])
> +- TableSourceScan(table=[[default_catalog, default_database, NestedItemTable]], fields=[id, result])
> {code}
> but the expected plan is:
> {code:java}
> Calc(select=[ITEM(result_data_arr, id).value AS EXPR$0, ITEM(result_data_map, _UTF-16LE'item').value AS EXPR$1])
> +- TableSourceScan(table=[[default_catalog, default_database, NestedItemTable, project=[result_data_arr, result_data_map, id]]], fields=[result_data_arr, result_data_map, id])
> {code}
> It seems the planner does not apply the rule that pushes the projection into the scan. The actual and expected plans differ because the actual optimized plan reads more fields from the source than the expected one.
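>
> A possible direction for whoever picks this up: the push-down rule can only rewrite the scan when the source declares nested projection support. Below is a minimal sketch, in Scala, of a `ScanTableSource` that opts into nested projection push down via `SupportsProjectionPushDown`; the class name and field handling are illustrative, not the actual `values` test connector.
> {code:scala}
> import org.apache.flink.table.connector.ChangelogMode
> import org.apache.flink.table.connector.source.{DynamicTableSource, ScanTableSource}
> import org.apache.flink.table.connector.source.abilities.SupportsProjectionPushDown
>
> // Sketch only: a source that tells the planner it can handle nested projections.
> class NestedProjectableSource extends ScanTableSource with SupportsProjectionPushDown {
>
>   // Index paths handed over by the planner, e.g. [[1, 0], [1, 1], [0]] for
>   // result.data_arr, result.data_map and id in the schema above.
>   private var projectedFields: Array[Array[Int]] = _
>
>   // Returning true allows the planner to push projections of nested fields
>   // (e.g. result.data_arr) instead of the whole top-level row.
>   override def supportsNestedProjection(): Boolean = true
>
>   // Called by the planner with the (possibly nested) field index paths the
>   // query actually reads; the source should narrow its produced type to them.
>   override def applyProjection(projectedFields: Array[Array[Int]]): Unit = {
>     this.projectedFields = projectedFields
>   }
>
>   override def getChangelogMode(): ChangelogMode = ChangelogMode.insertOnly()
>
>   override def getScanRuntimeProvider(
>       context: ScanTableSource.ScanContext): ScanTableSource.ScanRuntimeProvider = {
>     // Omitted in this sketch: a real source would return a provider that
>     // emits rows containing only the projected fields.
>     throw new UnsupportedOperationException("sketch only")
>   }
>
>   override def copy(): DynamicTableSource = {
>     val copied = new NestedProjectableSource
>     copied.projectedFields = projectedFields
>     copied
>   }
>
>   override def asSummaryString(): String = "NestedProjectableSource"
> }
> {code}
> Note that the `values` connector in the test above already sets 'nested-projection-supported' = 'true', so the missing push down is presumably on the planner side; the sketch is only meant to show which part of the source contract the rule checks.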



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
