[ https://issues.apache.org/jira/browse/FLINK-30006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-30006:
-----------------------------------
    Labels: pull-request-available  (was: )

> Cannot remove columns that are incorrectly considered constants from an 
> Aggregate In Streaming
> ----------------------------------------------------------------------------------------------
>
>                 Key: FLINK-30006
>                 URL: https://issues.apache.org/jira/browse/FLINK-30006
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Planner
>    Affects Versions: 1.16.0
>            Reporter: lincoln lee
>            Assignee: lincoln lee
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.17.0
>
>
> In streaming, columns generated by dynamic functions are incorrectly 
> considered constants and removed from an Aggregate by the optimization rule 
> `CoreRules.AGGREGATE_PROJECT_PULL_UP_CONSTANTS` (RelMdPredicates only treats 
> non-deterministic functions as non-constant, but that check is not 
> sufficient for streaming: dynamic functions such as CURRENT_DATE are 
> deterministic within a single execution yet change over time; see the 
> sketch below).
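>
> A hedged, illustrative sketch (plain Calcite + Scala, assuming a 
> calcite-core dependency on the classpath; not part of this issue's patch) 
> of the deterministic-vs-dynamic distinction the rule misses: CURRENT_DATE 
> reports itself as deterministic, so the constant pull-up treats it as 
> foldable, while its dynamic-function flag shows its value changes between 
> evaluations.
> {code}
> import org.apache.calcite.sql.fun.SqlStdOperatorTable
>
> object DynamicVsDeterministic {
>   def main(args: Array[String]): Unit = {
>     val currentDate = SqlStdOperatorTable.CURRENT_DATE
>     // "Deterministic" in Calcite's sense: the same result within a single
>     // (batch) query execution, which is why the pull-up treats it as constant
>     println(s"isDeterministic   = ${currentDate.isDeterministic}")   // true
>     // "Dynamic": the result changes between evaluations, which is exactly
>     // what makes the pull-up unsafe on an unbounded streaming query
>     println(s"isDynamicFunction = ${currentDate.isDynamicFunction}") // true
>   }
> }
> {code}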
> An example query:
> {code}
>   @Test
>   def testReduceGroupKey(): Unit = {
>     util.tableEnv.executeSql("""
>                                |CREATE TABLE t1(
>                                | a INT,
>                                | b VARCHAR,
>                                | cat VARCHAR,
>                                | gmt_date DATE,
>                                | cnt BIGINT,
>                                | PRIMARY KEY (cat) NOT ENFORCED
>                                |) WITH (
>                                | 'connector' = 'values'
>                                |)
>                                |""".stripMargin)
>     util.verifyExecPlan(s"""
>                            |SELECT
>                            |     cat, gmt_date, SUM(cnt), COUNT(*)
>                            |FROM t1
>                            |WHERE gmt_date = current_date
>                            |GROUP BY cat, gmt_date
>                            |""".stripMargin)
>   }
> {code}
> The wrong plan: `gmt_date` has been dropped from the group key and is 
> re-computed after the aggregation as `CAST(CURRENT_DATE() AS DATE)`, which 
> can differ from the value that passed the filter:
> {code}
> Calc(select=[cat, CAST(CURRENT_DATE() AS DATE) AS gmt_date, EXPR$2, EXPR$3])
> +- GroupAggregate(groupBy=[cat], select=[cat, SUM(cnt) AS EXPR$2, COUNT(*) AS EXPR$3])
>    +- Exchange(distribution=[hash[cat]])
>       +- Calc(select=[cat, cnt], where=[=(gmt_date, CURRENT_DATE())])
>          +- TableSourceScan(table=[[default_catalog, default_database, t1, filter=[], project=[cat, cnt, gmt_date], metadata=[]]], fields=[cat, cnt, gmt_date])
> {code}
> The expected plan, with `gmt_date` kept in the group key:
> {code}
> GroupAggregate(groupBy=[cat, gmt_date], select=[cat, gmt_date, SUM(cnt) AS EXPR$2, COUNT(*) AS EXPR$3])
> +- Exchange(distribution=[hash[cat, gmt_date]])
>    +- Calc(select=[cat, gmt_date, cnt], where=[(gmt_date = CURRENT_DATE())])
>       +- TableSourceScan(table=[[default_catalog, default_database, t1, filter=[], project=[cat, gmt_date, cnt], metadata=[]]], fields=[cat, gmt_date, cnt])
> {code}
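>
> A minimal sketch (plain Scala, no Flink dependencies; the two-day input is 
> a made-up example) of why the two plans disagree on an unbounded stream: 
> each row passes the `gmt_date = CURRENT_DATE` filter on the day it arrives, 
> so grouping by `cat` alone merges rows from different days, while grouping 
> by `(cat, gmt_date)` keeps them apart.
> {code}
> import java.time.LocalDate
>
> object PullUpUnsafeOnStreams {
>   def main(args: Array[String]): Unit = {
>     // (cat, gmt_date, cnt) rows that each passed the filter on their arrival day
>     val rows = Seq(
>       ("fruit", LocalDate.parse("2022-11-14"), 3L),
>       ("fruit", LocalDate.parse("2022-11-15"), 5L))
>
>     // Expected plan: group key (cat, gmt_date) -> one aggregate per (cat, day)
>     val expected = rows.groupBy(r => (r._1, r._2)).map { case (k, v) => k -> v.map(_._3).sum }
>     println(expected) // entries (fruit,2022-11-14) -> 3 and (fruit,2022-11-15) -> 5
>
>     // Wrong plan: gmt_date pulled up as a "constant", group key is cat only,
>     // so the two days incorrectly collapse into a single aggregate
>     val wrong = rows.groupBy(_._1).map { case (k, v) => k -> v.map(_._3).sum }
>     println(wrong) // single entry fruit -> 8
>   }
> }
> {code}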
> In addition to fixing this issue, we need to systematically review all 
> optimization rules used in streaming to guard against similar problems.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
