[
https://issues.apache.org/jira/browse/FLINK-17022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Flink Jira Bot updated FLINK-17022:
-----------------------------------
Labels: auto-deprioritized-major auto-unassigned stale-minor (was:
auto-deprioritized-major auto-unassigned)
I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help
the community manage its development. I see this issue has been marked as
Minor, but it is unassigned and neither it nor its Sub-Tasks have been updated
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is
still Minor, please either assign yourself or give an update. Afterwards,
please remove the label; otherwise the issue will be deprioritized in 7 days.
> Flink SQL ROW_NUMBER() Exception: TableException: This calc has no useful
> projection and no filter. It should be removed by CalcRemoveRule.
> -------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: FLINK-17022
> URL: https://issues.apache.org/jira/browse/FLINK-17022
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Planner
> Affects Versions: 1.10.0
> Environment: Flink 1.10 with the PRs for:
> https://issues.apache.org/jira/browse/FLINK-16068
> https://issues.apache.org/jira/browse/FLINK-16345
> Reporter: xingoo
> Priority: Minor
> Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> Exception:
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: This calc has no useful projection and no filter. It should be removed by CalcRemoveRule.
> at org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateProcessCode(CalcCodeGenerator.scala:176)
> at org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateCalcOperator(CalcCodeGenerator.scala:49)
> at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:77)
> at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:39)
> at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
> at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalcBase.translateToPlan(StreamExecCalcBase.scala:38)
> at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlanInternal(StreamExecExchange.scala:84)
> at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlanInternal(StreamExecExchange.scala:44)
> at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
> at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlan(StreamExecExchange.scala:44)
> at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecRank.translateToPlanInternal(StreamExecRank.scala:209)
> at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecRank.translateToPlanInternal(StreamExecRank.scala:53)
> at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
> at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecRank.translateToPlan(StreamExecRank.scala:53)
> at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:60)
> at org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:59)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:59)
> at org.apache.flink.table.planner.delegation.StreamPlanner.explain(StreamPlanner.scala:81)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.explain(TableEnvironmentImpl.java:447)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.explain(TableEnvironmentImpl.java:442)
> at com.ververica.flink.table.gateway.operation.ExplainOperation.lambda$execute$0(ExplainOperation.java:53)
> at com.ververica.flink.table.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:230)
> at com.ververica.flink.table.gateway.operation.ExplainOperation.execute(ExplainOperation.java:53)
> ... 45 more
> {code}
> SQL:
> {code:java}
> create view v1 as
> select a, b, count(1) as c
> from test_kafka_t
> group by a, b, HOP(ts, INTERVAL '10' SECOND, INTERVAL '1' MINUTE);
> explain
> select * from (
>   SELECT *, row_number() over (PARTITION BY a ORDER BY c) AS rn
>   FROM v1
>   -- where 1=1 -- uncommenting this predicate avoids the error; see the workaround sketch below
> )
> where rn <= 5
> {code}
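> Based on the comment above, the error can apparently be worked around by adding a trivial predicate to the inner query; presumably the resulting Calc node then carries a filter, so the "no useful projection and no filter" check no longer trips. A sketch of that workaround variant (unverified, reusing the view v1 from above):
> {code:sql}
> explain
> select * from (
>   SELECT *, row_number() over (PARTITION BY a ORDER BY c) AS rn
>   FROM v1
>   where 1=1  -- trivial filter; per the report, this avoids the TableException
> )
> where rn <= 5
> {code}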
> Kafka source table DDL:
> {code:java}
> CREATE TABLE test_kafka_t (
>   a varchar,
>   b int,
>   ts as PROCTIME()
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = '0.11',
>   'connector.topic' = 'xx',
>   'connector.properties.zookeeper.connect' = 'xx',
>   'connector.properties.bootstrap.servers' = 'xx',
>   'connector.properties.group.id' = 'testGroup',
>   'connector.startup-mode' = 'latest-offset',
>   'format.type' = 'json'
> )
> {code}
--
This message was sent by Atlassian Jira
(v8.20.1#820001)