xiaojin.wy created FLINK-15238:
----------------------------------

             Summary: A SQL query can't generate a valid execution plan
                 Key: FLINK-15238
                 URL: https://issues.apache.org/jira/browse/FLINK-15238
             Project: Flink
          Issue Type: Bug
          Components: Table SQL / Client
    Affects Versions: 1.10.0
            Reporter: xiaojin.wy


The tables and the query are like this:
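
The exact DDL and query text did not come through in this report, so the sketch below is only a reconstruction from the logical plan in the exception. The schemas (classic SCOTT-style scott_dept and scott_emp tables in default_catalog.default_database), the connector placeholder, and the ORDER BY ... LIMIT 1 scalar subquery are assumptions, not the original statement.

-- Hypothetical table definitions; the real connector and options are unknown:
-- CREATE TABLE scott_dept (deptno INT, dname STRING, loc STRING) WITH (...);
-- CREATE TABLE scott_emp (empno INT, ename STRING, deptno INT) WITH (...);

-- One query shape consistent with the LogicalProject / LogicalJoin(left, condition=true)
-- / LogicalSort(fetch=1) nodes of the failing plan:
SELECT
  deptno,
  (SELECT empno FROM scott_emp ORDER BY empno LIMIT 1) AS x
FROM scott_dept;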

 

 

After executing the SQL, the following exception appears:

[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:

LogicalProject(deptno=[$0], x=[$3])
  LogicalJoin(condition=[true], joinType=[left])
    LogicalTableScan(table=[[default_catalog, default_database, scott_dept]])
    LogicalSort(sort0=[$0], dir0=[ASC], fetch=[1])
      LogicalProject(empno=[$0])
        LogicalTableScan(table=[[default_catalog, default_database, scott_emp]])

This exception indicates that the query uses an unsupported SQL feature.
Please check the documentation for the set of currently supported SQL features.

 

 

The full exception is:

Caused by: org.apache.flink.table.api.TableException: Cannot generate a valid execution plan for the given query:

LogicalProject(deptno=[$0], x=[$3])
  LogicalJoin(condition=[true], joinType=[left])
    LogicalTableScan(table=[[default_catalog, default_database, scott_dept]])
    LogicalSort(sort0=[$0], dir0=[ASC], fetch=[1])
      LogicalProject(empno=[$0])
        LogicalTableScan(table=[[default_catalog, default_database, scott_emp]])

This exception indicates that the query uses an unsupported SQL feature.
Please check the documentation for the set of currently supported SQL features.
    at org.apache.flink.table.plan.Optimizer.runVolcanoPlanner(Optimizer.scala:284)
    at org.apache.flink.table.plan.Optimizer.optimizeLogicalPlan(Optimizer.scala:199)
    at org.apache.flink.table.plan.StreamOptimizer.optimize(StreamOptimizer.scala:66)
    at org.apache.flink.table.planner.StreamPlanner.translateToType(StreamPlanner.scala:389)
    at org.apache.flink.table.planner.StreamPlanner.writeToRetractSink(StreamPlanner.scala:308)
    at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$writeToSink(StreamPlanner.scala:272)
    at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:166)
    at org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:145)
    at scala.Option.map(Option.scala:146)
    at org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:145)
    at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
    at org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:117)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at org.apache.flink.table.planner.StreamPlanner.translate(StreamPlanner.scala:117)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
    at org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428)
    at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryInternal$12(LocalExecutor.java:640)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:227)
    at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:638)
    ... 8 more


