[ 
https://issues.apache.org/jira/browse/FLINK-15284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15284:
-------------------------------
    Description: 
*The sql is:*

CREATE TABLE `t` (
 x INT
 ) WITH (
 'format.field-delimiter'=',',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 
'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_binary_comparison_coercion/sources/t.csv',
 'format.type'='csv'
 );

SELECT cast(' ' as BINARY(2)) = X'0020' FROM t;

*The exception is:*

[ERROR] Could not execute SQL statement. Reason:
 org.apache.flink.table.api.TableException: Failed to push project into table 
source! table source with pushdown capability must override and change 
explainSource() API to explain the pushdown applied!

 

 

*The whole exception is:*

Caused by: org.apache.flink.table.api.TableException: Sql optimization: Cannot generate a valid execution plan for the given query:
 
LogicalSink(name=[`default_catalog`.`default_database`.`_tmp_table_2136189659`], fields=[EXPR$0])
+- LogicalProject(EXPR$0=[false])
   +- LogicalTableScan(table=[[default_catalog, default_database, t, source: [CsvTableSource(read fields: x)]]])
Failed to push project into table source! table source with pushdown capability must override and change explainSource() API to explain the pushdown applied!
Please check the documentation for the set of currently supported SQL features.
 at org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:86)
 at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
 at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
 at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
 at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at 
scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
 at scala.collection.immutable.List.foreach(List.scala:392) at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
 at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
 at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:223)
 at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
 at 
org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428) at 
org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryAndPersistInternal$14(LocalExecutor.java:701)
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:231)
 at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersistInternal(LocalExecutor.java:699)
 ... 8 more
Caused by: org.apache.flink.table.api.TableException: Failed to push project into table source! table source with pushdown capability must override and change explainSource() API to explain the pushdown applied!
 at org.apache.flink.table.planner.plan.rules.logical.PushProjectIntoTableSourceScanRule.onMatch(PushProjectIntoTableSourceScanRule.scala:84)
 at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:208)
 at 
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:631)
 at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:327) at 
org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:64)
 ... 35 more
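For context, the stack trace shows the error is raised by PushProjectIntoTableSourceScanRule, which treats a projection pushdown as applied only if the new table source's explainSource() string differs from the old one. The sketch below is a minimal, self-contained Java model of that consistency check (hypothetical class and method names, not the actual Flink classes); in this issue the query reads no columns, so the projected field list is empty and the CSV source's explain string can come back unchanged:

```java
// Minimal model of the planner check behind this error
// (hypothetical stand-in types; NOT the actual Flink implementation).
public class ProjectPushdownCheck {

    /**
     * A pushdown counts as applied only if the source's explain
     * string changed to reflect the projected field list.
     */
    public static boolean isValidPushdown(String oldExplain, String newExplain) {
        return !oldExplain.equals(newExplain);
    }

    /** Mirrors the rule's failure mode when the explain string is unchanged. */
    public static void check(String oldExplain, String newExplain) {
        if (!isValidPushdown(oldExplain, newExplain)) {
            throw new IllegalStateException(
                "Failed to push project into table source! table source with pushdown "
                + "capability must override and change explainSource() API to explain "
                + "the pushdown applied!");
        }
    }

    public static void main(String[] args) {
        // SELECT cast(' ' as BINARY(2)) = X'0020' FROM t references no columns,
        // so the projection pushes an empty field list; if the source still
        // reports the same explain string, the check fails as in this issue.
        String before = "CsvTableSource(read fields: x)";
        String after  = "CsvTableSource(read fields: x)"; // unchanged -> error
        System.out.println(isValidPushdown(before, after));
    }
}
```

Under this model, a source that overrides explainSource() to include its projected fields (e.g. "read fields: x" vs. "read fields: ") passes the check even for an empty projection.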

 

  was:
*The sql is:*

CREATE TABLE `t` (
 x INT
 ) WITH (
 'format.field-delimiter'=',',
 'connector.type'='filesystem',
 'format.derive-schema'='true',
 
'connector.path'='hdfs://zthdev/defender_test_data/daily_regression_batch_spark_1.10/test_binary_comparison_coercion/sources/t.csv',
 'format.type'='csv'
 );

SELECT cast(' ' as BINARY(2)) = X'0020' FROM t;

*The exception is:*

[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.TableException: Failed to push project into table 
source! table source with pushdown capability must override and change 
explainSource() API to explain the pushdown applied!

 

 

*The whole exception is:*

Caused by: org.apache.flink.table.api.TableException: Sql optimization: Cannot generate a valid execution plan for the given query:
LogicalSink(name=[`default_catalog`.`default_database`.`_tmp_table_2136189659`], fields=[EXPR$0])
+- LogicalProject(EXPR$0=[false])
   +- LogicalTableScan(table=[[default_catalog, default_database, t, source: [CsvTableSource(read fields: x)]]])
Failed to push project into table source! table source with pushdown capability must override and change explainSource() API to explain the pushdown applied!
Please check the documentation for the set of currently supported SQL features.
 at org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:86)
 at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
 at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
 at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
 at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
 at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at 
scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
 at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
 at scala.collection.immutable.List.foreach(List.scala:392) at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
 at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
 at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:223)
 at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
 at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
 at 
org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428) at 
org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryAndPersistInternal$14(LocalExecutor.java:701)
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:231)
 at 
org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersistInternal(LocalExecutor.java:699)
 ... 8 more
Caused by: org.apache.flink.table.api.TableException: Failed to push project into table source! table source with pushdown capability must override and change explainSource() API to explain the pushdown applied!
 at org.apache.flink.table.planner.plan.rules.logical.PushProjectIntoTableSourceScanRule.onMatch(PushProjectIntoTableSourceScanRule.scala:84)
 at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:208)
 at 
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:631)
 at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:327) at 
org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:64)
 ... 35 more

 


> Sql error (Failed to push project into table source!)
> -----------------------------------------------------
>
>                 Key: FLINK-15284
>                 URL: https://issues.apache.org/jira/browse/FLINK-15284
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Client
>    Affects Versions: 1.10.0
>            Reporter: xiaojin.wy
>            Priority: Major
>
> *The sql is:*
> CREATE TABLE `t` (
>  x INT
>  ) WITH (
>  'format.field-delimiter'=',',
>  'connector.type'='filesystem',
>  'format.derive-schema'='true',
>  
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_binary_comparison_coercion/sources/t.csv',
>  'format.type'='csv'
>  );
> SELECT cast(' ' as BINARY(2)) = X'0020' FROM t;
> *The exception is:*
> [ERROR] Could not execute SQL statement. Reason:
>  org.apache.flink.table.api.TableException: Failed to push project into table 
> source! table source with pushdown capability must override and change 
> explainSource() API to explain the pushdown applied!
>  
>  
> *The whole exception is:*
> Caused by: org.apache.flink.table.api.TableException: Sql optimization: 
> Cannot generate a valid execution plan for the given query:Caused by: 
> org.apache.flink.table.api.TableException: Sql optimization: Cannot generate 
> a valid execution plan for the given query:
>  
> LogicalSink(name=[`default_catalog`.`default_database`.`_tmp_table_2136189659`],
>  fields=[EXPR$0])+- LogicalProject(EXPR$0=[false])+   - 
> LogicalTableScan(table=[[default_catalog, default_database, t, source: 
> [CsvTableSource(read fields: x)]]])
>  Failed to push project into table source! table source with pushdown 
> capability must override and change explainSource() API to explain the 
> pushdown applied!Please check the documentation for the set of currently 
> supported SQL features. at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:86)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
>  at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at 
> scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157) at 
> scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104) at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:83)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:56)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:44)
>  at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:223)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:680)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertIntoInternal(TableEnvironmentImpl.java:353)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.insertInto(TableEnvironmentImpl.java:341)
>  at 
> org.apache.flink.table.api.internal.TableImpl.insertInto(TableImpl.java:428) 
> at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeQueryAndPersistInternal$14(LocalExecutor.java:701)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:231)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryAndPersistInternal(LocalExecutor.java:699)
>  ... 8 moreCaused by: org.apache.flink.table.api.TableException: Failed to 
> push project into table source! table source with pushdown capability must 
> override and change explainSource() API to explain the pushdown applied! at 
> org.apache.flink.table.planner.plan.rules.logical.PushProjectIntoTableSourceScanRule.onMatch(PushProjectIntoTableSourceScanRule.scala:84)
>  at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:208)
>  at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:631)
>  at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:327) 
> at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:64)
>  ... 35 more^C
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
