[
https://issues.apache.org/jira/browse/FLINK-20913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17263878#comment-17263878
]
Rui Li commented on FLINK-20913:
--------------------------------
[[email protected]] Thanks for the explanations. According to the Hive
[docs|https://github.com/apache/hive/blob/rel/release-2.3.4/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java#L417],
it seems the HiveConf constructor assumes the {{jobConf}} only contains Hadoop
properties. So the proposed fix looks good to me.
> Improve new HiveConf(jobConf, HiveConf.class)
> ---------------------------------------------
>
> Key: FLINK-20913
> URL: https://issues.apache.org/jira/browse/FLINK-20913
> Project: Flink
> Issue Type: New Feature
> Components: Connectors / Hive
> Affects Versions: 1.12.0, 1.12.1, 1.12.2
> Environment: Hive 2.0.1
> Flink 1.12.0
> Query with SQL client
> Reporter: Xingxing Di
> Priority: Major
>
> When we query Hive tables, we get an exception in
> org.apache.flink.connectors.hive.util.HivePartitionUtils#getAllPartitions.
> Exception:
>
> {code:java}
> org.apache.thrift.transport.TTransportException
> {code}
>
> SQL:
> {code:java}
> select * from dxx1 limit 1;
> {code}
>
> After debugging, we found that {{new HiveConf}} overrides the configurations carried by the
> jobConf; in my case, {{hive.metastore.sasl.enabled}} was reset to {{false}}, which
> is unexpected.
> {code:java}
> // org.apache.flink.connectors.hive.util.HivePartitionUtils
> new HiveConf(jobConf, HiveConf.class)
> {code}
>
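> To make the override concrete, here is a minimal sketch of the behavior we saw while debugging (the class name is made up for illustration, and it assumes Hive's built-in default for {{hive.metastore.sasl.enabled}} is {{false}}):
> {code:java}
> import org.apache.hadoop.hive.conf.HiveConf;
> import org.apache.hadoop.mapred.JobConf;
>
> public class HiveConfOverrideDemo {
>     public static void main(String[] args) {
>         JobConf jobConf = new JobConf();
>         jobConf.set("hive.metastore.sasl.enabled", "true");
>
>         // new HiveConf(conf, cls) copies jobConf first, but its initialization
>         // then loads the Hive defaults (and hive-site.xml, if present) on top,
>         // so the value carried over from jobConf is reset to the default:
>         HiveConf hiveConf = new HiveConf(jobConf, HiveConf.class);
>         System.out.println(hiveConf.get("hive.metastore.sasl.enabled")); // prints "false"
>     }
> }
> {code}
>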
> *I think we should add a HiveConfUtils class for creating HiveConf instances, which would
> look like this:*
>
> {code:java}
> HiveConf hiveConf = new HiveConf(jobConf, HiveConf.class);
> // Re-apply jobConf so that its values take precedence over the Hive
> // defaults loaded by the HiveConf constructor.
> hiveConf.addResource(jobConf);
> {code}
> The above code fixes the error; I will open a PR if this improvement is
> acceptable.
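>
> A possible shape for that utility, as a sketch (the {{create}} method name is just a suggestion):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hive.conf.HiveConf;
>
> /** Utility for creating a HiveConf without losing properties set on the jobConf. */
> public class HiveConfUtils {
>
>     public static HiveConf create(Configuration jobConf) {
>         HiveConf hiveConf = new HiveConf(jobConf, HiveConf.class);
>         // The constructor loads Hive defaults on top of the copied jobConf
>         // values, so re-apply jobConf as a resource to restore them.
>         hiveConf.addResource(jobConf);
>         return hiveConf;
>     }
> }
> {code}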
>
> Here is the detailed error stack:
> {code:java}
> 2021-01-10 17:27:11,995 WARN  org.apache.flink.table.client.cli.CliClient [] - Could not execute SQL statement.
> org.apache.flink.table.client.gateway.SqlExecutionException: Invalid SQL query.
>     at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:527) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:365) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:634) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:324) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at java.util.Optional.ifPresent(Optional.java:159) [?:1.8.0_202]
>     at org.apache.flink.table.client.cli.CliClient.open(CliClient.java:216) [flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:141) [flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:114) [flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:196) [flink-sql-client_2.11-1.12.0.jar:1.12.0]
> Caused by: org.apache.flink.connectors.hive.FlinkHiveException: Failed to collect all partitions from hive metaStore
>     at org.apache.flink.connectors.hive.util.HivePartitionUtils.getAllPartitions(HivePartitionUtils.java:142) ~[flink-connector-hive_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.connectors.hive.HiveTableSource.getDataStream(HiveTableSource.java:133) ~[flink-connector-hive_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.connectors.hive.HiveTableSource$1.produceDataStream(HiveTableSource.java:119) ~[flink-connector-hive_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:94) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlan(BatchExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlanInternal(BatchExecLimit.scala:105) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlanInternal(BatchExecLimit.scala:47) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlan(BatchExecLimit.scala:47) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecExchange.translateToPlanInternal(BatchExecExchange.scala:141) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecExchange.translateToPlanInternal(BatchExecExchange.scala:52) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecExchange.translateToPlan(BatchExecExchange.scala:52) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlanInternal(BatchExecLimit.scala:105) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlanInternal(BatchExecLimit.scala:47) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlan(BatchExecLimit.scala:47) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToTransformation(BatchExecLegacySink.scala:129) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlanInternal(BatchExecLegacySink.scala:95) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlanInternal(BatchExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlan(BatchExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:86) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:85) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.Iterator$class.foreach(Iterator.scala:891) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:85) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:167) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1267) ~[flink-table_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1259) ~[flink-table_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:327) ~[flink-table_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:286) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:257) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.createPipeline(ExecutionContext.java:283) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:521) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     ... 8 more
> Caused by: org.apache.thrift.transport.TTransportException
>     at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) ~[hive-exec-2.0.1.jar:2.0.1]
>     at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[hive-exec-2.0.1.jar:2.0.1]
>     at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) ~[hive-exec-2.0.1.jar:2.0.1]
>     at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) ~[hive-exec-2.0.1.jar:2.0.1]
>     at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) ~[hive-exec-2.0.1.jar:2.0.1]
>     at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77) ~[hive-exec-2.0.1.jar:2.0.1]
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table(ThriftHiveMetastore.java:1282) ~[hive-exec-2.0.1.jar:2.0.1]
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table(ThriftHiveMetastore.java:1268) ~[hive-exec-2.0.1.jar:2.0.1]
>     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1272) ~[hive-exec-2.0.1.jar:2.0.1]
>     at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) ~[?:?]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_202]
>     at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152) ~[hive-exec-2.0.1.jar:2.0.1]
>     at com.sun.proxy.$Proxy32.getTable(Unknown Source) ~[?:?]
>     at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) ~[?:?]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_202]
>     at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_202]
>     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2095) ~[hive-exec-2.0.1.jar:2.0.1]
>     at com.sun.proxy.$Proxy32.getTable(Unknown Source) ~[?:?]
>     at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.getTable(HiveMetastoreClientWrapper.java:117) ~[flink-connector-hive_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.connectors.hive.util.HivePartitionUtils.getAllPartitions(HivePartitionUtils.java:114) ~[flink-connector-hive_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.connectors.hive.HiveTableSource.getDataStream(HiveTableSource.java:133) ~[flink-connector-hive_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.connectors.hive.HiveTableSource$1.produceDataStream(HiveTableSource.java:119) ~[flink-connector-hive_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:94) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlan(BatchExecTableSourceScan.scala:44) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlanInternal(BatchExecLimit.scala:105) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlanInternal(BatchExecLimit.scala:47) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlan(BatchExecLimit.scala:47) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecExchange.translateToPlanInternal(BatchExecExchange.scala:141) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecExchange.translateToPlanInternal(BatchExecExchange.scala:52) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecExchange.translateToPlan(BatchExecExchange.scala:52) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlanInternal(BatchExecLimit.scala:105) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlanInternal(BatchExecLimit.scala:47) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlan(BatchExecLimit.scala:47) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToTransformation(BatchExecLegacySink.scala:129) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlanInternal(BatchExecLegacySink.scala:95) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlanInternal(BatchExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLegacySink.translateToPlan(BatchExecLegacySink.scala:48) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:86) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.delegation.BatchPlanner$$anonfun$translateToPlan$1.apply(BatchPlanner.scala:85) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.Iterator$class.foreach(Iterator.scala:891) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1334) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.IterableLike$class.foreach(IterableLike.scala:72) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.delegation.BatchPlanner.translateToPlan(BatchPlanner.scala:85) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:167) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1267) ~[flink-table_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1259) ~[flink-table_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:327) ~[flink-table_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$createPipeline$1(ExecutionContext.java:286) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:257) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.createPipeline(ExecutionContext.java:283) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     at org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:521) ~[flink-sql-client_2.11-1.12.0.jar:1.12.0]
>     ... 8 more
> {code}
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)