[ 
https://issues.apache.org/jira/browse/FLINK-33168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17773891#comment-17773891
 ] 

luoyuxia edited comment on FLINK-33168 at 10/11/23 4:16 AM:
------------------------------------------------------------

I tried putting table-planner-loader.jar in /lib and it works. Since this behavior is 
expected in Flink 1.18 (see FLINK-31575), and we recommend not swapping 
table-planner-loader with table-planner, I would like to close this issue. Feel 
free to reopen it if you still run into problems. 

The reason is a little complex: we include 
{{org/apache/calcite/plan/RelOptRule.class}} in flink-sql-connector-hive 3.1.3; 
the rationale can be seen 
[here|https://github.com/apache/flink/blob/5269631af525a01d944cfa9994a116fb27b80b1b/flink-connectors/flink-sql-connector-hive-3.1.3/pom.xml#L198].
 
The planner then loads the class {{RelOptRule}} from 
flink-sql-connector-hive 3.1.3. That copy of {{RelOptRule}} was compiled in 
flink-sql-connector-hive, which shades {{com.google}} to 
{{{}org.apache.flink.hive.shaded.com.google{}}}.
Since {{RelOptRule}} refers to 
{{{}com.google.common.collect.ImmutableList{}}}, after compilation it refers 
to the field 
{{{}org/apache/calcite/plan/RelOptRuleOperandChildren.operands:Lorg/apache/flink/hive/shaded/com/google/common/collect/ImmutableList{}}}.

But {{RelOptRuleOperandChildren}} is compiled in flink-table-planner, which 
shades {{com.google}} to {{{}org.apache.flink.calcite.shaded.com.google{}}}, so 
it only declares a field of type 
{{{}org/apache/flink/calcite/shaded/com/google/common/collect/ImmutableList{}}}.
The field descriptors don't match, which causes 
{{{}java.lang.NoSuchFieldError: operands{}}}.
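The mechanism can be sketched with a small, self-contained simulation (class names below are hypothetical stand-ins, not the real Flink/Calcite classes): the JVM resolves a field by name *and* descriptor, so when the caller's expected field type (the hive-shaded list) differs from the type the loaded class actually declares (the calcite-shaded list), resolution fails even though a field named {{operands}} exists.

```java
// Simulated shading conflict: two jars relocate com.google differently,
// so the hive-compiled caller and the planner-compiled class disagree on
// the descriptor of the "operands" field. All names here are stand-ins.
import java.lang.reflect.Field;

public class ShadingConflictDemo {
    // Stand-ins for the two differently-relocated ImmutableList types.
    static class CalciteShadedImmutableList {}
    static class HiveShadedImmutableList {}

    // The planner's compiled class: only the calcite-shaded field exists.
    static class RelOptRuleOperandChildren {
        CalciteShadedImmutableList operands = new CalciteShadedImmutableList();
    }

    public static void main(String[] args) {
        try {
            // The hive-compiled RelOptRule effectively resolves a field named
            // "operands" whose type is the hive-shaded ImmutableList.
            Field f = RelOptRuleOperandChildren.class.getDeclaredField("operands");
            if (!f.getType().equals(HiveShadedImmutableList.class)) {
                // JVM field resolution matches name AND descriptor, so the
                // descriptor mismatch surfaces as NoSuchFieldError at runtime.
                throw new NoSuchFieldError("operands");
            }
        } catch (NoSuchFieldException | NoSuchFieldError e) {
            System.out.println("java.lang.NoSuchFieldError: operands (simulated)");
        }
    }
}
```

In the real classpath the same mismatch happens at bytecode level, without reflection: {{RelOptRule.operand}} references the hive-shaded descriptor, the planner's {{RelOptRuleOperandChildren}} carries the calcite-shaded one.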

It's similar to FLINK-32286.



> An error occurred when executing sql, java.lang.NoSuchFieldError: operands
> --------------------------------------------------------------------------
>
>                 Key: FLINK-33168
>                 URL: https://issues.apache.org/jira/browse/FLINK-33168
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Client
>    Affects Versions: 1.18.0
>            Reporter: macdoor615
>            Assignee: Zheng yunhong
>            Priority: Major
>
> Environment:
>  
> {code:java}
> Linux hb3-prod-hadoop-006 4.18.0-477.27.1.el8_8.x86_64 #1 SMP Thu Sep 21 
> 06:49:25 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux
> openjdk version "1.8.0_382"
> OpenJDK Runtime Environment (build 1.8.0_382-b05)
> OpenJDK 64-Bit Server VM (build 25.382-b05, mixed mode)
> flink-1.18.0-RC1 , 
> https://github.com/apache/flink/releases/tag/release-1.18.0-rc1
> {code}
>  
> I execute the following sql in sql-client.sh.
>  
> {code:java}
> insert into svc1_paimon_prod.cq.b_customer_ecus
> select
>   rcus.id id,
>   if(cus.id is not null, cus.id, try_cast(NULL as string)) cus_id,    
>   if(cus.id is null and cus_rownum = 1, rcus.id, try_cast(NULL as string)) 
> newCus_id,
>   companyID,
>   customerProvinceNumber,
>   mobilePhone,
>   oprCode,
>   customerNum,
>   staffName,
>   location,
>   staffNumber,
>   extendInfo,
>   customerName,
>   case when companyID='000' then '名称1'
>        when companyID='002' then '名称2'
>        else '新名称'
>        end prov,
>   row (
>     accessToken,
>     busType,
>     cutOffDay,
>     domain,
>     envFlag,
>     routeType,
>     routeValue,
>     sessionID,
>     sign,
>     signMethod,
>     org_timeStamp,
>     transIDO,
>     userPartyID,
>     version
>   ) raw_message,
>   named_struct(
>     'id', cus.id,
>     'name', cus.name,
>     'code', cus.code,
>     'customerlevel', cus.customerlevel,
>     'prov', cus.prov,
>     'container', cus.container,
>     'crtime', cus.crtime,
>     'updtime', cus.updtime
>   ) existing_cus,
>   cus_rownum,
>   to_timestamp(org_timeStamp, 'yyyyMMddHHmmss') as org_timeStamp,
>   raw_rowtime,
>   localtimestamp as raw_rowtime1,
>   dt
> from svc1_paimon_prod.raw_data.abscustinfoserv_content_append_cq
>   /*+ OPTIONS('consumer-id' = '创建新客户id') */
>   rcus
> left join svc1_mysql_test.gem_svc1_vpn.bv_customer
> FOR SYSTEM_TIME AS OF rcus.proctime AS cus on rcus.customerNum=cus.code
> {code}
> There are the following jar files in the flink/lib directory.
> {code:java}
> commons-cli-1.5.0.jar
> flink-cep-1.18.0.jar
> flink-connector-files-1.18.0.jar
> flink-connector-jdbc-3.1.1-1.17.jar
> flink-csv-1.18.0.jar
> flink-dist-1.18.0.jar
> flink-json-1.18.0.jar
> flink-orc-1.18.0.jar
> flink-parquet-1.18.0.jar
> flink-scala_2.12-1.18.0.jar
> flink-sql-avro-1.18.0.jar
> flink-sql-avro-confluent-registry-1.18.0.jar
> flink-sql-connector-elasticsearch7-3.0.0-1.16.jar
> flink-sql-connector-hive-3.1.3_2.12-1.18.0.jar
> flink-sql-connector-kafka-3.0.0-1.17.jar
> flink-sql-orc-1.18.0.jar
> flink-sql-parquet-1.18.0.jar
> flink-table-api-java-uber-1.18.0.jar
> flink-table-api-scala_2.12-1.18.0.jar
> flink-table-api-scala-bridge_2.12-1.18.0.jar
> flink-table-planner_2.12-1.18.0.jar
> flink-table-runtime-1.18.0.jar
> jline-reader-3.23.0.jar
> jline-terminal-3.23.0.jar
> kafka-clients-3.5.1.jar
> log4j-1.2-api-2.17.1.jar
> log4j-api-2.17.1.jar
> log4j-core-2.17.1.jar
> log4j-slf4j-impl-2.17.1.jar
> mysql-connector-j-8.1.0.jar
> paimon-flink-1.18-0.6-20230929.002044-11.jar{code}
> Works correctly in version 1.17.1, but produces the following error in 
> 1.18.0-RC1
>  
> {code:java}
> 2023-09-29 14:04:11,438 ERROR 
> org.apache.flink.table.gateway.service.operation.OperationManager [] - Failed 
> to execute the operation fe1b0a58-b822-49c0-b1ae-ce73d16f92da.
> java.lang.NoSuchFieldError: operands
> at org.apache.calcite.plan.RelOptRule.operand(RelOptRule.java:129) 
> ~[flink-sql-connector-hive-3.1.3_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.<init>(SimplifyFilterConditionRule.scala:36)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule$.<init>(SimplifyFilterConditionRule.scala:94)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule$.<clinit>(SimplifyFilterConditionRule.scala)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.FlinkStreamRuleSets$.<init>(FlinkStreamRuleSets.scala:35)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.FlinkStreamRuleSets$.<clinit>(FlinkStreamRuleSets.scala)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkStreamProgram$.buildProgram(FlinkStreamProgram.scala:57)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeTree$1(StreamCommonSubGraphBasedOptimizer.scala:169)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at scala.Option.getOrElse(Option.scala:189) 
> ~[flink-scala_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:169)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:83)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:87)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:324)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:182)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1277)
>  ~[flink-table-api-java-uber-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:862)
>  ~[flink-table-api-java-uber-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callModifyOperations(OperationExecutor.java:513)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:426)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:207)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_382]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_382]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_382]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_382]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_382]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_382]
> at java.lang.Thread.run(Thread.java:750) [?:1.8.0_382]
> 2023-09-29 14:04:11,505 ERROR 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl [] - Failed to 
> fetchResults.
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation fe1b0a58-b822-49c0-b1ae-ce73d16f92da.
> at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_382]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_382]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_382]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_382]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_382]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_382]
> at java.lang.Thread.run(Thread.java:750) [?:1.8.0_382]
> Caused by: java.lang.NoSuchFieldError: operands
> at org.apache.calcite.plan.RelOptRule.operand(RelOptRule.java:129) 
> ~[flink-sql-connector-hive-3.1.3_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.<init>(SimplifyFilterConditionRule.scala:36)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule$.<init>(SimplifyFilterConditionRule.scala:94)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule$.<clinit>(SimplifyFilterConditionRule.scala)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.FlinkStreamRuleSets$.<init>(FlinkStreamRuleSets.scala:35)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.FlinkStreamRuleSets$.<clinit>(FlinkStreamRuleSets.scala)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkStreamProgram$.buildProgram(FlinkStreamProgram.scala:57)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeTree$1(StreamCommonSubGraphBasedOptimizer.scala:169)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at scala.Option.getOrElse(Option.scala:189) 
> ~[flink-scala_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:169)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:83)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:87)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:324)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:182)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1277)
>  ~[flink-table-api-java-uber-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:862)
>  ~[flink-table-api-java-uber-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callModifyOperations(OperationExecutor.java:513)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:426)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:207)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> ... 7 more
> 2023-09-29 14:04:11,508 ERROR 
> org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler [] 
> - Unhandled exception.
> org.apache.flink.table.gateway.api.utils.SqlGatewayException: 
> org.apache.flink.table.gateway.api.utils.SqlGatewayException: Failed to 
> fetchResults.
> at 
> org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler.handleRequest(FetchResultsHandler.java:85)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler.respondToRequest(AbstractSqlGatewayRestHandler.java:84)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler.respondToRequest(AbstractSqlGatewayRestHandler.java:52)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.runtime.rest.handler.AbstractHandler.respondAsLeader(AbstractHandler.java:196)
>  ~[flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.lambda$channelRead0$0(LeaderRetrievalHandler.java:83)
>  ~[flink-dist-1.18.0.jar:1.18.0]
> at java.util.Optional.ifPresent(Optional.java:159) [?:1.8.0_382]
> at org.apache.flink.util.OptionalConsumer.ifPresent(OptionalConsumer.java:45) 
> [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:80)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:49)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:115)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:94)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:55)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:208)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:69)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>  [flink-dist-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>  [flink-dist-1.18.0.jar:1.18.0]
> at java.lang.Thread.run(Thread.java:750) [?:1.8.0_382]
> Caused by: org.apache.flink.table.gateway.api.utils.SqlGatewayException: 
> Failed to fetchResults.
> at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.fetchResults(SqlGatewayServiceImpl.java:229)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler.handleRequest(FetchResultsHandler.java:83)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> ... 48 more
> Caused by: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation fe1b0a58-b822-49c0-b1ae-ce73d16f92da.
> at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_382]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_382]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_382]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_382]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_382]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_382]
> ... 1 more
> Caused by: java.lang.NoSuchFieldError: operands
> at org.apache.calcite.plan.RelOptRule.operand(RelOptRule.java:129) 
> ~[flink-sql-connector-hive-3.1.3_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.<init>(SimplifyFilterConditionRule.scala:36)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule$.<init>(SimplifyFilterConditionRule.scala:94)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule$.<clinit>(SimplifyFilterConditionRule.scala)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.FlinkStreamRuleSets$.<init>(FlinkStreamRuleSets.scala:35)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.rules.FlinkStreamRuleSets$.<clinit>(FlinkStreamRuleSets.scala)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.program.FlinkStreamProgram$.buildProgram(FlinkStreamProgram.scala:57)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeTree$1(StreamCommonSubGraphBasedOptimizer.scala:169)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at scala.Option.getOrElse(Option.scala:189) 
> ~[flink-scala_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:169)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:83)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:87)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:324)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:182)
>  ~[flink-table-planner_2.12-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1277)
>  ~[flink-table-api-java-uber-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:862)
>  ~[flink-table-api-java-uber-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callModifyOperations(OperationExecutor.java:513)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:426)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:207)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
>  ~[flink-sql-gateway-1.18.0.jar:1.18.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_382]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_382]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_382]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_382]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_382]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_382]
> ... 1 more
> 2023-09-29 14:04:11,517 WARN  org.apache.flink.table.client.cli.CliClient     
>              [] - Could not execute SQL statement.
> org.apache.flink.table.client.gateway.SqlExecutionException: Failed to get response for the operation fe1b0a58-b822-49c0-b1ae-ce73d16f92da.
> at org.apache.flink.table.client.gateway.ExecutorImpl.getFetchResultResponse(ExecutorImpl.java:488) ~[flink-sql-client-1.18.0.jar:1.18.0]
> at org.apache.flink.table.client.gateway.ExecutorImpl.fetchUtilResultsReady(ExecutorImpl.java:448) ~[flink-sql-client-1.18.0.jar:1.18.0]
> at org.apache.flink.table.client.gateway.ExecutorImpl.executeStatement(ExecutorImpl.java:309) ~[flink-sql-client-1.18.0.jar:1.18.0]
> at org.apache.flink.table.client.cli.parser.SqlMultiLineParser.parse(SqlMultiLineParser.java:113) ~[flink-sql-client-1.18.0.jar:1.18.0]
> at org.jline.reader.impl.LineReaderImpl.acceptLine(LineReaderImpl.java:2994) ~[jline-reader-3.23.0.jar:?]
> at org.jline.reader.impl.LineReaderImpl$1.apply(LineReaderImpl.java:3812) ~[jline-reader-3.23.0.jar:?]
> at org.jline.reader.impl.LineReaderImpl.readLine(LineReaderImpl.java:689) ~[jline-reader-3.23.0.jar:?]
> at org.apache.flink.table.client.cli.CliClient.getAndExecuteStatements(CliClient.java:194) [flink-sql-client-1.18.0.jar:1.18.0]
> at org.apache.flink.table.client.cli.CliClient.executeFile(CliClient.java:243) [flink-sql-client-1.18.0.jar:1.18.0]
> at org.apache.flink.table.client.cli.CliClient.executeInNonInteractiveMode(CliClient.java:131) [flink-sql-client-1.18.0.jar:1.18.0]
> at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:171) [flink-sql-client-1.18.0.jar:1.18.0]
> at org.apache.flink.table.client.SqlClient.start(SqlClient.java:118) [flink-sql-client-1.18.0.jar:1.18.0]
> at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:228) [flink-sql-client-1.18.0.jar:1.18.0]
> at org.apache.flink.table.client.SqlClient.main(SqlClient.java:179) [flink-sql-client-1.18.0.jar:1.18.0]
> Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
> org.apache.flink.table.gateway.api.utils.SqlGatewayException: org.apache.flink.table.gateway.api.utils.SqlGatewayException: Failed to fetchResults.
> at org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler.handleRequest(FetchResultsHandler.java:85)
> at org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler.respondToRequest(AbstractSqlGatewayRestHandler.java:84)
> at org.apache.flink.table.gateway.rest.handler.AbstractSqlGatewayRestHandler.respondToRequest(AbstractSqlGatewayRestHandler.java:52)
> at org.apache.flink.runtime.rest.handler.AbstractHandler.respondAsLeader(AbstractHandler.java:196)
> at org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.lambda$channelRead0$0(LeaderRetrievalHandler.java:83)
> at java.util.Optional.ifPresent(Optional.java:159)
> at org.apache.flink.util.OptionalConsumer.ifPresent(OptionalConsumer.java:45)
> at org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:80)
> at org.apache.flink.runtime.rest.handler.LeaderRetrievalHandler.channelRead0(LeaderRetrievalHandler.java:49)
> at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
> at org.apache.flink.runtime.rest.handler.router.RouterHandler.routed(RouterHandler.java:115)
> at org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:94)
> at org.apache.flink.runtime.rest.handler.router.RouterHandler.channelRead0(RouterHandler.java:55)
> at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
> at org.apache.flink.shaded.netty4.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
> at org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:208)
> at org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:69)
> at org.apache.flink.shaded.netty4.io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
> at org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
> at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
> at org.apache.flink.shaded.netty4.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
> at org.apache.flink.shaded.netty4.io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
> at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
> at org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
> at org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
> at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
> at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
> at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
> at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
> at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
> at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
> at org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> at java.lang.Thread.run(Thread.java:750)
> Caused by: org.apache.flink.table.gateway.api.utils.SqlGatewayException: Failed to fetchResults.
> at org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.fetchResults(SqlGatewayServiceImpl.java:229)
> at org.apache.flink.table.gateway.rest.handler.statement.FetchResultsHandler.handleRequest(FetchResultsHandler.java:83)
> ... 48 more
> Caused by: org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to execute the operation fe1b0a58-b822-49c0-b1ae-ce73d16f92da.
> at org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> at org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> ... 1 more
> Caused by: java.lang.NoSuchFieldError: operands
> at org.apache.calcite.plan.RelOptRule.operand(RelOptRule.java:129)
> at org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.<init>(SimplifyFilterConditionRule.scala:36)
> at org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule$.<init>(SimplifyFilterConditionRule.scala:94)
> at org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule$.<clinit>(SimplifyFilterConditionRule.scala)
> at org.apache.flink.table.planner.plan.rules.FlinkStreamRuleSets$.<init>(FlinkStreamRuleSets.scala:35)
> at org.apache.flink.table.planner.plan.rules.FlinkStreamRuleSets$.<clinit>(FlinkStreamRuleSets.scala)
> at org.apache.flink.table.planner.plan.optimize.program.FlinkStreamProgram$.buildProgram(FlinkStreamProgram.scala:57)
> at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.$anonfun$optimizeTree$1(StreamCommonSubGraphBasedOptimizer.scala:169)
> at scala.Option.getOrElse(Option.scala:189)
> at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:169)
> at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:83)
> at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:87)
> at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:324)
> at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:182)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1277)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:862)
> at org.apache.flink.table.gateway.service.operation.OperationExecutor.callModifyOperations(OperationExecutor.java:513)
> at org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:426)
> at org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:207)
> at org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> at org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> at org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> ... 7 more
>  
> End of exception on server side>]
> at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:646) ~[flink-dist-1.18.0.jar:1.18.0]
> at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$6(RestClient.java:626) ~[flink-dist-1.18.0.jar:1.18.0]
> at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:966) ~[?:1.8.0_382]
> at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940) ~[?:1.8.0_382]
> at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456) ~[?:1.8.0_382]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_382]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_382]
> at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_382]
> 2023-09-29 14:04:11,528 INFO  org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog  [] - Catalog mysql_bnpmp closing
> 2023-09-29 14:04:11,528 INFO  org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog  [] - Catalog mysql_service1 closing
> 2023-09-29 14:04:11,528 INFO  org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog  [] - Catalog svc1_mysql_test closing
> 2023-09-29 14:04:11,528 INFO  org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog  [] - Catalog bnpmp_mysql_prod closing
> 2023-09-29 14:04:11,528 INFO  org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog  [] - Catalog mysql_test closing
> 2023-09-29 14:04:11,528 INFO  org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog  [] - Catalog bnpmp_mysql_test closing
> 2023-09-29 14:04:11,528 INFO  org.apache.flink.connector.jdbc.catalog.AbstractJdbcCatalog  [] - Catalog svc1_mysql_prod closing
> 2023-09-29 14:04:11,536 INFO  org.apache.flink.table.catalog.hive.HiveCatalog              [] - Close connection to Hive metastore
> 2023-09-29 14:04:11,546 INFO  org.apache.flink.table.gateway.rest.SqlGatewayRestEndpoint   [] - Shutting down rest endpoint.
> 2023-09-29 14:04:11,555 INFO  org.apache.flink.table.gateway.rest.SqlGatewayRestEndpoint   [] - Shut down complete.
> 2023-09-29 14:04:12,547 INFO  org.apache.flink.table.gateway.rest.SqlGatewayRestEndpoint   [] - Shutting down rest endpoint.
> {code}
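> The root {{NoSuchFieldError: operands}} above comes from two copies of the same Calcite class being compiled against differently-relocated Guava types. A minimal stand-alone sketch of the mechanism (class names here are hypothetical stand-ins, not the real Calcite/Flink classes; the real relocation prefixes are {{org.apache.flink.calcite.shaded}} and {{org.apache.flink.hive.shaded}}):
> {code:java}
> import java.lang.reflect.Field;
>
> public class ShadingMismatchDemo {
>     // Two stand-ins for the same Guava class under different
>     // Maven Shade relocations (hypothetical names).
>     static class CalciteShadedImmutableList {}
>     static class HiveShadedImmutableList {}
>
>     // Stand-in for RelOptRuleOperandChildren as it ships in
>     // flink-table-planner: the 'operands' field exists, but its
>     // descriptor uses the planner's relocation.
>     static class OperandChildren {
>         CalciteShadedImmutableList operands = new CalciteShadedImmutableList();
>     }
>
>     public static void main(String[] args) throws Exception {
>         Field f = OperandChildren.class.getDeclaredField("operands");
>         // JVM field resolution (JVMS 5.4.3.2) matches name AND descriptor.
>         // Bytecode compiled against the hive relocation expects a field
>         // typed HiveShadedImmutableList, which this descriptor is not,
>         // so linking such code throws java.lang.NoSuchFieldError: operands.
>         System.out.println("descriptor matches hive relocation: "
>                 + (f.getType() == HiveShadedImmutableList.class));
>     }
> }
> {code}
> Running it prints {{descriptor matches hive relocation: false}}: the field is present by name, yet code linked against the other relocation still fails, which is why keeping {{flink-table-planner-loader}} (with its isolated classloader) instead of swapping in {{flink-table-planner}} avoids the clash.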



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
