shashihoushengqia opened a new issue, #9374:
URL: https://github.com/apache/gravitino/issues/9374

   ### Version
   
   main branch
   
   ### Describe what's wrong
   
   When querying MySQL through Apache Spark, I found that a column of MySQL 
type DATETIME cannot be used in a filter condition; the query immediately 
fails with an error. Filtering on other columns works normally.
   
   ### Error message and/or stacktrace
   
   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
stage 37.0 failed 4 times, most recent failure: Lost task 0.3 in stage 37.0 
(TID 49) (10.42.1.130 executor 10): java.sql.SQLSyntaxErrorException: You have 
an error in your SQL syntax; check the manual that corresponds to your MySQL 
server version for the right syntax to use near ':07:43)' at line 1
                at 
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:121)
                at 
com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
                at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:916)
                at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeQuery(ClientPreparedStatement.java:972)
                at 
org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:304)
                at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
                at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
                at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
                at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
                at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
                at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
                at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
                at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
                at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
                at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
                at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
                at 
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
                at 
org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
                at org.apache.spark.scheduler.Task.run(Task.scala:141)
                at 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:621)
                at 
org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
                at 
org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
                at 
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
                at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:624)
                at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
                at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
                at java.base/java.lang.Thread.run(Unknown Source)
   
        Driver stacktrace:
                at 
org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2898)
                at 
org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2834)
                at 
org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2833)
                at 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
                at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
                at 
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
                at 
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2833)
                at 
org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1253)
                at 
org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1253)
                at scala.Option.foreach(Option.scala:407)
                at 
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1253)
                at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3102)
                at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3036)
                at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3025)
                at 
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
                at 
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:995)
                at org.apache.spark.SparkContext.runJob(SparkContext.scala:2393)
                at org.apache.spark.SparkContext.runJob(SparkContext.scala:2414)
                at org.apache.spark.SparkContext.runJob(SparkContext.scala:2433)
                at org.apache.spark.rdd.RDD.collectPartition$1(RDD.scala:1064)
                at 
org.apache.spark.rdd.RDD.$anonfun$toLocalIterator$3(RDD.scala:1066)
                at 
org.apache.spark.rdd.RDD.$anonfun$toLocalIterator$3$adapted(RDD.scala:1066)
                at 
scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
                at 
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
                at 
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491)
                at 
scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
                at 
org.apache.kyuubi.operation.IterableFetchIterator.hasNext(FetchIterator.scala:97)
                at 
scala.collection.Iterator$SliceIterator.hasNext(Iterator.scala:268)
                at scala.collection.Iterator.toStream(Iterator.scala:1417)
                at scala.collection.Iterator.toStream$(Iterator.scala:1416)
                at 
scala.collection.AbstractIterator.toStream(Iterator.scala:1431)
                at 
scala.collection.TraversableOnce.toSeq(TraversableOnce.scala:354)
                at 
scala.collection.TraversableOnce.toSeq$(TraversableOnce.scala:354)
                at scala.collection.AbstractIterator.toSeq(Iterator.scala:1431)
                at 
org.apache.kyuubi.engine.spark.operation.SparkOperation.$anonfun$getNextRowSetInternal$1(SparkOperation.scala:285)
                at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
                at 
org.apache.kyuubi.engine.spark.operation.SparkOperation.$anonfun$withLocalProperties$1(SparkOperation.scala:174)
                at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
                at 
jdk.internal.reflect.GeneratedMethodAccessor247.invoke(Unknown Source)
                at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown 
Source)
                at java.base/java.lang.reflect.Method.invoke(Unknown Source)
                at 
org.apache.kyuubi.util.reflect.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:59)
                at 
org.apache.kyuubi.util.reflect.DynMethods$BoundMethod.invokeChecked(DynMethods.java:171)
                at 
org.apache.spark.sql.execution.SparkSQLExecutionHelper$.withSQLConfPropagated(SparkSQLExecutionHelper.scala:37)
                at 
org.apache.kyuubi.engine.spark.operation.SparkOperation.withLocalProperties(SparkOperation.scala:158)
                at 
org.apache.kyuubi.engine.spark.operation.SparkOperation.getNextRowSetInternal(SparkOperation.scala:262)
                at 
org.apache.kyuubi.operation.AbstractOperation.$anonfun$getNextRowSet$1(AbstractOperation.scala:203)
                at org.apache.kyuubi.Utils$.withLockRequired(Utils.scala:392)
                at 
org.apache.kyuubi.operation.AbstractOperation.withLockRequired(AbstractOperation.scala:52)
                at 
org.apache.kyuubi.operation.AbstractOperation.getNextRowSet(AbstractOperation.scala:203)
                at 
org.apache.kyuubi.operation.OperationManager.getOperationNextRowSet(OperationManager.scala:140)
                at 
org.apache.kyuubi.session.AbstractSession.fetchResults(AbstractSession.scala:243)
                at 
org.apache.kyuubi.service.AbstractBackendService.fetchResults(AbstractBackendService.scala:213)
                at 
org.apache.kyuubi.service.TFrontendService.FetchResults(TFrontendService.scala:530)
                at 
org.apache.kyuubi.shaded.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:2020)
                at 
org.apache.kyuubi.shaded.hive.service.rpc.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:2000)
                at 
org.apache.kyuubi.shaded.thrift.ProcessFunction.process(ProcessFunction.java:38)
                at 
org.apache.kyuubi.shaded.thrift.TBaseProcessor.process(TBaseProcessor.java:38)
                at 
org.apache.kyuubi.service.authentication.TSetIpAddressProcessor.process(TSetIpAddressProcessor.scala:35)
                at 
org.apache.kyuubi.shaded.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:250)
                at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
                at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
                at java.base/java.lang.Thread.run(Unknown Source)
        Caused by: java.sql.SQLSyntaxErrorException: You have an error in your 
SQL syntax; check the manual that corresponds to your MySQL server version for 
the right syntax to use near ':07:43)' at line 1
                at 
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:121)
                at 
com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
                at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:916)
                at 
com.mysql.cj.jdbc.ClientPreparedStatement.executeQuery(ClientPreparedStatement.java:972)
                at 
org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD.compute(JDBCRDD.scala:304)
                at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
                at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
                at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
                at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
                at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
                at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
                at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
                at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
                at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
                at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
                at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
                at 
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
                at 
org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
                at org.apache.spark.scheduler.Task.run(Task.scala:141)
                at 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:621)
                at 
org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
                at 
org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
                at 
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
                at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:624)
                ... 3 more
   org.apache.kyuubi.KyuubiSQLException:org.apache.spark.SparkException: Job 
aborted due to stage failure: Task 0 in stage 37.0 failed 4 times, most recent 
failure: Lost task 0.3 in stage 37.0 (TID 49) (10.42.1.130 executor 10): 
java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check 
the manual that corresponds to your MySQL server version for the right syntax 
to use near ':07:43)' at line 1
   
   ### How to reproduce
   
   Executed in MySQL:
   create table db_dev.tab_test01
   (
   stu_id bigint comment 'id' primary key auto_increment,
   stu_name varchar(255) comment 'name',
   stu_sex varchar(1) comment 'sex',
   create_time datetime default current_timestamp comment 'create time',
   update_time datetime default current_timestamp on update current_timestamp 
comment 'update time'
   ) comment 'test table 01'
   ;
   insert into db_dev.tab_test01 (stu_name, stu_sex) values ('xu', '1'), 
('张三', '2'), ('李四', '3');
   
   Execute the first Spark SQL statement:
   select * from mysql_catalog.db_dev.tab_test01 where update_time >= 
'2025-12-04 09:07:43';
   This reports the error above.
   
   Execute the next Spark SQL statement:
   select * from mysql_catalog.db_dev.tab_test01 where stu_id >= 2;
   This executes normally.
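   The error text "near ':07:43)'" suggests the pushed-down filter reaches 
MySQL with the DATETIME literal unquoted, e.g. WHERE (update_time >= 
2025-12-04 09:07:43), so MySQL stops parsing at the first colon. A minimal 
sketch of the kind of literal quoting the filter-to-SQL conversion would 
need (class and method names here are hypothetical, not Gravitino's actual 
API):

```java
import java.sql.Timestamp;

public class FilterLiteralQuoting {
    // Hypothetical helper: render a filter value as a SQL literal.
    // Timestamp and string values must be single-quoted; numbers must not.
    static String toSqlLiteral(Object value) {
        if (value instanceof Number) {
            return value.toString();          // e.g. 2 -> 2
        }
        // Quote and escape embedded single quotes.
        // Timestamp.toString() yields e.g. 2025-12-04 09:07:43.0
        return "'" + value.toString().replace("'", "''") + "'";
    }

    // Hypothetical rendering of a ">=" pushdown predicate.
    static String greaterOrEqual(String column, Object value) {
        return "(" + column + " >= " + toSqlLiteral(value) + ")";
    }

    public static void main(String[] args) {
        // Without the quoting above, this predicate would render as
        // (update_time >= 2025-12-04 09:07:43), which reproduces the
        // reported MySQL syntax error at ':07:43)'.
        System.out.println(greaterOrEqual(
                "update_time", Timestamp.valueOf("2025-12-04 09:07:43")));
        System.out.println(greaterOrEqual("stu_id", 2L));
    }
}
```

   This prints (update_time >= '2025-12-04 09:07:43.0') and (stu_id >= 2); 
the numeric filter needs no quoting, which would explain why the stu_id 
query succeeds while the update_time query fails.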
   
   
   ### Additional context
   
   _No response_


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
