Sigal0 opened a new issue, #9978:
URL: https://github.com/apache/seatunnel/issues/9978

   ### Search before asking
   
   - [x] I had searched in the 
[issues](https://github.com/apache/seatunnel/issues?q=is%3Aissue+label%3A%22bug%22)
 and found no similar issues.
   
   
   ### What happened
   
   When I used this config file to synchronize a MySQL table with 450+ columns 
and 200,000+ rows to Hudi, the console reported the following error:
   
   ```log
   org.apache.hudi.exception.HoodieException: org.apache.hudi.exception.HoodieException: org.apache.hudi.exception.HoodieUpsertException: Failed to close UpdateHandle
           at org.apache.hudi.table.action.commit.HoodieMergeHelper.runMerge(HoodieMergeHelper.java:151)
           at org.apache.hudi.table.HoodieTable.runMerge(HoodieTable.java:1099)
           at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpdateInternal(BaseJavaCommitActionExecutor.java:275)
           at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpdate(BaseJavaCommitActionExecutor.java:270)
           at org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpsertPartition(BaseJavaCommitActionExecutor.java:243)
   ```
   
   
   
   ### SeaTunnel Version
   
   version: 2.3.12
   engine: zeta
   
   ### SeaTunnel Config
   
   ```conf
   env {
     job.mode = "BATCH"
     job.name = "CJ_557_24_aws_tdms_id_import11"
     parallelism = 10
     read_limit.bytes_per_second = 4000000
   }

   source {
     Jdbc {
       url = "jdbc:mysql://1.1.1.1:3306/aws?useUnicode=true&allowMultiQueries=true&characterEncoding=utf8&autoReconnect=true&zeroDateTimeBehavior=convertToNull&transformedBitIsBoolean=true&allowPublicKeyRetrieval=true&nullCatalogMeansCurrent=true&serverTimezone=Asia/Shanghai&useSSL=false"
       driver = "com.mysql.cj.jdbc.Driver"
       user = "q1awe"
       password = "11111"
       table_path = "aws.tdms_id_import"
       query = "select * from aws.tdms_id_import"
       partition_column = "id"
       split.size = 8096
     }
   }

   sink {
     Hudi {
       table_dfs_path = "hdfs://bgdc:8020/user/hudi/warehouse"
       conf_files_path = "/opt/module/seatunnel/apache-seatunnel-2.3.12/deploy/hdfs-site.xml"
       table_name = "tdms_id_import_667"
       database = "ods"
       table_type = "COPY_ON_WRITE"
       op_type = "upsert"
       record_key_fields = "id"
       precombine_field = "update_time"
       batch_size = 5000
       insert_shuffle_parallelism = 10
       upsert_shuffle_parallelism = 10
     }
   }
   ```
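   The innermost cause in the error log is an HDFS lease failure (`Holder DFSClient_... does not have any open files`), which typically appears when a file a writer still holds open is deleted or replaced underneath it, for example by a second concurrent writer or a rollback/clean acting on the same COPY_ON_WRITE file group. As a diagnostic only (this is a hypothesis on my part, not a confirmed root cause), a single-writer variant of the job can rule concurrency in or out:

   ```conf
   # Diagnostic sketch (hypothesis: concurrent writers racing on one
   # COPY_ON_WRITE file group trigger the HDFS lease error).
   # Only the parallelism-related options change; everything else stays as above.
   env {
     job.mode = "BATCH"
     parallelism = 1                  # was 10
   }
   sink {
     Hudi {
       # ...same Hudi options as in the config above...
       insert_shuffle_parallelism = 1 # was 10
       upsert_shuffle_parallelism = 1 # was 10
     }
   }
   ```

   If the job succeeds with a single writer, the failure is likely concurrency-related rather than a problem with the table schema or row volume.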
   
   ### Running Command
   
   ```shell
   /opt/module/seatunnel/apache-seatunnel-2.3.12/bin/seatunnel.sh --config 
test1.conf
   ```
   
   ### Error Exception
   
   ```log
   2025-10-23 19:48:19,173 ERROR [o.a.s.c.s.SeaTunnel           ] [main] - 
Fatal Error,
   
   2025-10-23 19:48:19,173 ERROR [o.a.s.c.s.SeaTunnel           ] [main] - 
Please submit bug report in https://github.com/apache/seatunnel/issues
   
   2025-10-23 19:48:19,173 ERROR [o.a.s.c.s.SeaTunnel           ] [main] - 
Reason:SeaTunnel job executed failed
   
   2025-10-23 19:48:19,181 ERROR [o.a.s.c.s.SeaTunnel           ] [main] - 
Exception 
StackTrace:org.apache.seatunnel.core.starter.exception.CommandExecuteException: 
SeaTunnel job executed failed
           at 
org.apache.seatunnel.core.starter.seatunnel.command.ClientExecuteCommand.execute(ClientExecuteCommand.java:228)
           at org.apache.seatunnel.core.starter.SeaTunnel.run(SeaTunnel.java:40)
           at 
org.apache.seatunnel.core.starter.seatunnel.SeaTunnelClient.main(SeaTunnelClient.java:40)
   Caused by: 
org.apache.seatunnel.engine.common.exception.SeaTunnelEngineException: 
java.lang.RuntimeException: java.lang.RuntimeException: table 
zzzzpoc.tdms_id_import sink throw error
           at 
org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:303)
           at 
org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:70)
           at 
org.apache.seatunnel.engine.server.task.SeaTunnelTransformCollector.collect(SeaTunnelTransformCollector.java:39)
           at 
org.apache.seatunnel.engine.server.task.SeaTunnelTransformCollector.collect(SeaTunnelTransformCollector.java:27)
           at 
org.apache.seatunnel.engine.server.task.group.queue.IntermediateBlockingQueue.handleRecord(IntermediateBlockingQueue.java:82)
           at 
org.apache.seatunnel.engine.server.task.group.queue.IntermediateBlockingQueue.collect(IntermediateBlockingQueue.java:56)
           at 
org.apache.seatunnel.engine.server.task.flow.IntermediateQueueFlowLifeCycle.collect(IntermediateQueueFlowLifeCycle.java:51)
           at 
org.apache.seatunnel.engine.server.task.TransformSeaTunnelTask.collect(TransformSeaTunnelTask.java:72)
           at 
org.apache.seatunnel.engine.server.task.SeaTunnelTask.stateProcess(SeaTunnelTask.java:165)
           at 
org.apache.seatunnel.engine.server.task.TransformSeaTunnelTask.call(TransformSeaTunnelTask.java:77)
           at 
org.apache.seatunnel.engine.server.TaskExecutionService$BlockingWorker.run(TaskExecutionService.java:679)
           at 
org.apache.seatunnel.engine.server.TaskExecutionService$NamedTaskWrapper.run(TaskExecutionService.java:1008)
           at 
org.apache.seatunnel.api.tracing.MDCRunnable.run(MDCRunnable.java:43)
           at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
           at java.util.concurrent.FutureTask.run(FutureTask.java:266)
           at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   Caused by: java.lang.RuntimeException: table zzzzpoc.tdms_id_import sink 
throw error
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.subSinkErrorCheck(MultiTableSinkWriter.java:140)
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.write(MultiTableSinkWriter.java:192)
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableSinkWriter.write(MultiTableSinkWriter.java:47)
           at 
org.apache.seatunnel.engine.server.task.flow.SinkFlowLifeCycle.received(SinkFlowLifeCycle.java:269)
           ... 17 more
   Caused by: 
org.apache.seatunnel.connectors.seatunnel.hudi.exception.HudiConnectorException:
 ErrorCode:[COMMON-11], ErrorDescription:[Sink writer operation failed, such as 
(open, close) etc...] - Writing records to Hudi failed.
           at 
org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiRecordWriter.writeRecord(HudiRecordWriter.java:123)
           at 
org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiSinkWriter.write(HudiSinkWriter.java:75)
           at 
org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiSinkWriter.write(HudiSinkWriter.java:40)
           at 
org.apache.seatunnel.api.sink.multitablesink.MultiTableWriterRunnable.run(MultiTableWriterRunnable.java:67)
           ... 6 more
   Caused by: org.apache.hudi.exception.HoodieUpsertException: Error upserting 
bucketType UPDATE for partition :0
           at 
org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpsertPartition(BaseJavaCommitActionExecutor.java:250)
           at 
org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.lambda$execute$0(BaseJavaCommitActionExecutor.java:121)
           at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
           at 
org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.execute(BaseJavaCommitActionExecutor.java:119)
           at 
org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.execute(BaseJavaCommitActionExecutor.java:70)
           at 
org.apache.hudi.table.action.commit.BaseWriteHelper.write(BaseWriteHelper.java:58)
           at 
org.apache.hudi.table.action.commit.JavaUpsertCommitActionExecutor.execute(JavaUpsertCommitActionExecutor.java:46)
           at 
org.apache.hudi.table.HoodieJavaCopyOnWriteTable.upsert(HoodieJavaCopyOnWriteTable.java:100)
           at 
org.apache.hudi.table.HoodieJavaCopyOnWriteTable.upsert(HoodieJavaCopyOnWriteTable.java:84)
           at 
org.apache.hudi.client.HoodieJavaWriteClient.upsert(HoodieJavaWriteClient.java:113)
           at 
org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiRecordWriter.executeWrite(HudiRecordWriter.java:175)
           at 
org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiRecordWriter.flush(HudiRecordWriter.java:157)
           at 
org.apache.seatunnel.connectors.seatunnel.hudi.sink.writer.HudiRecordWriter.writeRecord(HudiRecordWriter.java:120)
           ... 9 more
   Caused by: org.apache.hudi.exception.HoodieException: 
org.apache.hudi.exception.HoodieException: 
org.apache.hudi.exception.HoodieUpsertException: Failed to close UpdateHandle
           at 
org.apache.hudi.table.action.commit.HoodieMergeHelper.runMerge(HoodieMergeHelper.java:151)
           at org.apache.hudi.table.HoodieTable.runMerge(HoodieTable.java:1099)
           at 
org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpdateInternal(BaseJavaCommitActionExecutor.java:275)
           at 
org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpdate(BaseJavaCommitActionExecutor.java:270)
           at 
org.apache.hudi.table.action.commit.BaseJavaCommitActionExecutor.handleUpsertPartition(BaseJavaCommitActionExecutor.java:243)
           ... 21 more
   Caused by: org.apache.hudi.exception.HoodieException: 
org.apache.hudi.exception.HoodieUpsertException: Failed to close UpdateHandle
           at 
org.apache.hudi.common.util.queue.SimpleExecutor.execute(SimpleExecutor.java:75)
           at 
org.apache.hudi.table.action.commit.HoodieMergeHelper.runMerge(HoodieMergeHelper.java:149)
           ... 25 more
   Caused by: org.apache.hudi.exception.HoodieUpsertException: Failed to close 
UpdateHandle
           at 
org.apache.hudi.io.HoodieMergeHandle.close(HoodieMergeHandle.java:455)
           at 
org.apache.hudi.table.action.commit.BaseMergeHelper$UpdateHandler.finish(BaseMergeHelper.java:59)
           at 
org.apache.hudi.table.action.commit.BaseMergeHelper$UpdateHandler.finish(BaseMergeHelper.java:44)
           at 
org.apache.hudi.common.util.queue.SimpleExecutor.execute(SimpleExecutor.java:72)
           ... 26 more
   Caused by: java.io.FileNotFoundException: File does not exist: 
/user/hudi/warehouse/qomolh_ods/tdms_id_import_667/556961e9-5556-44c1-88da-09f9e48d0cd3-0_0-0-0_20251023194812759.parquet
 (inode 577469) Holder DFSClient_NONMAPREDUCE_-1028297201_224 does not have any 
open files.
           at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2840)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:599)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:171)
           at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2719)
           at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
           at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
           at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
           at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
   
           at sun.reflect.GeneratedConstructorAccessor73.newInstance(Unknown 
Source)
           at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
           at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
           at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
           at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
           at 
org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1084)
           at 
org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1866)
           at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1668)
           at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
   Caused by: 
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does 
not exist: 
/user/hudi/warehouse/qomolh_ods/tdms_id_import_667/556961e9-5556-44c1-88da-09f9e48d0cd3-0_0-0-0_20251023194812759.parquet
 (inode 577469) Holder DFSClient_NONMAPREDUCE_-1028297201_224 does not have any 
open files.
           at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2840)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:599)
           at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:171)
           at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2719)
           at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
           at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
           at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
           at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
           at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
           at java.security.AccessController.doPrivileged(Native Method)
           at javax.security.auth.Subject.doAs(Subject.java:422)
           at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
           at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)
   
           at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1562)
           at org.apache.hadoop.ipc.Client.call(Client.java:1508)
           at org.apache.hadoop.ipc.Client.call(Client.java:1405)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
           at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
           at com.sun.proxy.$Proxy38.addBlock(Unknown Source)
           at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:514)
           at sun.reflect.GeneratedMethodAccessor104.invoke(Unknown Source)
           at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
           at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
           at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
           at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
           at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
           at com.sun.proxy.$Proxy39.addBlock(Unknown Source)
           at 
org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1081)
           ... 3 more
   
           at 
org.apache.seatunnel.core.starter.seatunnel.command.ClientExecuteCommand.execute(ClientExecuteCommand.java:220)
           ... 2 more
   
   2025-10-23 19:48:19,182 ERROR [o.a.s.c.s.SeaTunnel           ] [main] -
   
===============================================================================
   
   
   
   ```
   
   ### Zeta or Flink or Spark Version
   
   zeta
   
   ### Java or Scala Version
   
   java: 8
   scala: 2.12.10
   
   ### Screenshots
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [x] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

