imzhouhao opened a new issue, #6115:
URL: https://github.com/apache/paimon/issues/6115

   ### Search before asking
   
   - [x] I searched in the [issues](https://github.com/apache/paimon/issues) 
and found nothing similar.
   
   
   ### Paimon version
   
   Paimon 1.2.0. The Flink job fails with the following exception:
   2025-08-21 16:25:20,093 INFO  [flink-akka.actor.default-dispatcher-67] 
org.apache.flink.runtime.executiongraph.ExecutionGraph       [] - Source: 
Custom Source -> Map -> *anonymous_datastream_source$1*[1] -> Calc[2] -> Map -> 
Writer(write-only) : incremental_dw_kafka2paimon_testfile1 (49/400) 
(8db88e03d370372017ba62c9f04e0d82_cbc357ccb763df2852fee8c4fc7d55f2_48_0) 
switched from RUNNING to FAILED on 
container_e33_1749614779047_5381264_01_000022 @ yg-data-hdp-dn-rtyarn8724.mt 
(dataPort=17135).
   java.io.IOException: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RetriableException):
 Acquire lock failed, retry later. Id: 1343411, inode tracing: 
paimon/default.db/incremental_dw_kafka2paimon_testfile1/dt=20250821/hour=16/ctime=2025082116/bucket-0/data-824849b4-e55e-464c-8315-0b9173f06d1f-3.orc
        at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.fromINodeId(FSDirectory.java:956)
        at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:920)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6820)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:1087)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:1193)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:713)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1008)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2797)
   
        at 
org.apache.paimon.flink.sink.RowDataStoreWriteOperator.processElement(RowDataStoreWriteOperator.java:144)
 ~[paimon-flink-1.16-1.2-SNAPSHOT.jar:1.2-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:57)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:57)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at StreamExecCalc$37.processElement_split3(Unknown Source) ~[?:?]
        at StreamExecCalc$37.processElement(Unknown Source) ~[?:?]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:57)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.table.runtime.operators.source.InputConversionOperator.processElement(InputConversionOperator.java:128)
 ~[flink-table-runtime-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:57)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:57)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:56)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:29)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollectWithTimestamp(StreamSourceContexts.java:423)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collectWithTimestamp(StreamSourceContexts.java:528)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$SwitchingOnClose.collectWithTimestamp(StreamSourceContexts.java:108)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:421)
 ~[?:?]
        at 
org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:235)
 ~[?:?]
        at 
org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.runFetchLoop(Kafka010Fetcher.java:185)
 ~[?:?]
        at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.runWithPartitionDiscovery(FlinkKafkaConsumerBase.java:798)
 ~[?:?]
        at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:790)
 ~[?:?]
        at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:67) 
~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
        at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:333)
 ~[flink-dist-1.16.1-mt001-SNAPSHOT.jar:1.16.1-mt001-SNAPSHOT]
   Caused by: org.apache.hadoop.ipc.RemoteException: Acquire lock failed, retry 
later. Id: 1343411, inode tracing: 
paimon/default.db/incremental_dw_kafka2paimon_testfile1/dt=20250821/hour=16/ctime=2025082116/bucket-0/data-824849b4-e55e-464c-8315-0b9173f06d1f-3.orc
        at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.fromINodeId(FSDirectory.java:956)
        at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:920)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6820)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updatePipeline(NameNodeRpcServer.java:1087)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updatePipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:1193)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:713)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:975)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1008)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2797)
   
        at org.apache.hadoop.ipc.Client.call(Client.java:1603) 
~[flink-shaded-hadoop-2-uber-2.7.1-mt-1.0.41-15.0.jar:2.7.1-mt-1.0.41-15.0]
        at org.apache.hadoop.ipc.Client.call(Client.java:1524) 
~[flink-shaded-hadoop-2-uber-2.7.1-mt-1.0.41-15.0.jar:2.7.1-mt-1.0.41-15.0]
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
 ~[flink-shaded-hadoop-2-uber-2.7.1-mt-1.0.41-15.0.jar:2.7.1-mt-1.0.41-15.0]
        at com.sun.proxy.$Proxy38.updatePipeline(Unknown Source) ~[?:?]
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updatePipeline(ClientNamenodeProtocolTranslatorPB.java:1096)
 ~[flink-shaded-hadoop-2-uber-2.7.1-mt-1.0.41-15.0.jar:2.7.1-mt-1.0.41-15.0]
        at sun.reflect.GeneratedMethodAccessor55.invoke(Unknown Source) ~[?:?]
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_312]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_312]
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
 ~[flink-shaded-hadoop-2-uber-2.7.1-mt-1.0.41-15.0.jar:2.7.1-mt-1.0.41-15.0]
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
 ~[flink-shaded-hadoop-2-uber-2.7.1-mt-1.0.41-15.0.jar:2.7.1-mt-1.0.41-15.0]
        at com.sun.proxy.$Proxy39.updatePipeline(Unknown Source) ~[?:?]
        at sun.reflect.GeneratedMethodAccessor55.invoke(Unknown Source) ~[?:?]
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_312]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_312]
        at 
org.apache.hadoop.hdfs.RpcResponseHandler.invoke(RpcResponseHandler.java:55) 
~[flink-shaded-hadoop-2-uber-2.7.1-mt-1.0.41-15.0.jar:2.7.1-mt-1.0.41-15.0]
        at com.sun.proxy.$Proxy39.updatePipeline(Unknown Source) ~[?:?]
        at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1616)
 ~[flink-shaded-hadoop-2-uber-2.7.1-mt-1.0.41-15.0.jar:2.7.1-mt-1.0.41-15.0]
        at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1249)
 ~[flink-shaded-hadoop-2-uber-2.7.1-mt-1.0.41-15.0.jar:2.7.1-mt-1.0.41-15.0]
        at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:741)
 ~[flink-shaded-hadoop-2-uber-2.7.1-mt-1.0.41-15.0.jar:2.7.1-mt-1.0.41-15.0]
   
   
   
   We searched the HDFS NameNode audit log and found delete commands for this file. The relevant audit-log entries are as follows:
   
   2025-08-21 16:25:20,056 INFO FSNamesystem.audit: allowed=true   
ugi=hadoop-rt (auth:TOKEN) via 
hadoop-launcher/hlsc-data-rt-launcher10...@sankuai.com (auth:TOKEN)      
ip=/10.224.185.36       nsid=hadoop-belugatest  cmd=delete      
src=/paimon/default.db/incremental_dw_kafka2paimon_testfile1/dt=20250821/hour=16/ctime=2025082116/bucket-0/data-824849b4-e55e-464c-8315-0b9173f06d1f-3.orc
      dst=null        perm=null       proto=rpc       
clientName=DFSClient_e33_1749614779047_5381264_01_000022_279968518_133  
clientid=5a980c05f69d4de6b0260ffbb265f1ef       callid=135      
rTime=1755764720055     qTime=0 pTime=136044    priority=3      appid=None
   2025-08-21 16:25:20,058 INFO FSNamesystem.audit: allowed=true   
ugi=hadoop-rt (auth:TOKEN) via 
hadoop-launcher/hlsc-data-rt-launcher10...@sankuai.com (auth:TOKEN)      
ip=/10.224.185.36       nsid=hadoop-belugatest  cmd=delete      
src=/paimon/default.db/incremental_dw_kafka2paimon_testfile1/dt=20250821/hour=16/ctime=2025082116/bucket-0/data-824849b4-e55e-464c-8315-0b9173f06d1f-3.orc
      dst=null        perm=null       proto=rpc       
clientName=DFSClient_e33_1749614779047_5381264_01_000022_279968518_133  
clientid=5a980c05f69d4de6b0260ffbb265f1ef       callid=136      
rTime=1755764720058     qTime=0 pTime=45616     priority=3      appid=None
   2025-08-21 16:25:20,059 INFO FSNamesystem.audit: allowed=true   
ugi=hadoop-rt (auth:TOKEN) via 
hadoop-launcher/hlsc-data-rt-launcher10...@sankuai.com (auth:TOKEN)      
ip=/10.224.185.36       nsid=hadoop-belugatest  cmd=getfileinfo 
src=/paimon/default.db/incremental_dw_kafka2paimon_testfile1/dt=20250821/hour=16/ctime=2025082116/bucket-0/data-824849b4-e55e-464c-8315-0b9173f06d1f-3.orc
      dst=null        perm=null       proto=rpc       
clientName=DFSClient_e33_1749614779047_5381264_01_000022_279968518_133  
clientid=5a980c05f69d4de6b0260ffbb265f1ef       callid=137      
rTime=1755764720059     qTime=0 pTime=47582     priority=3      appid=None
   2025-08-21 16:25:20,064 INFO FSNamesystem.audit: allowed=true   
ugi=hadoop-rt (auth:TOKEN) via 
hadoop-launcher/hlsc-data-rt-launcher10...@sankuai.com (auth:TOKEN)      
ip=/10.224.185.36       nsid=hadoop-belugatest  cmd=delete      
src=/paimon/default.db/incremental_dw_kafka2paimon_testfile1/dt=20250821/hour=16/ctime=2025082116/bucket-0/data-824849b4-e55e-464c-8315-0b9173f06d1f-3.orc
      dst=null        perm=null       proto=rpc       
clientName=DFSClient_e33_1749614779047_5381264_01_000022_279968518_133  
clientid=5a980c05f69d4de6b0260ffbb265f1ef       callid=138      
rTime=1755764720064     qTime=0 pTime=39257     priority=3      appid=None
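   
   The delete commands above appear to come from the same container as the failing writer (the clientName `DFSClient_e33_1749614779047_5381264_01_000022_...` matches container_e33_1749614779047_5381264_01_000022 in the exception). As an additional cross-check (a hedged sketch we have not run verbatim: it assumes the Paimon `$files` system table is queryable through the same `paimon_catalog` used by the job), one could look up whether the file is still referenced by the table's current snapshot:
   
   ```sql
   -- Sketch: check whether the data file from the audit log is still referenced
   -- by the table metadata. `partition` is quoted because it is a reserved word
   -- in Flink SQL; the LIKE pattern matches on the file name only.
   SELECT `partition`, bucket, file_path
   FROM `paimon_catalog`.`default`.`incremental_dw_kafka2paimon_testfile1$files`
   WHERE file_path LIKE '%data-824849b4-e55e-464c-8315-0b9173f06d1f-3.orc';
   ```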
   
   ### Compute Engine
   
   Flink 1.16 on Paimon 1.2.0
   
   ### Minimal reproduce step
   
   My Paimon table definition is as follows:
   CREATE TABLE IF NOT EXISTS `paimon_catalog`.`default`.incremental_dw_kafka2paimon_testfile2 (
     _mt_datetime STRING COMMENT 'ID',
     _mt_servername STRING COMMENT 'ID',
     _mt_appkey STRING COMMENT 'ID',
     _mt_level STRING COMMENT 'ID',
     _mt_thread STRING COMMENT 'ID',
     _mt_action STRING COMMENT 'sequence',
     _mt_message STRING COMMENT '',
     category_type STRING COMMENT '',
     `type` STRING COMMENT 'ID',
     tags STRING COMMENT 'ID',
     kafkatime STRING COMMENT 'sequence',
     details STRING COMMENT '',
     category STRING COMMENT '',
     `value` STRING COMMENT '',
     ts STRING COMMENT '',
     `dt` STRING COMMENT '',
     `hour` STRING COMMENT '',
     `ctime` STRING COMMENT ''
   ) PARTITIONED BY (`dt`, `hour`, `ctime`) WITH (
     'bucket' = '-1',
     'write-mode' = 'none',
     'merge-engine' = 'partial-update',
     'changelog-producer' = 'none',
     'write-only' = 'true',
     'write-buffer-for-append' = 'false',
     'write-buffer-size' = '256mb',
     'file.format' = 'orc',
     'orc.compress' = 'zlib',
     'orc.buffer.size.enforce' = 'true',
     'write.batch-size' = '300',
     'orc.compress.size' = '1048576',
     'target-file-size' = '521MB',
     'orc.stripe.size' = '33554432'
   )
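   
   For completeness, a minimal sketch of the write path (a simplification, not the exact job: the real pipeline is a legacy DataStream Kafka source converted to a table, as the stack trace shows, and the warehouse path, topic, brokers, group and format below are placeholders):
   
   ```sql
   -- Hypothetical Paimon catalog; the warehouse path is inferred from the
   -- /paimon/default.db/... paths in the audit log and may differ.
   CREATE CATALOG paimon_catalog WITH (
     'type' = 'paimon',
     'warehouse' = 'hdfs:///paimon'
   );
   
   -- Placeholder Kafka source with the same schema as the Paimon table above.
   CREATE TEMPORARY TABLE kafka_source (
     _mt_datetime STRING,
     _mt_servername STRING,
     _mt_appkey STRING,
     _mt_level STRING,
     _mt_thread STRING,
     _mt_action STRING,
     _mt_message STRING,
     category_type STRING,
     tags STRING,
     `type` STRING,
     kafkatime STRING,
     details STRING,
     category STRING,
     `value` STRING,
     ts STRING,
     `dt` STRING,
     `hour` STRING,
     `ctime` STRING
   ) WITH (
     'connector' = 'kafka',
     'topic' = '<topic>',
     'properties.bootstrap.servers' = '<brokers>',
     'properties.group.id' = '<group>',
     'scan.startup.mode' = 'latest-offset',
     'format' = 'json'
   );
   
   -- Write into the partitioned, write-only append table.
   INSERT INTO `paimon_catalog`.`default`.incremental_dw_kafka2paimon_testfile2
   SELECT
     _mt_datetime, _mt_servername, _mt_appkey, _mt_level, _mt_thread,
     _mt_action, _mt_message, category_type, `type`, tags, kafkatime,
     details, category, `value`, ts, `dt`, `hour`, `ctime`
   FROM kafka_source;
   ```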
   
   ### What doesn't meet your expectations?
   
   Why does an append scalable table have delete operations on its data files while the writer is still appending to them?
   
   ### Anything else?
   
   _No response_
   
   ### Are you willing to submit a PR?
   
   - [ ] I'm willing to submit a PR!

