MrAladdin opened a new issue, #11652:
URL: https://github.com/apache/hudi/issues/11652

   **Describe the problem you faced**
   
   1. "The metadata table encountered a conflict during compacting (Exception 
in running table services on metadata table) with the metadata construction of 
the deltacommit in the Hudi table, causing a rollback. It is an occasional 
exception that occurs under certain conditions. My other table's write program 
did not experience this exception, and both programs have exactly the same 
configuration parameters."
   2. "After the rollback, the metadata table has never had a compact schedule 
executed."
   3. A deltacommit remains in the REQUESTED state, and the program has not 
automatically repaired it. Currently, I hesitate to try repairing it with 
hudi-cli, fearing that the .hoodie/metadata/record_index might be automatically 
deleted and rebuilt by a running program. (I suspect that this might be 
triggered by some conditions leading to its automatic deletion and 
reconstruction due to data inconsistency).
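
   For reference, this is roughly how I verify that the instant is still pending on the metadata-table timeline. This is a minimal sketch against the 0.15.0 API as I understand it (`HadoopStorageConfiguration` and the `setConf` signature differ in older releases, and the `/xxx/` path prefix is masked as elsewhere in this report):

```scala
import org.apache.hudi.common.table.HoodieTableMetaClient
import org.apache.hudi.storage.hadoop.HadoopStorageConfiguration

// Base path of the *metadata* table, i.e. <table base path>/.hoodie/metadata
val mdtPath = "/xxx/lakehouse/social/dwd_social_kbi_beauty_v1_hudi15/.hoodie/metadata"

val metaClient = HoodieTableMetaClient.builder()
  .setConf(new HadoopStorageConfiguration(spark.sparkContext.hadoopConfiguration))
  .setBasePath(mdtPath)
  .build()

// Print every instant still in REQUESTED or INFLIGHT state on the metadata timeline.
metaClient.getActiveTimeline
  .filterInflightsAndRequested()
  .getInstants
  .forEach(i => println(s"${i.getTimestamp} ${i.getAction} ${i.getState}"))
```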
   
   **Expected behavior**
   
![image](https://github.com/user-attachments/assets/2d38c3e5-b42d-49a5-8e05-a7679573c8d0)
   
   **Environment Description**
   
   * Hudi version : 0.15.0
   
   * Spark version : 3.5
   
   * Hive version : 3.1.2
   
   * Hadoop version : 3.1.3
   
   * Storage (HDFS/S3/GCS..) : HDFS
   
   * Running on Docker? (yes/no) : no
   
   
   **Additional context**
   
```scala
.writeStream
  .format("hudi")
  .option("hoodie.table.base.file.format", "PARQUET")
  .option("hoodie.allow.empty.commit", "true")
  .option("hoodie.datasource.write.drop.partition.columns", "false")
  .option("hoodie.table.services.enabled", "true")
  .option("hoodie.datasource.write.streaming.checkpoint.identifier", "lakehouse-dwd-social-kbi-beauty-v1-writer-1")
  .option(PRECOMBINE_FIELD.key(), "date_kbiUdate")
  .option(RECORDKEY_FIELD.key(), "records_key")
  .option(PARTITIONPATH_FIELD.key(), "partition_index_date")
  .option(DataSourceWriteOptions.OPERATION.key(), DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
  .option(DataSourceWriteOptions.TABLE_TYPE.key(), DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL)
  .option("hoodie.combine.before.upsert", "true")
  .option("hoodie.datasource.write.payload.class", "org.apache.hudi.common.model.OverwriteWithLatestAvroPayload")
  .option("hoodie.write.markers.type", "DIRECT")
  .option("hoodie.embed.timeline.server", "true")
  .option("hoodie.embed.timeline.server.async", "false")
  .option("hoodie.embed.timeline.server.gzip", "true")
  .option("hoodie.embed.timeline.server.reuse.enabled", "false")
  .option("hoodie.filesystem.view.incr.timeline.sync.enable", "false")
  .option("hoodie.client.heartbeat.interval_in_ms", "60000")
  .option("hoodie.client.heartbeat.tolerable.misses", "10")
  .option("hoodie.write.num.retries.on.conflict.failures", "6")
  .option("hoodie.filesystem.view.remote.retry.enable", "true")
  .option("hoodie.filesystem.view.remote.timeout.secs", "1200")
  .option("hoodie.filesystem.view.remote.retry.initial_interval_ms", "500")
  .option("hoodie.filesystem.view.remote.retry.max_numbers", "8")
  .option("hoodie.filesystem.view.remote.retry.max_interval_ms", "8000")
  .option("hoodie.filesystem.operation.retry.enable", "true")
  .option("hoodie.filesystem.operation.retry.max_interval_ms", "8000")
  .option("hoodie.filesystem.operation.retry.max_numbers", "8")
  .option("hoodie.filesystem.operation.retry.initial_interval_ms", "500")
  .option("hoodie.schema.cache.enable", "true")
  .option("hoodie.datasource.write.streaming.ignore.failed.batch", "false")
  .option("hoodie.datasource.write.streaming.retry.count", "6")
  .option("hoodie.datasource.write.streaming.retry.interval.ms", "3000")
  .option("hoodie.metadata.enable", "true")
  .option("hoodie.metadata.index.async", "false")
  .option("hoodie.metadata.index.check.timeout.seconds", "900")
  .option("hoodie.auto.adjust.lock.configs", "true")
  .option("hoodie.metadata.optimized.log.blocks.scan.enable", "true")
  .option("hoodie.metadata.metrics.enable", "false")
  .option("hoodie.metadata.compact.max.delta.commits", "20")
  .option("hoodie.metadata.max.reader.memory", "3221225472")
  .option("hoodie.metadata.max.reader.buffer.size", "1073741824")
  .option("hoodie.metadata.record.index.enable", "true")
  .option("hoodie.index.type", "RECORD_INDEX")
  .option("hoodie.record.index.use.caching", "true")
  .option("hoodie.record.index.input.storage.level", "MEMORY_AND_DISK_SER")
  .option("hoodie.metadata.max.init.parallelism", "100000")
  .option("hoodie.metadata.record.index.min.filegroup.count", "720")
  .option("hoodie.metadata.record.index.max.filegroup.count", "10000")
  .option("hoodie.metadata.record.index.max.filegroup.size", "1073741824")
  .option("hoodie.metadata.auto.initialize", "true")
  .option("hoodie.metadata.record.index.growth.factor", "2.0")
  .option("hoodie.metadata.max.logfile.size", "2147483648")
  .option("hoodie.metadata.max.deltacommits.when.pending", "1000")
  .option("hoodie.parquet.field_id.write.enabled", "true")
  .option("hoodie.copyonwrite.insert.auto.split", "true")
  .option("hoodie.record.size.estimation.threshold", "1.0")
  .option("hoodie.parquet.block.size", "536870912")
  .option("hoodie.parquet.max.file.size", "536870912")
  .option("hoodie.parquet.small.file.limit", "209715200")
  .option("hoodie.logfile.max.size", "536870912")
  .option("hoodie.logfile.data.block.max.size", "536870912")
  .option("hoodie.logfile.to.parquet.compression.ratio", "0.35")
  .option("hoodie.merge.small.file.group.candidates.limit", "10")
  .option("hoodie.keep.max.commits", "100")
  .option("hoodie.keep.min.commits", "80")
  .option("hoodie.commits.archival.batch", "20")
  .option("hoodie.archive.automatic", "true")
  .option("hoodie.archive.async", "true")
  .option("hoodie.archive.beyond.savepoint", "true")
  .option("hoodie.fail.on.timeline.archiving", "true")
  .option("hoodie.archive.merge.enable", "true")
  .option("hoodie.archive.merge.files.batch.size", "10")
  .option("hoodie.archive.merge.small.file.limit.bytes", "20971520")
  .option("hoodie.clean.allow.multiple", "true")
  .option("hoodie.cleaner.incremental.mode", "true")
  .option("hoodie.clean.async", "true")
  .option("hoodie.cleaner.policy.failed.writes", "LAZY")
  .option("hoodie.cleaner.delete.bootstrap.base.file", "false")
  .option("hoodie.clean.automatic", "true")
  .option("hoodie.cleaner.policy", "KEEP_LATEST_BY_HOURS")
  .option("hoodie.cleaner.hours.retained", "6")
  .option("hoodie.clean.trigger.strategy", "NUM_COMMITS")
  .option("hoodie.clean.max.commits", "10")
  .option("hoodie.datasource.compaction.async.enable", "true")
  .option("hoodie.compact.inline", "false")
  .option("hoodie.compact.schedule.inline", "false")
  .option("hoodie.compaction.lazy.block.read", "true")
  .option("hoodie.compaction.reverse.log.read", "false")
  .option("hoodie.compaction.logfile.size.threshold", "314572800")
  .option("hoodie.compaction.target.io", compact_limit)
  .option("hoodie.compaction.strategy", "org.apache.hudi.table.action.compact.strategy.LogFileSizeBasedCompactionStrategy")
  .option("hoodie.compact.inline.trigger.strategy", "NUM_AND_TIME")
  .option("hoodie.compact.inline.max.delta.commits", "10")
  .option("hoodie.compact.inline.max.delta.seconds", "7200")
  .option("hoodie.memory.compaction.fraction", "0.6")
  .option("hoodie.datasource.write.reconcile.schema", "true")
  .option("hoodie.schema.on.read.enable", "false")
  .option("hoodie.write.set.null.for.missing.columns", "true")
  .option("hoodie.datasource.write.schema.allow.auto.evolution.column.drop", "false")
  .option("hoodie.avro.schema.external.transformation", "true")
  .option("hoodie.avro.schema.validate", "true")
  .option("hoodie.write.concurrency.mode", "OPTIMISTIC_CONCURRENCY_CONTROL")
  .option("hoodie.write.lock.provider", "org.apache.hudi.client.transaction.lock.FileSystemBasedLockProvider")
  .option("hoodie.write.lock.filesystem.expire", "2")
  .option("hoodie.write.lock.wait_time_ms", "300000")
  .option("hoodie.write.lock.num_retries", "15")
```
   
   **Stacktrace**
   
```
24/07/18 21:18:49 WARN DAGScheduler: Broadcasting large task binary with size 1971.0 KiB
24/07/18 21:20:31 WARN FileSystemBasedLockProvider: Delete expired lock file: /xxx/lakehouse/social/dwd_social_kbi_beauty_v1_hudi15/.hoodie/lock
24/07/18 21:20:32 WARN BaseRollbackHelper: Rollback never failed and hence no marker dir was found. Safely moving on
24/07/18 21:20:54 WARN BaseHoodieCompactionPlanGenerator: After filtering, Nothing to compact for /xxx/lakehouse/social/dwd_social_kbi_beauty_v1_hudi15
24/07/18 21:21:34 ERROR HoodieBackedTableMetadataWriter: Exception in running table services on metadata table
org.apache.hudi.exception.HoodieRollbackException: Failed to rollback /xxx/lakehouse/social/dwd_social_kbi_beauty_v1_hudi15/.hoodie/metadata commits 20240718210231598
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollback(BaseHoodieTableServiceClient.java:1061)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollback(BaseHoodieTableServiceClient.java:1008)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollbackFailedWrites(BaseHoodieTableServiceClient.java:935)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollbackFailedWrites(BaseHoodieTableServiceClient.java:917)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollbackFailedIndexingCommits(BaseHoodieTableServiceClient.java:884)
    at org.apache.hudi.client.BaseHoodieWriteClient.lazyRollbackFailedIndexing(BaseHoodieWriteClient.java:1401)
    at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.cleanIfNecessary(HoodieBackedTableMetadataWriter.java:1349)
    at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.performTableServices(HoodieBackedTableMetadataWriter.java:1263)
    at org.apache.hudi.client.SparkRDDWriteClient.initializeMetadataTable(SparkRDDWriteClient.java:323)
    at org.apache.hudi.client.SparkRDDWriteClient.initMetadataTable(SparkRDDWriteClient.java:288)
    at org.apache.hudi.client.BaseHoodieWriteClient.doInitTable(BaseHoodieWriteClient.java:1244)
    at org.apache.hudi.client.BaseHoodieWriteClient.initTable(BaseHoodieWriteClient.java:1284)
    at org.apache.hudi.client.SparkRDDWriteClient.upsert(SparkRDDWriteClient.java:154)
    at org.apache.hudi.DataSourceUtils.doWriteOperation(DataSourceUtils.java:225)
    at org.apache.hudi.HoodieSparkSqlWriterInternal.liftedTree1$1(HoodieSparkSqlWriter.scala:492)
    at org.apache.hudi.HoodieSparkSqlWriterInternal.writeInternal(HoodieSparkSqlWriter.scala:490)
    at org.apache.hudi.HoodieSparkSqlWriterInternal.write(HoodieSparkSqlWriter.scala:187)
    at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:125)
    at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$3(HoodieStreamingSink.scala:141)
    at scala.util.Try$.apply(Try.scala:210)
    at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$2(HoodieStreamingSink.scala:133)
    at org.apache.hudi.HoodieStreamingSink.retry(HoodieStreamingSink.scala:237)
    at org.apache.hudi.HoodieStreamingSink.addBatch(HoodieStreamingSink.scala:132)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$17(MicroBatchExecution.scala:732)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$16(MicroBatchExecution.scala:729)
    at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:427)
    at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:425)
    at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:67)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:729)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:286)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:427)
    at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:425)
    at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:67)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:249)
    at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:67)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:239)
    at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:311)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:289)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$1(StreamExecution.scala:211)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:211)
Caused by: org.apache.hudi.exception.HoodieIOException: Could not delete instant [==>20240718210231598__deltacommit__INFLIGHT] with path /xxx/lakehouse/social/dwd_social_kbi_beauty_v1_hudi15/.hoodie/metadata/.hoodie/20240718210231598.deltacommit.inflight
    at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.deleteInstantFile(HoodieActiveTimeline.java:300)
    at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.deletePending(HoodieActiveTimeline.java:242)
    at org.apache.hudi.table.action.rollback.BaseRollbackActionExecutor.deleteInflightAndRequestedInstant(BaseRollbackActionExecutor.java:292)
    at org.apache.hudi.table.action.rollback.BaseRollbackActionExecutor.finishRollback(BaseRollbackActionExecutor.java:262)
    at org.apache.hudi.table.action.rollback.BaseRollbackActionExecutor.runRollback(BaseRollbackActionExecutor.java:117)
    at org.apache.hudi.table.action.rollback.BaseRollbackActionExecutor.execute(BaseRollbackActionExecutor.java:141)
    at org.apache.hudi.table.HoodieSparkMergeOnReadTable.rollback(HoodieSparkMergeOnReadTable.java:218)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollback(BaseHoodieTableServiceClient.java:1044)
    ... 49 more
24/07/18 21:21:34 ERROR HoodieStreamingSink: Micro batch id=15 threw following exception:
org.apache.hudi.exception.HoodieException: Failed to instantiate Metadata table
    at org.apache.hudi.client.SparkRDDWriteClient.initializeMetadataTable(SparkRDDWriteClient.java:326)
    at org.apache.hudi.client.SparkRDDWriteClient.initMetadataTable(SparkRDDWriteClient.java:288)
    at org.apache.hudi.client.BaseHoodieWriteClient.doInitTable(BaseHoodieWriteClient.java:1244)
    at org.apache.hudi.client.BaseHoodieWriteClient.initTable(BaseHoodieWriteClient.java:1284)
    at org.apache.hudi.client.SparkRDDWriteClient.upsert(SparkRDDWriteClient.java:154)
    at org.apache.hudi.DataSourceUtils.doWriteOperation(DataSourceUtils.java:225)
    at org.apache.hudi.HoodieSparkSqlWriterInternal.liftedTree1$1(HoodieSparkSqlWriter.scala:492)
    at org.apache.hudi.HoodieSparkSqlWriterInternal.writeInternal(HoodieSparkSqlWriter.scala:490)
    at org.apache.hudi.HoodieSparkSqlWriterInternal.write(HoodieSparkSqlWriter.scala:187)
    at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:125)
    at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$3(HoodieStreamingSink.scala:141)
    at scala.util.Try$.apply(Try.scala:210)
    at org.apache.hudi.HoodieStreamingSink.$anonfun$addBatch$2(HoodieStreamingSink.scala:133)
    at org.apache.hudi.HoodieStreamingSink.retry(HoodieStreamingSink.scala:237)
    at org.apache.hudi.HoodieStreamingSink.addBatch(HoodieStreamingSink.scala:132)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$17(MicroBatchExecution.scala:732)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$16(MicroBatchExecution.scala:729)
    at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:427)
    at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:425)
    at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:67)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:729)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:286)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:427)
    at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:425)
    at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:67)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:249)
    at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:67)
    at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:239)
    at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:311)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:289)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$1(StreamExecution.scala:211)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
    at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:211)
Caused by: org.apache.hudi.exception.HoodieRollbackException: Failed to rollback /xxx/lakehouse/social/dwd_social_kbi_beauty_v1_hudi15/.hoodie/metadata commits 20240718210231598
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollback(BaseHoodieTableServiceClient.java:1061)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollback(BaseHoodieTableServiceClient.java:1008)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollbackFailedWrites(BaseHoodieTableServiceClient.java:935)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollbackFailedWrites(BaseHoodieTableServiceClient.java:917)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollbackFailedIndexingCommits(BaseHoodieTableServiceClient.java:884)
    at org.apache.hudi.client.BaseHoodieWriteClient.lazyRollbackFailedIndexing(BaseHoodieWriteClient.java:1401)
    at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.cleanIfNecessary(HoodieBackedTableMetadataWriter.java:1349)
    at org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.performTableServices(HoodieBackedTableMetadataWriter.java:1263)
    at org.apache.hudi.client.SparkRDDWriteClient.initializeMetadataTable(SparkRDDWriteClient.java:323)
    ... 41 more
Caused by: org.apache.hudi.exception.HoodieIOException: Could not delete instant [==>20240718210231598__deltacommit__INFLIGHT] with path /xxx/lakehouse/social/dwd_social_kbi_beauty_v1_hudi15/.hoodie/metadata/.hoodie/20240718210231598.deltacommit.inflight
    at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.deleteInstantFile(HoodieActiveTimeline.java:300)
    at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.deletePending(HoodieActiveTimeline.java:242)
    at org.apache.hudi.table.action.rollback.BaseRollbackActionExecutor.deleteInflightAndRequestedInstant(BaseRollbackActionExecutor.java:292)
    at org.apache.hudi.table.action.rollback.BaseRollbackActionExecutor.finishRollback(BaseRollbackActionExecutor.java:262)
    at org.apache.hudi.table.action.rollback.BaseRollbackActionExecutor.runRollback(BaseRollbackActionExecutor.java:117)
    at org.apache.hudi.table.action.rollback.BaseRollbackActionExecutor.execute(BaseRollbackActionExecutor.java:141)
    at org.apache.hudi.table.HoodieSparkMergeOnReadTable.rollback(HoodieSparkMergeOnReadTable.java:218)
    at org.apache.hudi.client.BaseHoodieTableServiceClient.rollback(BaseHoodieTableServiceClient.java:1044)
    ... 49 more
24/07/18 21:24:01 WARN DAGScheduler: Broadcasting large task binary with size 1027.1 KiB
```
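
   The root cause in both traces is the same HoodieIOException: the rollback could not delete the inflight deltacommit file under `.hoodie/metadata/.hoodie/`. Below is a minimal diagnostic sketch I can run from the same Spark session, using only the plain Hadoop `FileSystem` API (the `/xxx/` prefix is masked as in the logs), to check whether that file still exists and who owns it:

```scala
import org.apache.hadoop.fs.{FileSystem, Path}

// The inflight instant file the rollback failed to delete (path masked as in the logs).
val inflight = new Path(
  "/xxx/lakehouse/social/dwd_social_kbi_beauty_v1_hudi15/.hoodie/metadata/.hoodie/20240718210231598.deltacommit.inflight")

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
if (fs.exists(inflight)) {
  val st = fs.getFileStatus(inflight)
  println(s"still exists: owner=${st.getOwner} perms=${st.getPermission} len=${st.getLen}")
} else {
  println("inflight file is gone; another process may have removed it concurrently")
}
```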
   
   

