[ 
https://issues.apache.org/jira/browse/HUDI-8933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17937232#comment-17937232
 ] 

sivabalan narayanan commented on HUDI-8933:
-------------------------------------------

Tried with the latest master; not reproducible. 

 
{code:java}
scala> df1.write.format("hudi").
     |   options(getQuickstartWriteConfigs).
     |   option("hoodie.write.table.version","6").
     |   option("hoodie.write.auto.upgrade","false").
     |   option(PRECOMBINE_FIELD_OPT_KEY, "ts").
     |   option(RECORDKEY_FIELD_OPT_KEY, "uuid").
     |   option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
     |   option("hoodie.datasource.write.table.type","MERGE_ON_READ").
     |   option("hoodie.parquet.small.file.limit","0").
     |   option("hoodie.compact.inline.max.delta.commits","1").
     |   option("hoodie.metadata.compact.max.delta.commits","5").
     |   option(TABLE_NAME, tableName).
     |   mode(Append).
     |   save(basePath)
warning: one deprecation; for details, enable `:setting -deprecation' or 
`:replay -deprecation'
25/03/20 15:43:56 WARN ConfigUtils: The configuration key 
'hoodie.compaction.record.merger.strategy' has been deprecated and may be 
removed in the future. Please use the new key 'hoodie.record.merge.strategy.id' 
instead.
# WARNING: Unable to attach Serviceability Agent. Unable to attach even with 
module exceptions: [org.apache.hudi.org.openjdk.jol.vm.sa.SASupportException: 
Sense failed., org.apache.hudi.org.openjdk.jol.vm.sa.SASupportException: Sense 
failed., org.apache.hudi.org.openjdk.jol.vm.sa.SASupportException: Sense 
failed.]
25/03/20 15:43:58 WARN ConfigUtils: The configuration key 
'hoodie.compaction.record.merger.strategy' has been deprecated and may be 
removed in the future. Please use the new key 'hoodie.record.merge.strategy.id' 
instead.
25/03/20 15:43:58 WARN ConfigUtils: The configuration key 
'hoodie.compaction.record.merger.strategy' has been deprecated and may be 
removed in the future. Please use the new key 'hoodie.record.merge.strategy.id' 
instead.
25/03/20 15:43:58 WARN HoodieTableConfig: The precombine or ordering field (ts) 
is specified. COMMIT_TIME_ORDERING merge mode does not use precombine or 
ordering field anymore.
25/03/20 15:43:59 WARN HoodieWriteConfig: HoodieTableVersion.SIX is not yet 
fully supported by the writer. Please expect some unexpected behavior, until 
its fully implemented.
25/03/20 15:43:59 WARN HoodieWriteConfig: HoodieTableVersion.SIX is not yet 
fully supported by the writer. Please expect some unexpected behavior, until 
its fully implemented.
25/03/20 15:43:59 WARN HoodieBackedTableMetadataWriter: Cannot initialize 
metadata table as operation(s) are in progress on the dataset: 
[20250320154100610, 20250320154359534]
25/03/20 15:43:59 ERROR HoodieBackedTableMetadataWriter: Failed to initialize 
MDT from filesystem
25/03/20 15:44:03 WARN HoodieWriteConfig: HoodieTableVersion.SIX is not yet 
fully supported by the writer. Please expect some unexpected behavior, until 
its fully implemented.
25/03/20 15:44:03 WARN HoodieBackedTableMetadataWriter: Cannot initialize 
metadata table as operation(s) are in progress on the dataset: 
[20250320154100610, 20250320154359534]
25/03/20 15:44:03 ERROR HoodieBackedTableMetadataWriter: Failed to initialize 
MDT from filesystem
25/03/20 15:44:03 WARN HoodieWriteConfig: HoodieTableVersion.SIX is not yet 
fully supported by the writer. Please expect some unexpected behavior, until 
its fully implemented.
25/03/20 15:44:03 WARN HoodieBackedTableMetadataWriter: Cannot initialize 
metadata table as operation(s) are in progress on the dataset: 
[20250320154100610, 20250320154403533]
25/03/20 15:44:03 ERROR HoodieBackedTableMetadataWriter: Failed to initialize 
MDT from filesystem
25/03/20 15:44:03 WARN BaseRollbackActionExecutor: Rollback finished without 
deleting inflight instant file. 
Instant=[==>20250320154100610__compaction__INFLIGHT]
25/03/20 15:44:04 WARN HoodieWriteConfig: HoodieTableVersion.SIX is not yet 
fully supported by the writer. Please expect some unexpected behavior, until 
its fully implemented.
25/03/20 15:44:05 WARN HoodieTableMetadataUtil: Reading log file: 
americas/united_states/san_francisco/.f4dac1b2-5f34-4888-a616-f4e4a59d4fd7-0_20250320154045224.log.1_1-89-117,
 to build column range metadata.
25/03/20 15:44:05 WARN HoodieTableMetadataUtil: Reading log file: 
americas/brazil/sao_paulo/.17085c98-7133-4121-a775-2f19ab83267f-0_20250320154031260.log.1_0-52-67,
 to build column range metadata.
25/03/20 15:44:05 WARN HoodieTableMetadataUtil: Reading log file: 
americas/united_states/san_francisco/.f4dac1b2-5f34-4888-a616-f4e4a59d4fd7-0_20250320154031260.log.1_1-52-68,
 to build column range metadata.
25/03/20 15:44:05 WARN HoodieTableMetadataUtil: Reading log file: 
americas/brazil/sao_paulo/.17085c98-7133-4121-a775-2f19ab83267f-0_20250320154045224.log.1_0-89-116,
 to build column range metadata.
25/03/20 15:44:05 WARN HoodieTableMetadataUtil: Reading log file: 
asia/india/chennai/.382f84c4-b40b-4fd6-b026-55777091896d-0_20250320154031260.log.1_2-52-69,
 to build column range metadata.
25/03/20 15:44:05 WARN HoodieTableMetadataUtil: Reading log file: 
asia/india/chennai/.382f84c4-b40b-4fd6-b026-55777091896d-0_20250320154045224.log.1_2-89-118,
 to build column range metadata.
25/03/20 15:44:05 WARN MetricsConfig: Cannot locate configuration: tried 
hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
25/03/20 15:44:07 WARN BaseHoodieCompactionPlanGenerator: No operations are 
retrieved for file:/tmp/hudi_trips_mor for table file:///tmp/hudi_trips_mor
 {code}


Before the upgrade, the data table had a pending compaction, and the metadata 
table had just completed a compaction. 
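For context, here is a minimal sketch of the condition that trips in the quoted stacktrace (HoodieBackedTableMetadataWriter.validateRollback): per the exception message, the rollback is rejected when the instant being rolled back is earlier than the latest metadata-table compaction and there are no deltacommits after that compaction. The class and method names below are illustrative, not Hudi's actual API; only the check itself mirrors the reported error.

```java
import java.util.List;

// Illustrative sketch only (not Hudi's actual API): models the condition that
// raises HoodieMetadataException in validateRollback, per the exception
// message in the quoted stacktrace.
public class RollbackValidationSketch {
    // Hudi instant times are fixed-width timestamps (yyyyMMddHHmmssSSS), so
    // lexicographic string comparison orders them chronologically.
    static boolean rollbackAllowed(String commitToRollback,
                                   String latestMdtCompaction,
                                   List<String> deltaCommitsAfterCompaction) {
        boolean earlierThanCompaction =
            commitToRollback.compareTo(latestMdtCompaction) < 0;
        // Rejected only when the commit predates the latest MDT compaction
        // and no deltacommits landed after that compaction.
        return !(earlierThanCompaction && deltaCommitsAfterCompaction.isEmpty());
    }

    public static void main(String[] args) {
        // The reported failure: pending compaction instant 20241128220754339
        // predates MDT compaction 20241128220832347, no deltacommits after it.
        System.out.println(rollbackAllowed(
            "20241128220754339", "20241128220832347", List.of()));  // false
    }
}
```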


> With metadata table enabled, upgrade fails during rollback of a pending 
> compaction commit
> -----------------------------------------------------------------------------------------
>
>                 Key: HUDI-8933
>                 URL: https://issues.apache.org/jira/browse/HUDI-8933
>             Project: Apache Hudi
>          Issue Type: Sub-task
>            Reporter: Sagar Sumit
>            Priority: Critical
>             Fix For: 1.0.2
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> With the metadata table enabled, the test failed with an error related to the compaction instant during rollback:
> # Write using 0.14.1 with the metadata table enabled.
> # Upgrade to the 1.0 package, keeping writer table version 6, with the metadata table still enabled.
> # The upgrade fails with an error related to the compaction instant during rollback.
> Stacktrace:
> {code:java}
> org.apache.hudi.exception.HoodieRollbackException: Failed to rollback 
> /tmp/output/20241128/table_comp_test_0_14_1_1_0_0-SNAPSHOT_1732811718 commits 
> 20241128220754339  at 
> org.apache.hudi.client.BaseHoodieTableServiceClient.rollback(BaseHoodieTableServiceClient.java:1065)
>   at 
> org.apache.hudi.client.BaseHoodieTableServiceClient.rollback(BaseHoodieTableServiceClient.java:1012)
>   at 
> org.apache.hudi.client.BaseHoodieTableServiceClient.rollbackFailedWrites(BaseHoodieTableServiceClient.java:940)
>   at 
> org.apache.hudi.client.BaseHoodieTableServiceClient.rollbackFailedWrites(BaseHoodieTableServiceClient.java:922)
>   at 
> org.apache.hudi.client.BaseHoodieTableServiceClient.rollbackFailedWrites(BaseHoodieTableServiceClient.java:917)
>   at 
> org.apache.hudi.client.BaseHoodieWriteClient.lambda$startCommitWithTime$97cdbdca$1(BaseHoodieWriteClient.java:941)
>   at 
> org.apache.hudi.common.util.CleanerUtils.rollbackFailedWrites(CleanerUtils.java:222)
>   at 
> org.apache.hudi.client.BaseHoodieWriteClient.startCommitWithTime(BaseHoodieWriteClient.java:940)
>   at 
> org.apache.hudi.client.BaseHoodieWriteClient.startCommitWithTime(BaseHoodieWriteClient.java:933)
>   at 
> org.apache.hudi.HoodieSparkSqlWriterInternal.writeInternal(HoodieSparkSqlWriter.scala:501)
>   at 
> org.apache.hudi.HoodieSparkSqlWriterInternal.write(HoodieSparkSqlWriter.scala:204)
>   at 
> org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:121)  
> at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:150)  at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:47)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
>   at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
>   at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
>   at 
> org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:118)
>   at 
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:195)
>   at 
> org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:103)
>   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)  at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
>   at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
>   at 
> org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:512)
>   at 
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:104)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:512)
>   at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
>   at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
>   at 
> org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
>   at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
>   at 
> org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
>   at 
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:488)
>   at 
> org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
>   at 
> org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
>   at 
> org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
>   at 
> org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:133)
>   at 
> org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:856)  
> at 
> org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:387)
>   at 
> org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:360)  
> at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)  at 
> TestAutomationUtils$.loadData(../src/main/scala/com/hudi/spark/TestAutomationUtils.scala:54)
>   ... 53 elided
> Caused by: org.apache.hudi.exception.HoodieMetadataException: Commit being 
> rolled back 20241128220754339 is earlier than the latest compaction 
> 20241128220832347. There are 0 deltacommits after this compaction: []
>   at 
> org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.validateRollback(HoodieBackedTableMetadataWriter.java:1023)
>   at 
> org.apache.hudi.metadata.HoodieBackedTableMetadataWriter.update(HoodieBackedTableMetadataWriter.java:991)
>   at 
> org.apache.hudi.table.action.BaseActionExecutor.writeTableMetadata(BaseActionExecutor.java:105)
>   at 
> org.apache.hudi.table.action.rollback.BaseRollbackActionExecutor.finishRollback(BaseRollbackActionExecutor.java:255)
>   at 
> org.apache.hudi.table.action.rollback.BaseRollbackActionExecutor.runRollback(BaseRollbackActionExecutor.java:117)
>   at 
> org.apache.hudi.table.action.rollback.BaseRollbackActionExecutor.execute(BaseRollbackActionExecutor.java:138)
>   at 
> org.apache.hudi.table.HoodieSparkCopyOnWriteTable.rollback(HoodieSparkCopyOnWriteTable.java:298)
>   at 
> org.apache.hudi.client.BaseHoodieTableServiceClient.rollback(BaseHoodieTableServiceClient.java:1048)
>   ... 95 more {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
