n3nash commented on code in PR #5535:
URL: https://github.com/apache/hudi/pull/5535#discussion_r870883988


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/BaseHoodieWriteClient.java:
##########
@@ -1558,13 +1558,15 @@ private void tryUpgrade(HoodieTableMetaClient metaClient, Option&lt;String&gt; instant
        new UpgradeDowngrade(metaClient, config, context, upgradeDowngradeHelper);
 
     if (upgradeDowngrade.needsUpgradeOrDowngrade(HoodieTableVersion.current())) {
-      // Ensure no inflight commits by setting EAGER policy and explicitly cleaning all failed commits
-      List<String> instantsToRollback = getInstantsToRollback(metaClient, HoodieFailedWritesCleaningPolicy.EAGER, instantTime);
+      if (config.getWriteConcurrencyMode().supportsOptimisticConcurrencyControl()) {
+        // Ensure no inflight commits by setting EAGER policy and explicitly cleaning all failed commits
+        List<String> instantsToRollback = getInstantsToRollback(metaClient, HoodieFailedWritesCleaningPolicy.EAGER, instantTime);
 
-      Map<String, Option<HoodiePendingRollbackInfo>> pendingRollbacks = getPendingRollbackInfos(metaClient);
-      instantsToRollback.forEach(entry -> pendingRollbacks.putIfAbsent(entry, Option.empty()));
+        Map<String, Option<HoodiePendingRollbackInfo>> pendingRollbacks = getPendingRollbackInfos(metaClient);
+        instantsToRollback.forEach(entry -> pendingRollbacks.putIfAbsent(entry, Option.empty()));

Review Comment:
   One case I can think of: the failed-writes cleaning policy is set to LAZY while the writer config is single writer. In that setup older inflight files can still be lying around, and if this rollback is performed only under optimistic concurrency control, those older inflight metadata files may never get cleaned up here.
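   The gap above can be illustrated with a standalone sketch. This is not the actual Hudi API; the enums and method names below are hypothetical stand-ins that model the gating added in the diff (rollback only when OCC is enabled) versus the condition under which stale inflight instants accumulate (LAZY cleaning):

```java
// Hypothetical model of the review concern; names do not exist in Hudi.
enum CleaningPolicy { EAGER, LAZY }

enum ConcurrencyMode { SINGLE_WRITER, OPTIMISTIC_CONCURRENCY_CONTROL }

public class UpgradeRollbackGate {

  // Mirrors the condition added in the diff: the upgrade-time rollback
  // runs only when optimistic concurrency control is enabled.
  static boolean rollsBackOnUpgrade(ConcurrencyMode mode) {
    return mode == ConcurrencyMode.OPTIMISTIC_CONCURRENCY_CONTROL;
  }

  // With LAZY cleaning, failed-write cleanup is deferred, so inflight
  // files from earlier failed commits can linger. If the upgrade-time
  // rollback is also skipped (single writer), nothing reclaims them here.
  static boolean mayLeaveStaleInflights(CleaningPolicy policy, ConcurrencyMode mode) {
    return policy == CleaningPolicy.LAZY && !rollsBackOnUpgrade(mode);
  }

  public static void main(String[] args) {
    // Single writer + LAZY: stale inflight files can be left behind.
    System.out.println(mayLeaveStaleInflights(CleaningPolicy.LAZY,
        ConcurrencyMode.SINGLE_WRITER));
    // OCC + LAZY: the gated rollback still cleans them up.
    System.out.println(mayLeaveStaleInflights(CleaningPolicy.LAZY,
        ConcurrencyMode.OPTIMISTIC_CONCURRENCY_CONTROL));
  }
}
```

   In other words, gating the rollback on `supportsOptimisticConcurrencyControl()` alone misses the single-writer-with-LAZY-policy combination the comment describes.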



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to