satishkotha commented on a change in pull request #3869:
URL: https://github.com/apache/hudi/pull/3869#discussion_r739605652



##########
File path: hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/table/action/commit/BaseSparkCommitActionExecutor.java
##########
@@ -117,7 +119,24 @@ private void initKeyGenIfNeeded(boolean populateMetaFields) {
           table.getFileSystemView().getFileGroupsInPendingClustering().map(entry -> entry.getKey()).collect(Collectors.toSet());
       UpdateStrategy updateStrategy = (UpdateStrategy)ReflectionUtils
           .loadClass(config.getClusteringUpdatesStrategyClass(), this.context, fileGroupsInPendingClustering);
-      return (JavaRDD<HoodieRecord<T>>)updateStrategy.handleUpdate(inputRecordsRDD);
+      Pair<JavaRDD<HoodieRecord<T>>, Set<HoodieFileGroupId>> recordsAndPendingClusteringFileGroups =
+          (Pair<JavaRDD<HoodieRecord<T>>, Set<HoodieFileGroupId>>)updateStrategy.handleUpdate(inputRecordsRDD);
+      Set<HoodieFileGroupId> fileGroupsWithUpdatesAndPendingClustering = recordsAndPendingClusteringFileGroups.getRight();
+      if (fileGroupsWithUpdatesAndPendingClustering.isEmpty()) {
+        return recordsAndPendingClusteringFileGroups.getLeft();
+      }
+      // there are filegroups pending clustering and receving updates, so rollback the inflight clustering instants
+      Set<HoodieInstant> pendingClusteringInstantsToRollback = getAllFileGroupsInPendingClusteringPlans(table.getMetaClient()).entrySet().stream()
+          .filter(e -> fileGroupsWithUpdatesAndPendingClustering.contains(e.getKey()))
+          .map(Map.Entry::getValue)
+          .collect(Collectors.toSet());
+      pendingClusteringInstantsToRollback.forEach(instant -> {
+        String commitTime = HoodieActiveTimeline.createNewInstantTime();
+        table.scheduleRollback(context, commitTime, instant, false);
+        table.rollback(context, commitTime, instant, false);
Review comment:
       How can the transaction manager catch other changes to the timeline? Based on my read, this alone doesn't seem sufficient: the transaction would not fail even if clustering completes while the transaction is in progress. We may also need to start a transaction as part of clustering itself. (I don't know exactly how this would work; you probably want to get suggestions from the folks who worked on multi-writer: Nishith/Jagmeet Bali/Vinoth.)
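       To illustrate the race I'm worried about, here is a minimal, self-contained toy model (all class and method names below are hypothetical, not Hudi APIs): a writer snapshots the set of completed clustering instants when its transaction begins, and re-checks the timeline at commit time. Without that commit-time re-check, or without clustering going through the same lock, the writer commits on top of file groups that clustering has already replaced.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArrayList;

// Toy model of the writer-vs-clustering race (hypothetical names, not Hudi APIs).
public class ClusteringConflictSketch {

  // Shared "timeline" that records completed clustering instants.
  static class Timeline {
    final List<String> completedClusteringInstants = new CopyOnWriteArrayList<>();
  }

  static class WriterTxn {
    private final Timeline timeline;
    private final Set<String> snapshot;

    WriterTxn(Timeline timeline) {
      this.timeline = timeline;
      // Snapshot completed clustering instants at transaction start.
      this.snapshot = new HashSet<>(timeline.completedClusteringInstants);
    }

    /** Returns true if the commit is safe, false if clustering completed since the snapshot. */
    boolean tryCommit() {
      Set<String> newInstants = new HashSet<>(timeline.completedClusteringInstants);
      newInstants.removeAll(snapshot);
      // Any clustering instant completed mid-transaction means the writer's
      // view of the file groups is stale, so it must abort and retry.
      return newInstants.isEmpty();
    }
  }

  public static void main(String[] args) {
    Timeline timeline = new Timeline();
    timeline.completedClusteringInstants.add("001");

    WriterTxn txn = new WriterTxn(timeline);
    // Clustering completes while the writer's transaction is in progress.
    timeline.completedClusteringInstants.add("002");

    if (txn.tryCommit()) {
      throw new AssertionError("expected conflict to be detected");
    }
    System.out.println("conflict detected, writer must abort/retry");
  }
}
```

       The point of the sketch: the re-check only works if clustering's completion is visible on the same timeline the writer checks, and is serialized against the writer's commit by the same lock, which is why clustering may need to open a transaction too.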




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
