nsivabalan commented on code in PR #7469:
URL: https://github.com/apache/hudi/pull/7469#discussion_r1182035004


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/BaseHoodieWriteClient.java:
##########
@@ -897,28 +897,40 @@ public HoodieCleanMetadata clean(String cleanInstantTime, boolean scheduleInline
     if (!tableServicesEnabled(config)) {
       return null;
     }
-    final Timer.Context timerContext = metrics.getCleanCtx();
-    CleanerUtils.rollbackFailedWrites(config.getFailedWritesCleanPolicy(),
-        HoodieTimeline.CLEAN_ACTION, () -> rollbackFailedWrites(skipLocking));
-
-    HoodieTable table = createTable(config, hadoopConf);
-    if (config.allowMultipleCleans() || !table.getActiveTimeline().getCleanerTimeline().filterInflightsAndRequested().firstInstant().isPresent()) {
-      LOG.info("Cleaner started");
-      // proceed only if multiple clean schedules are enabled or if there are no pending cleans.
-      if (scheduleInline) {
-        scheduleTableServiceInternal(cleanInstantTime, Option.empty(), TableServiceType.CLEAN);
-        table.getMetaClient().reloadActiveTimeline();
+    HoodieCleanMetadata metadata;
+    HoodieInstant ownerInstant = null;
+    try {
+      if (!skipLocking) {
+        ownerInstant = new HoodieInstant(true, HoodieTimeline.CLEAN_ACTION, cleanInstantTime);
+        this.txnManager.beginTransaction(Option.of(ownerInstant), Option.empty());
+      }
+      final Timer.Context timerContext = metrics.getCleanCtx();
+      CleanerUtils.rollbackFailedWrites(config.getFailedWritesCleanPolicy(),

Review Comment:
   Even if we decide to go with locking, we should keep the lock scoped to the mandatory critical section. For example, we should lock only the `rollbackFailedWrites` call, not the entire clean operation.
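   To sketch the idea (this is a simplified, self-contained example using a plain `ReentrantLock`, not Hudi's actual `txnManager` API; the class and method names here are hypothetical):

   ```java
   import java.util.concurrent.locks.ReentrantLock;

   // Hypothetical illustration: hold the lock only around the step that
   // mutates shared state (rolling back failed writes), and run the rest
   // of the clean operation outside the critical section.
   public class NarrowLockClean {
       private final ReentrantLock txnLock = new ReentrantLock();

       // Stand-in for the rollback of failed writes: the only step that
       // needs mutual exclusion in this sketch.
       private int rollbackFailedWrites(int failedWrites) {
           return 0; // pretend all failed writes were rolled back
       }

       public String clean(int failedWrites) {
           int remaining;
           txnLock.lock();
           try {
               // critical section: keep it as small as possible
               remaining = rollbackFailedWrites(failedWrites);
           } finally {
               txnLock.unlock();
           }
           // Scheduling and executing the clean itself happens without the
           // lock, so concurrent writers are not blocked for the whole run.
           return "cleaned, remaining failed writes: " + remaining;
       }

       public static void main(String[] args) {
           System.out.println(new NarrowLockClean().clean(3));
       }
   }
   ```

   The design point is that widening the lock to cover planning and execution of the clean serializes work that does not touch the contended state, which hurts concurrent writers for no correctness benefit.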
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to