nsivabalan commented on code in PR #8837:
URL: https://github.com/apache/hudi/pull/8837#discussion_r1268077898


##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##########
@@ -653,37 +608,57 @@ private static void processRestoreMetadata(HoodieActiveTimeline metadataTableTim
    */
   public static Map<MetadataPartitionType, HoodieData<HoodieRecord>> convertMetadataToRecords(
       HoodieEngineContext engineContext, HoodieActiveTimeline metadataTableTimeline,
-      HoodieRollbackMetadata rollbackMetadata, MetadataRecordsGenerationParams recordsGenerationParams,
-      String instantTime, Option<String> lastSyncTs, boolean wasSynced) {
+      HoodieTableMetaClient dataTableMetaClient,
+      HoodieRollbackMetadata rollbackMetadata, String instantTime) {
     final Map<MetadataPartitionType, HoodieData<HoodieRecord>> partitionToRecordsMap = new HashMap<>();
-    Map<String, List<String>> partitionToDeletedFiles = new HashMap<>();
-    Map<String, Map<String, Long>> partitionToAppendedFiles = new HashMap<>();
 
-    List<HoodieRecord> filesPartitionRecords =
-        convertMetadataToRollbackRecords(metadataTableTimeline, rollbackMetadata, partitionToDeletedFiles, partitionToAppendedFiles, instantTime, lastSyncTs, wasSynced);
+    List<HoodieRecord> filesPartitionRecords = convertMetadataToRollbackRecords(rollbackMetadata, instantTime);
+
+    List<HoodieRecord> reAddedRecords = getHoodieRecordsForLogFilesFromRollbackPlan(dataTableMetaClient, instantTime);
+    filesPartitionRecords.addAll(reAddedRecords);
     final HoodieData<HoodieRecord> rollbackRecordsRDD = engineContext.parallelize(filesPartitionRecords, 1);
+
     partitionToRecordsMap.put(MetadataPartitionType.FILES, rollbackRecordsRDD);
 
     return partitionToRecordsMap;
   }
 
+  private static List<HoodieRecord> getHoodieRecordsForLogFilesFromRollbackPlan(HoodieTableMetaClient dataTableMetaClient, String instantTime) {
+    /*List<HoodieInstant> instants = dataTableMetaClient.reloadActiveTimeline().filterRequestedRollbackTimeline()
+        .filter(instant -> instant.getTimestamp().equals(instantTime) && instant.isRequested()).getInstants();*/
+
+    HoodieInstant rollbackInstant = new HoodieInstant(HoodieInstant.State.REQUESTED, HoodieTimeline.ROLLBACK_ACTION, instantTime);
+
+    // HoodieInstant rollbackInstant = instants.get(0);
+    HoodieInstant requested = HoodieTimeline.getRollbackRequestedInstant(rollbackInstant);
+    try {
+      HoodieRollbackPlan rollbackPlan = TimelineMetadataUtils.deserializeAvroMetadata(
+          dataTableMetaClient.getActiveTimeline().readRollbackInfoAsBytes(requested).get(), HoodieRollbackPlan.class);
+
+      Map<String, Map<String, Long>> partitionToLogFilesMap = new HashMap<>();
+
+      rollbackPlan.getRollbackRequests().forEach(rollbackRequest -> {
+        partitionToLogFilesMap.computeIfAbsent(rollbackRequest.getPartitionPath(), s -> new HashMap<>());
+        // fetch only log files that are expected to be RB'd in DT as part of this rollback. these log files will not be deleted, but rendered
+        // invalid once rollback is complete.
+        partitionToLogFilesMap.get(rollbackRequest.getPartitionPath()).putAll(rollbackRequest.getLogBlocksToBeDeleted());

Review Comment:
   There is a chance we will have more than one entry for a given partition in rollbackPlan.getRollbackRequests(). I believe the optimization you are suggesting is based on the assumption that we call put on partitionToLogFilesMap only once per partition.
   So, I don't see a real benefit here. Please do let me know wdyt.
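   To make the concern concrete, here is a minimal, self-contained sketch (partition paths, file names, and sizes are made up, and `RollbackRequest` here is a hypothetical stand-in for the Avro-generated class): because `computeIfAbsent` plus `putAll` accumulates entries, two rollback requests targeting the same partition merge correctly, whereas a single `put` per partition would silently drop the earlier request's log files.

   ```java
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;

   public class RollbackPlanMergeSketch {

     // Hypothetical stand-in for one entry of rollbackPlan.getRollbackRequests():
     // a partition path plus the log files (name -> size) to be invalidated.
     record RollbackRequest(String partitionPath, Map<String, Long> logBlocksToBeDeleted) {}

     // Same computeIfAbsent + putAll pattern as in the diff above.
     static Map<String, Map<String, Long>> merge(List<RollbackRequest> requests) {
       Map<String, Map<String, Long>> partitionToLogFilesMap = new HashMap<>();
       requests.forEach(r ->
           partitionToLogFilesMap
               .computeIfAbsent(r.partitionPath(), p -> new HashMap<>())
               .putAll(r.logBlocksToBeDeleted()));
       return partitionToLogFilesMap;
     }

     public static void main(String[] args) {
       // Two requests hit the same partition, as the review comment warns can happen.
       Map<String, Map<String, Long>> merged = merge(List.of(
           new RollbackRequest("2023/07/01", Map.of(".log.1", 100L)),
           new RollbackRequest("2023/07/01", Map.of(".log.2", 200L)),
           new RollbackRequest("2023/07/02", Map.of(".log.3", 300L))));

       // Both log files for 2023/07/01 survive; a single put() per partition
       // would have replaced the first request's entries with the second's.
       System.out.println(merged.get("2023/07/01").size()); // 2
       System.out.println(merged.get("2023/07/02").size()); // 1
     }
   }
   ```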



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
