rmdmattingly commented on code in PR #6040:
URL: https://github.com/apache/hbase/pull/6040#discussion_r1768881096


##########
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/master/BackupLogCleaner.java:
##########
@@ -81,39 +81,55 @@ public void init(Map<String, Object> params) {
     }
   }
 
-  private Map<Address, Long> getServerToNewestBackupTs(List<BackupInfo> backups)
+  /**
+   * Calculates the timestamp boundary up to which all backup roots have already included the WAL.
+   * I.e. WALs with a lower (= older) or equal timestamp are no longer needed for future incremental
+   * backups.
+   */
+  private Map<Address, Long> serverToPreservationBoundaryTs(List<BackupInfo> backups)

Review Comment:
   Sorry for being a pain here, but I'm not sure I agree. When building `serverToPreservationBoundaryTs` we loop through:
   ```
   for (BackupInfo backupInfo : newestBackupPerRootDir.values()) {
     // build boundaries via tableSetTimestampMap, like you said
   }
   ```
   and we agree that `newestBackupPerRootDir.values()` will only contain B4. So our boundaries would end up being based only on the timestamps from B4? How do the second-newest backup, and beyond, in that root come into play?
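   To make the concern concrete, here is a minimal, self-contained sketch of the shape of this computation. All names (`Backup`, `newestPerRoot`, `preservationBoundary`) are hypothetical stand-ins, not the actual `BackupLogCleaner` code: per root we keep only the newest backup, then take the per-server minimum across roots, so older backups within a root (like B3) never influence the boundary:

   ```java
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;

   public class BoundarySketch {
       // Hypothetical stand-in for BackupInfo: a backup root, a completion ts,
       // and the per-server WAL timestamps this backup has covered.
       record Backup(String rootDir, long completeTs, Map<String, Long> serverTs) {}

       // Keep only the newest backup per root, by completion timestamp.
       static Map<String, Backup> newestPerRoot(List<Backup> backups) {
           Map<String, Backup> newest = new HashMap<>();
           for (Backup b : backups) {
               newest.merge(b.rootDir(), b,
                   (a, c) -> a.completeTs() >= c.completeTs() ? a : c);
           }
           return newest;
       }

       // Per-server boundary = minimum over roots of that root's newest backup ts,
       // so a WAL only becomes deletable once EVERY root has covered it.
       static Map<String, Long> preservationBoundary(Map<String, Backup> newestPerRoot) {
           Map<String, Long> boundary = new HashMap<>();
           for (Backup b : newestPerRoot.values()) {
               for (Map.Entry<String, Long> e : b.serverTs().entrySet()) {
                   boundary.merge(e.getKey(), e.getValue(), Math::min);
               }
           }
           return boundary;
       }

       public static void main(String[] args) {
           // Root "r1" has two backups; only the newest (ts 200) contributes.
           List<Backup> backups = List.of(
               new Backup("r1", 100, Map.of("server1", 100L)),  // older, ignored
               new Backup("r1", 200, Map.of("server1", 200L)),  // newest for r1
               new Backup("r2", 150, Map.of("server1", 150L))); // newest for r2
           Map<String, Long> boundary = preservationBoundary(newestPerRoot(backups));
           // The other root's newest backup (ts 150) holds the boundary down,
           // but the older backup within r1 (ts 100) plays no role.
           System.out.println(boundary.get("server1")); // prints 150
       }
   }
   ```

   Under these assumptions, the only way a second-newest backup in a root could matter is if the loop iterated over more than `newestBackupPerRootDir.values()`, which is exactly the question above.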



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
