ndimiduk commented on code in PR #5876:
URL: https://github.com/apache/hbase/pull/5876#discussion_r1596546774


##########
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupAdminImpl.java:
##########
@@ -129,20 +128,15 @@ public int deleteBackups(String[] backupIds) throws IOException {
       }
       snapshotDone = true;
       try {
+        List<String> affectedBackupRootDirs = new ArrayList<>();
         for (int i = 0; i < backupIds.length; i++) {
           BackupInfo info = sysTable.readBackupInfo(backupIds[i]);
           if (info != null) {

Review Comment:
   nit: invert the logic and reduce indentation depth.
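   For illustration, a hedged sketch of the inversion being suggested; the types and method names below are simplified stand-ins, not the actual HBase code:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   class GuardClauseSketch {
     // Stand-in for sysTable.readBackupInfo(); returns null for unknown ids.
     static String readBackupInfo(String backupId) {
       return backupId.startsWith("backup_") ? "info:" + backupId : null;
     }

     static List<String> collectAffected(String[] backupIds) {
       List<String> affected = new ArrayList<>();
       for (String backupId : backupIds) {
         String info = readBackupInfo(backupId);
         if (info == null) {
           continue; // inverted null-check keeps the happy path one level shallower
         }
         affected.add(info);
       }
       return affected;
     }
   }
   ```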



##########
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupAdminImpl.java:
##########
@@ -129,20 +128,15 @@ public int deleteBackups(String[] backupIds) throws IOException {
       }
       snapshotDone = true;

Review Comment:
   That snapshot logic looks problematic. Is it okay to continue using an existing snapshot? If a snapshot already exists, we don't know what it contains, so this operation should fail. Separate issue, I guess?
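   A fail-fast version of that check might look like the following sketch; the flag and exception here are illustrative, not the BackupSystemTable API:

   ```java
   class SnapshotGuardSketch {
     private boolean snapshotExists = false;

     void takeSnapshotBeforeDelete() {
       if (snapshotExists) {
         // A pre-existing snapshot has unknown contents: abort rather than reuse it.
         throw new IllegalStateException(
           "stale backup-system snapshot found; refusing to continue");
       }
       snapshotExists = true; // simulate taking a fresh snapshot
     }
   }
   ```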



##########
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupAdminImpl.java:
##########
@@ -175,26 +169,23 @@ public int deleteBackups(String[] backupIds) throws IOException {
 
   /**
    * Updates incremental backup set for every backupRoot
-   * @param tablesMap map [backupRoot: {@code Set<TableName>}]
-   * @param table     backup system table
+   * @param backupRoots backupRoots for which to revise the incremental backup set
+   * @param table       backup system table
    * @throws IOException if a table operation fails
    */
-  private void finalizeDelete(Map<String, HashSet<TableName>> tablesMap, BackupSystemTable table)
+  private void finalizeDelete(List<String> backupRoots, BackupSystemTable table)
     throws IOException {
-    for (String backupRoot : tablesMap.keySet()) {
+    for (String backupRoot : backupRoots) {
       Set<TableName> incrTableSet = table.getIncrementalBackupTableSet(backupRoot);
-      Map<TableName, ArrayList<BackupInfo>> tableMap =
+      Map<TableName, List<BackupInfo>> tableMap =
         table.getBackupHistoryForTableSet(incrTableSet, backupRoot);
-      for (Map.Entry<TableName, ArrayList<BackupInfo>> entry : tableMap.entrySet()) {
-        if (entry.getValue() == null) {
-          // No more backups for a table
-          incrTableSet.remove(entry.getKey());
-        }
-      }
+
+      // Keep only the tables that are present in other backups
+      incrTableSet.retainAll(tableMap.keySet());
+
+      table.deleteIncrementalBackupTableSet(backupRoot);

Review Comment:
   This API on `BackupSystemTable` is pretty ham-handed. It would be better if we could delete only the table entries that are being removed, rather than deleting and re-creating the row. I assume the lock taken via `startBackupExclusiveOperation()` at the beginning of `deleteBackups` prevents another actor from modifying that row concurrently...
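   As a sketch of the targeted-delete alternative; the set-based model below stands in for the BackupSystemTable row and is not its real schema:

   ```java
   import java.util.HashSet;
   import java.util.Set;

   class TargetedDeleteSketch {
     // Removes only the stale entries from the incremental set and returns them,
     // instead of deleting the whole row and re-inserting the survivors.
     static Set<String> pruneIncrementalSet(Set<String> incrTableSet, Set<String> stillBackedUp) {
       Set<String> stale = new HashSet<>(incrTableSet);
       stale.removeAll(stillBackedUp);  // entries no longer covered by any backup
       incrTableSet.removeAll(stale);   // targeted removal; the "row" itself survives
       return stale;                    // these would become per-entry Deletes
     }
   }
   ```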



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
