ndimiduk commented on code in PR #6089:
URL: https://github.com/apache/hbase/pull/6089#discussion_r1740535510


##########
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/IncrementalTableBackupClient.java:
##########
@@ -103,13 +103,14 @@ protected static int getIndex(TableName tbl, List<TableName> sTableList) {
 
   /*
    * Reads bulk load records from backup table, iterates through the records and forms the paths for
-   * bulk loaded hfiles. Copies the bulk loaded hfiles to backup destination
+   * bulk loaded hfiles. Copies the bulk loaded hfiles to backup destination. This method does NOT
+   * clean up the entries in the bulk load system table. Those entries should not be cleaned until
+   * the backup is marked as complete.
    * @param sTableList list of tables to be backed up
-   * @return map of table to List of files
+   * @return the rowkeys of bulk loaded files
    */
   @SuppressWarnings("unchecked")
-  protected Map<byte[], List<Path>>[] handleBulkLoad(List<TableName> sTableList)
-    throws IOException {
+  protected List<byte[]> handleBulkLoad(List<TableName> sTableList) throws IOException {

Review Comment:
   Is it worth having some backup consistency check that can detect and purge 
extra files? Or do we think that backups will cycle out and the redundancy will 
be dropped the next time a full backup is taken?
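   For readers following the thread: the revised contract is that `handleBulkLoad` returns the rowkeys of the bulk-load records while deliberately leaving those records in the system table, so a later step can delete them only once the backup is marked complete. A toy in-memory sketch of that lifecycle (class and method names here are illustrative stand-ins, not the actual HBase API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the "return rowkeys, defer cleanup" lifecycle.
// The map stands in for the backup system table's bulk-load records.
public class BulkLoadSketch {
  private final Map<String, String> bulkLoadRecords = new LinkedHashMap<>();

  /** Record a bulk load: rowkey -> HFile path (stand-in for the system table write). */
  public void recordBulkLoad(String rowKey, String hfilePath) {
    bulkLoadRecords.put(rowKey, hfilePath);
  }

  /**
   * Mirrors the new contract: iterate the records, copy the HFiles (elided here),
   * and return the rowkeys WITHOUT deleting the records.
   */
  public List<String> handleBulkLoad() {
    List<String> rowKeys = new ArrayList<>();
    for (Map.Entry<String, String> e : bulkLoadRecords.entrySet()) {
      // ... copying e.getValue() to the backup destination would happen here ...
      rowKeys.add(e.getKey());
    }
    return rowKeys; // records are intentionally left in place
  }

  /** Invoked only after the backup is marked complete. */
  public void deleteBulkLoadedRows(List<String> rowKeys) {
    rowKeys.forEach(bulkLoadRecords::remove);
  }

  public int recordCount() {
    return bulkLoadRecords.size();
  }

  public static void main(String[] args) {
    BulkLoadSketch sketch = new BulkLoadSketch();
    sketch.recordBulkLoad("row1", "hdfs://ns/bulk/hfile1");
    List<String> keys = sketch.handleBulkLoad();
    // Entries still present here; only deleted after backup completion.
    sketch.deleteBulkLoadedRows(keys);
  }
}
```

   The point of the split is failure safety: if the backup fails after the copy step, the surviving system-table entries let a retry find the same HFiles again, at the cost of possible redundant copies (the subject of the review question above).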



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to