kgeisz commented on code in PR #7664:
URL: https://github.com/apache/hbase/pull/7664#discussion_r2722331403
##########
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/IncrementalTableBackupClient.java:
##########
@@ -403,13 +403,17 @@ public void execute() throws IOException, ColumnFamilyMismatchException {
       failBackup(conn, backupInfo, backupManager, e, "Unexpected Exception : ",
         BackupType.INCREMENTAL, conf);
       throw new IOException(e);
+    } finally {
+      if (backupInfo.isContinuousBackupEnabled()) {
+        deleteBulkLoadDirectory();
+      }
Review Comment:
As you pointed out, `incrementalCopyHFiles()` will delete the directory.
However, for continuous backups the directory is created again when
`handleBulkLoad()` runs. `handleBulkLoad()` contains logic that calls
`BackupUtils.collectBulkFiles()` for continuous backups. Following the call
chain further down, that calls `BulkFilesCollector.collectFromWalDirs()`,
which eventually runs the `BulkLoadCollectorJob`. That job sets the output path
[here](https://github.com/apache/hbase/blob/HBASE-28957_rebased/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/mapreduce/BulkLoadCollectorJob.java#L281).
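
To make the ordering concern concrete, here is a tiny, self-contained sketch (not HBase code; the helper names are made up) of why the delete has to sit in the `finally` block: an earlier step removes the directory, a later step recreates it, and only a cleanup that runs after both steps actually leaves nothing behind.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Minimal sketch of the cleanup ordering discussed above. The names below
// (stagingDir, copyHFiles, collectBulkFiles) are placeholders, not the actual
// HBase backup API: the point is only that a later step can recreate the
// directory an earlier step already removed, so the delete belongs in a
// finally block that runs after everything else.
public class CleanupOrderSketch {

  public static void main(String[] args) throws IOException {
    Path stagingDir = Files.createTempDirectory("bulk-load-staging");
    try {
      copyHFiles(stagingDir);        // removes stagingDir when it finishes
      collectBulkFiles(stagingDir);  // recreates stagingDir as its job output
    } finally {
      // Without this, the directory recreated by collectBulkFiles() is left behind.
      deleteRecursively(stagingDir);
    }
  }

  // Stands in for incrementalCopyHFiles(): copies data, then cleans up after itself.
  static void copyHFiles(Path dir) throws IOException {
    deleteRecursively(dir);
  }

  // Stands in for the continuous-backup bulk-file collection: the job sets
  // this directory as its output path, which brings it back into existence.
  static void collectBulkFiles(Path dir) throws IOException {
    Files.createDirectories(dir);
  }

  static void deleteRecursively(Path dir) throws IOException {
    if (!Files.exists(dir)) {
      return;
    }
    try (Stream<Path> paths = Files.walk(dir)) {
      // Delete children before parents.
      paths.sorted(Comparator.reverseOrder()).forEach(p -> {
        try {
          Files.delete(p);
        } catch (IOException e) {
          throw new UncheckedIOException(e);
        }
      });
    }
  }
}
```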
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]