Copilot commented on code in PR #7664:
URL: https://github.com/apache/hbase/pull/7664#discussion_r2719119895
##########
hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/IncrementalTableBackupClient.java:
##########
@@ -403,13 +403,17 @@ public void execute() throws IOException, ColumnFamilyMismatchException {
       failBackup(conn, backupInfo, backupManager, e, "Unexpected Exception : ",
         BackupType.INCREMENTAL, conf);
       throw new IOException(e);
+    } finally {
+      if (backupInfo.isContinuousBackupEnabled()) {
+        deleteBulkLoadDirectory();
+      }
Review Comment:
`deleteBulkLoadDirectory()` is now invoked both in `incrementalCopyHFiles`'s
`finally` block and here in `execute()` when continuous backup is enabled,
so continuous incremental backups will attempt to delete the same bulk load
directory twice. On the second call, `FileSystem.delete` will typically
return `false` because the directory has already been removed, producing a
misleading WARN log (`"Could not delete ..."`) on every successful run and
making operational diagnostics harder. Consider making the cleanup idempotent
so it no longer warns when the directory is already gone (for example, by
checking existence before deleting, as in the sketch below, or by relaxing
the log level), or centralizing the cleanup in a single place so the
directory is only deleted once per backup run.
```suggestion
```
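One way to make the cleanup idempotent is an existence check before the
delete, so a second invocation becomes a quiet no-op instead of a WARN. A
minimal sketch of that shape follows; it assumes the class already exposes a
`getBulkOutputDir()` helper plus `conf` and `LOG` fields, so the names are
illustrative rather than a verbatim copy of the current method:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: assumes getBulkOutputDir(), conf and LOG exist on the class.
protected void deleteBulkLoadDirectory() throws IOException {
  Path path = getBulkOutputDir();
  FileSystem fs = FileSystem.get(path.toUri(), conf);
  if (!fs.exists(path)) {
    // Already cleaned up by an earlier call (e.g. from incrementalCopyHFiles);
    // nothing to delete, so skip the WARN a failed delete would otherwise emit.
    LOG.debug("Bulk load directory {} already removed, skipping cleanup", path);
    return;
  }
  if (!fs.delete(path, true)) {
    LOG.warn("Could not delete {}", path);
  }
}
```

Alternatively, dropping this call from `execute()` and keeping only the one in
`incrementalCopyHFiles`'s `finally` block (or vice versa) would centralize the
cleanup so the directory is deleted exactly once per backup run.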
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]