[
https://issues.apache.org/jira/browse/HBASE-29891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18058019#comment-18058019
]
Kevin Geiszler commented on HBASE-29891:
----------------------------------------
Before the first incremental backup (and after the continuous full backup), the
backup root directory looks as follows:
{code:bash}
% hadoop fs -ls hdfs://localhost:51902/backupUT
Found 2 items
drwxr-xr-x - kgeiszler supergroup 0 2026-02-11 18:20
hdfs://localhost:51902/backupUT/backup_1770862784248
drwxr-xr-x - kgeiszler supergroup 0 2026-02-11 18:20
hdfs://localhost:51902/backupUT/backup_1770862810215 {code}
After {{IncrementalTableBackupClient.walToHFiles()}} runs, the root directory
contains the {{.tmp}} directory, which is the MapReduce job's output directory:
{code:bash}
% hadoop fs -ls hdfs://localhost:51902/backupUT
Found 3 items
drwxr-xr-x - kgeiszler supergroup 0 2026-02-11 18:23
hdfs://localhost:51902/backupUT/.tmp
drwxr-xr-x - kgeiszler supergroup 0 2026-02-11 18:20
hdfs://localhost:51902/backupUT/backup_1770862784248
drwxr-xr-x - kgeiszler supergroup 0 2026-02-11 18:20
hdfs://localhost:51902/backupUT/backup_1770862810215 {code}
Within {{.tmp}}, we see the incremental backup directory:
{code:bash}
% hadoop fs -ls hdfs://localhost:51902/backupUT/.tmp
Found 1 items
drwxr-xr-x - kgeiszler supergroup 0 2026-02-11 18:23
hdfs://localhost:51902/backupUT/.tmp/backup_1770862810215 {code}
When {{convertWALsToHFiles()}} continues and invokes {{walToHFiles()}} again for
the next table, the job fails because the previous invocation's output directory
still exists:
{code:java}
java.io.IOException: org.apache.hadoop.mapred.FileAlreadyExistsException:
Output directory hdfs://localhost:51902/backupUT/.tmp/backup_1770862810215
already exists {code}
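To illustrate the failure mode, here is a minimal sketch (not the actual
{{IncrementalTableBackupClient}} code; the loop shape and path construction are
assumptions) of per-table MapReduce jobs sharing a single bulk output path:
{code:java}
// Illustrative sketch only -- not the actual HBase backup code.
// Every per-table job points FileOutputFormat at the same
// <backupRoot>/.tmp/<backupId> path, so only the first job can pass
// JobSubmitter.checkSpecs() -> FileOutputFormat.checkOutputSpecs().
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SharedBulkOutputSketch {
  static void runPerTableJobs(Configuration conf, String backupRoot,
      String backupId, String[] tables) throws Exception {
    // Fixed per-backup path, with no per-table component.
    Path bulkOutput = new Path(backupRoot, ".tmp/" + backupId);
    for (String table : tables) {
      Job job = Job.getInstance(conf, "wals-to-hfiles-" + table);
      FileOutputFormat.setOutputPath(job, bulkOutput);
      // The first submission creates bulkOutput; every subsequent
      // submission throws FileAlreadyExistsException during the
      // output-spec check, before the job ever runs.
      job.waitForCompletion(true);
    }
  }
}
{code}
This matches the stack trace in the ticket description, where the exception is
raised from {{FileOutputFormat.checkOutputSpecs()}} during
{{JobSubmitter.submitJobInternal()}}.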
> Multi-table continuous incremental backup is failing because output directory
> already exists
> --------------------------------------------------------------------------------------------
>
> Key: HBASE-29891
> URL: https://issues.apache.org/jira/browse/HBASE-29891
> Project: HBase
> Issue Type: Bug
> Components: backup&restore
> Affects Versions: 2.6.0, 3.0.0-alpha-4
> Reporter: Kevin Geiszler
> Assignee: Kevin Geiszler
> Priority: Major
> Attachments: full-log.txt, unit-test.txt
>
>
> This was discovered while writing a Point-in-Time Restore integration test
> for HBASE-28957.
> Running an incremental backup with continuous backup enabled on multiple
> tables results in the following error:
> {noformat}
> Output directory hdfs://localhost:64120/backupUT/.tmp/backup_1770846846624
> already exists{noformat}
> Here is the full error and stack trace:
> {code:java}
> 2026-02-11T13:54:17,945 ERROR [Time-limited test {}]
> impl.TableBackupClient(232): Unexpected exception in incremental-backup:
> incremental copy backup_1770846846624Output directory
> hdfs://localhost:64120/backupUT/.tmp/backup_1770846846624 already exists
> org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory
> hdfs://localhost:64120/backupUT/.tmp/backup_1770846846624 already exists
> at
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:164)
> ~[hadoop-mapreduce-client-core-3.4.2.jar:?]
> at
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:278)
> ~[hadoop-mapreduce-client-core-3.4.2.jar:?]
> at
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:142)
> ~[hadoop-mapreduce-client-core-3.4.2.jar:?]
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1677)
> ~[hadoop-mapreduce-client-core-3.4.2.jar:?]
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1674)
> ~[hadoop-mapreduce-client-core-3.4.2.jar:?]
> at java.security.AccessController.doPrivileged(AccessController.java:712)
> ~[?:?]
> at javax.security.auth.Subject.doAs(Subject.java:439) ~[?:?]
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953)
> ~[hadoop-common-3.4.2.jar:?]
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1674)
> ~[hadoop-mapreduce-client-core-3.4.2.jar:?]
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1695)
> ~[hadoop-mapreduce-client-core-3.4.2.jar:?]
> at org.apache.hadoop.hbase.mapreduce.WALPlayer.run(WALPlayer.java:482)
> ~[classes/:?]
> at
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.walToHFiles(IncrementalTableBackupClient.java:545)
> ~[classes/:?]
> at
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.convertWALsToHFiles(IncrementalTableBackupClient.java:488)
> ~[classes/:?]
> at
> org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.execute(IncrementalTableBackupClient.java:363)
> ~[classes/:?]
> at
> org.apache.hadoop.hbase.backup.impl.BackupAdminImpl.backupTables(BackupAdminImpl.java:681)
> ~[classes/:?]
> at
> org.apache.hadoop.hbase.backup.TestBackupBase.backupTables(TestBackupBase.java:445)
> ~[test-classes/:?]
> at
> org.apache.hadoop.hbase.backup.TestIncrementalBackupWithContinuous.testMultiTableContinuousBackupWithIncrementalBackupSuccess(TestIncrementalBackupWithContinuous.java:192)
> ~[test-classes/:?]
> at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ~[?:?]
> at
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
> ~[?:?]
> at
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> ~[?:?]
> at java.lang.reflect.Method.invoke(Method.java:569) ~[?:?]
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> ~[junit-4.13.2.jar:4.13.2]
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> ~[junit-4.13.2.jar:4.13.2]
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> ~[junit-4.13.2.jar:4.13.2]
> at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> ~[junit-4.13.2.jar:4.13.2]
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> ~[junit-4.13.2.jar:4.13.2]
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> ~[junit-4.13.2.jar:4.13.2]
> at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> ~[junit-4.13.2.jar:4.13.2]
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> ~[junit-4.13.2.jar:4.13.2]
> at
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> ~[junit-4.13.2.jar:4.13.2]
> at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264) ~[?:?]
> at java.util.concurrent.FutureTask.run(FutureTask.java) ~[?:?]
> at java.lang.Thread.run(Thread.java:840) ~[?:?] {code}
> A full log of a unit test run that reproduces the error has been attached to
> this ticket.
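> One natural way to avoid the collision (an illustrative sketch under assumptions,
> not the committed fix) is to derive a per-table subdirectory under
> {{.tmp/<backupId>}} so each per-table job gets a unique output path; the helper
> below is hypothetical:
> {code:java}
> // Hypothetical helper; the name and layout are illustrative, not the
> // actual HBASE-29891 patch.
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hbase.TableName;
>
> public class PerTableBulkOutput {
>   // <backupRoot>/.tmp/<backupId>/<namespace>/<qualifier> -- distinct per
>   // table, so successive WALPlayer jobs never trip checkOutputSpecs().
>   static Path forTable(String backupRoot, String backupId, TableName tn) {
>     return new Path(backupRoot, ".tmp/" + backupId + "/"
>       + tn.getNamespaceAsString() + "/" + tn.getQualifierAsString());
>   }
> }
> {code}
> Alternatively, the output could be moved aside after each per-table job
> completes, before the next table's job is submitted.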
>
>