kbuci commented on code in PR #10605:
URL: https://github.com/apache/hudi/pull/10605#discussion_r1475398790
##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/io/TestHoodieTimelineArchiver.java:
##########
@@ -1582,6 +1582,36 @@ public void testPendingClusteringAfterArchiveCommit(boolean enableMetadata) thro
         "Since we have a pending clustering instant at 00000002, we should never archive any commit after 00000000");
   }
+  @Test
+  public void testRetryArchivalAfterPreviousFailedDeletion() throws Exception {
+    HoodieWriteConfig writeConfig = initTestTableAndGetWriteConfig(true, 2, 4, 2);
+    for (int i = 0; i <= 5; i++) {
+      testTable.doWriteOperation("10" + i, WriteOperationType.UPSERT, Arrays.asList("p1", "p2"), 1);
+    }
+    HoodieTable table = HoodieSparkTable.create(writeConfig, context, metaClient);
+    HoodieTimelineArchiver archiver = new HoodieTimelineArchiver(writeConfig, table);
+
+    HoodieTimeline timeline = metaClient.getActiveTimeline().getWriteTimeline();
+    assertEquals(6, timeline.countInstants(), "Loaded 6 commits and the count should match");
+    assertTrue(archiver.archiveIfRequired(context) > 0);
+    // Simulate archival failing to delete by re-adding the .commit instant files
+    // (101.commit, 102.commit, and 103.commit instant files)
+    HoodieTestDataGenerator.createOnlyCompletedCommitFile(basePath, "101_1001", wrapperFs.getConf());
Review Comment:
   Unfortunately this is a bit of a hack. Ideally we would somehow induce a failure during the DFS delete call in `org.apache.hudi.client.timeline.HoodieTimelineArchiver#deleteArchivedInstants`, but instead here we have to re-add some completed commit instant files.
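   For illustration only, one way to induce the delete failure directly (rather than re-adding instant files) is to route deletes through a flaky decorator. The `InstantStorage` interface and `FlakyDeleteStorage` class below are hypothetical stand-ins, not Hudi APIs; they only sketch the fault-injection pattern the reviewer describes — fail the first N deletes, then delegate so a retry succeeds:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Hypothetical minimal storage interface standing in for the DFS the archiver deletes from.
   interface InstantStorage {
       boolean delete(String instantFile);
   }

   // Decorator that fails the first N delete calls, simulating a transient DFS
   // failure during archival deletion, then delegates normally so a retry succeeds.
   class FlakyDeleteStorage implements InstantStorage {
       private final InstantStorage delegate;
       private int failuresRemaining;

       FlakyDeleteStorage(InstantStorage delegate, int failuresToInject) {
           this.delegate = delegate;
           this.failuresRemaining = failuresToInject;
       }

       @Override
       public boolean delete(String instantFile) {
           if (failuresRemaining > 0) {
               failuresRemaining--;
               return false; // injected failure: file is NOT deleted
           }
           return delegate.delete(instantFile);
       }
   }

   public class FlakyDeleteDemo {
       public static void main(String[] args) {
           List<String> files = new ArrayList<>(List.of("101.commit", "102.commit"));
           InstantStorage real = files::remove; // List.remove(Object) returns boolean
           InstantStorage flaky = new FlakyDeleteStorage(real, 1);

           boolean first = flaky.delete("101.commit");  // fails (injected)
           boolean second = flaky.delete("101.commit"); // retry succeeds
           System.out.println(first + " " + second + " " + files.contains("101.commit"));
       }
   }
   ```

   In a real test the same effect could be achieved by wrapping or mocking the `FileSystem` used by the archiver, which avoids mutating the timeline on disk.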