the-other-tim-brown commented on code in PR #13098:
URL: https://github.com/apache/hudi/pull/13098#discussion_r2033299498
##########
hudi-common/src/main/java/org/apache/hudi/common/table/timeline/MetadataConversionUtils.java:
##########
@@ -338,6 +338,15 @@ public static HoodieArchivedMetaEntry createMetaWrapperForEmptyInstant(HoodieIns
     return archivedMetaWrapper;
   }
+  private static <T extends HoodieCommitMetadata> Option<T> getArchivedCommitMetadata(HoodieTableMetaClient metaClient, HoodieInstant instant, Class<T> clazz) throws IOException {
+    T commitMetadata = metaClient.getArchivedTimeline().readInstantContent(instant, clazz);
Review Comment:
Why does this re-fetch the data instead of using the `instantDetails` that
are already present above?
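A hedged sketch of what the reviewer is suggesting — deserializing the bytes the caller already holds instead of issuing a second read. The `instantDetails` parameter and the `deserializeCommitMetadata` helper are assumptions about the surrounding code, not confirmed APIs:

```java
// Sketch only: reuse the serialized content already fetched by the caller
// rather than re-reading the instant from the archived timeline.
// `instantDetails` and `deserializeCommitMetadata` are hypothetical names
// modeled on the surrounding method.
private static <T extends HoodieCommitMetadata> Option<T> getArchivedCommitMetadata(
    Option<byte[]> instantDetails, Class<T> clazz) throws IOException {
  if (!instantDetails.isPresent()) {
    return Option.empty();
  }
  // Deserialize in place; no extra round trip to storage.
  return Option.of(deserializeCommitMetadata(instantDetails.get(), clazz));
}
```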
##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/io/TestHoodieTimelineArchiver.java:
##########
@@ -742,6 +745,37 @@ public void testReadArchivedCompactionPlan() throws Exception {
   }
 }
+  @Test
+  public void testDowngradeArchivedTimeline() throws Exception {
+    HoodieWriteConfig writeConfig = initTestTableAndGetWriteConfig(true, 1, 2, 5, HoodieTableType.COPY_ON_WRITE);
+
+    // do ingestion and trigger archive actions here.
+    Map<String, Integer> cleanStats = new HashMap<>();
+    cleanStats.put("p1", 1);
+    cleanStats.put("p2", 2);
+    for (int i = 1; i < 11; i += 2) {
+      if (i == 3) {
+        testTable.doCluster(String.format("%08d", i), Collections.emptyMap(), Arrays.asList("p1", "p2"), 20);
+      } else {
+        testTable.doWriteOperation(String.format("%08d", i), WriteOperationType.UPSERT, i == 1 ? Arrays.asList("p1", "p2") : Collections.emptyList(), Arrays.asList("p1", "p2"), 2);
+        testTable.doClean(String.format("%08d", i + 1), cleanStats, Collections.emptyMap());
+      }
+    }
+ archiveAndGetCommitsList(writeConfig);
Review Comment:
Can you inspect the output of this method to confirm that at least one of
each commit type is written out to the archive timeline? Otherwise we may
inadvertently miss some cases.
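One hedged way to add the check the reviewer is asking for — asserting that every action type exercised above actually landed in the archived timeline. The exact accessors are assumptions about the test harness and the Hudi timeline API in this branch:

```java
// Sketch only: confirm the archive contains at least one instant of each
// action type the test produced. Accessor names are assumptions.
Set<String> archivedActions = metaClient.getArchivedTimeline().getInstants()
    .map(HoodieInstant::getAction)
    .collect(Collectors.toSet());
// Writes, cleans, and clustering (replacecommit) should all be archived.
assertTrue(archivedActions.containsAll(Arrays.asList(
    HoodieTimeline.COMMIT_ACTION,
    HoodieTimeline.CLEAN_ACTION,
    HoodieTimeline.REPLACE_COMMIT_ACTION)));
```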
##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/io/TestHoodieTimelineArchiver.java:
##########
@@ -742,6 +745,37 @@ public void testReadArchivedCompactionPlan() throws Exception {
   }
 }
+  @Test
+  public void testDowngradeArchivedTimeline() throws Exception {
+    HoodieWriteConfig writeConfig = initTestTableAndGetWriteConfig(true, 1, 2, 5, HoodieTableType.COPY_ON_WRITE);
+
+    // do ingestion and trigger archive actions here.
+    Map<String, Integer> cleanStats = new HashMap<>();
+    cleanStats.put("p1", 1);
+    cleanStats.put("p2", 2);
+    for (int i = 1; i < 11; i += 2) {
+      if (i == 3) {
+        testTable.doCluster(String.format("%08d", i), Collections.emptyMap(), Arrays.asList("p1", "p2"), 20);
+      } else {
+        testTable.doWriteOperation(String.format("%08d", i), WriteOperationType.UPSERT, i == 1 ? Arrays.asList("p1", "p2") : Collections.emptyList(), Arrays.asList("p1", "p2"), 2);
+        testTable.doClean(String.format("%08d", i + 1), cleanStats, Collections.emptyMap());
+      }
Review Comment:
What about other commit types like compactions and rollbacks?
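A hedged sketch of extra loop branches that would cover those action types, modeled on the `HoodieTestTable` helpers already used in the test. The instant-number choices are illustrative, the helper signatures are assumptions, and compaction instants would additionally require a MERGE_ON_READ table rather than COPY_ON_WRITE:

```java
// Sketch only: additional branches to exercise compaction and rollback.
if (i == 5) {
  // Compaction only applies to MERGE_ON_READ tables; the test would need
  // a MOR variant for this branch to produce a compaction instant.
  testTable.doCompaction(String.format("%08d", i), Arrays.asList("p1", "p2"));
} else if (i == 7) {
  // Roll back the previous write so a rollback instant enters the timeline.
  testTable.doRollback(String.format("%08d", i - 2), String.format("%08d", i));
}
```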
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]