codope commented on code in PR #11124:
URL: https://github.com/apache/hudi/pull/11124#discussion_r1587021674
##########
hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/utils/TestUtils.java:
##########
@@ -118,6 +119,10 @@ public static StreamReadMonitoringFunction getMonitorFunc(Configuration conf) {
  return new StreamReadMonitoringFunction(conf, new Path(basePath), TestConfigurations.ROW_TYPE, 1024 * 1024L, null);
}
+ public static MockStreamingRuntimeContext getMockRuntimeContext() {
+ return new org.apache.hudi.sink.utils.MockStreamingRuntimeContext(false, 4, 0);
Review Comment:
Are the hard-coded values passed to construct `MockStreamingRuntimeContext` sufficient for all tests?
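One hedged way to address this concern, sketched with stand-in types rather than the real Hudi classes: keep the zero-arg convenience factory but add an overload so tests that need different values can pass their own. The parameter names below (`parallelism`, `subtaskIndex`) are assumptions about what the hard-coded `4` and `0` mean, not taken from the Hudi source.

```java
// Illustrative only: stand-in for the real factory in TestUtils.
// The returned String merely stands in for MockStreamingRuntimeContext.
public class MockContextFactorySketch {
    // Convenience factory with the PR's hard-coded defaults.
    static String getMockRuntimeContext() {
        return getMockRuntimeContext(4, 0); // defaults from the PR
    }

    // Overload for tests that need different settings (names assumed).
    static String getMockRuntimeContext(int parallelism, int subtaskIndex) {
        return "ctx(" + parallelism + "," + subtaskIndex + ")";
    }

    public static void main(String[] args) {
        System.out.println(getMockRuntimeContext());
        System.out.println(getMockRuntimeContext(8, 2));
    }
}
```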
##########
hudi-common/src/test/java/org/apache/hudi/common/testutils/HoodieTestTable.java:
##########
@@ -289,6 +289,14 @@ public HoodieTestTable moveInflightCommitToComplete(String instantTime, HoodieCo
return this;
}
+ public void moveCompleteCommitToInflight(String instantTime) throws IOException {
Review Comment:
Is this method also used for MOR testing with compaction? In that case, deleteCommit should be called instead of deleteDeltaCommit, right?
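The distinction raised here can be sketched by dispatching on the instant's action. The action strings `commit` and `deltacommit` are real Hudi timeline action names; the dispatch helper itself is purely illustrative and not from the Hudi codebase.

```java
// Illustrative helper: compaction on a MOR table completes as a "commit"
// instant, so a blanket deleteDeltaCommit would not cover it; the delete
// call should match the instant's action.
public class DeleteByActionSketch {
    static String deleteCallFor(String action) {
        switch (action) {
            case "deltacommit": return "deleteDeltaCommit";
            case "commit":      return "deleteCommit"; // e.g. completed compaction on MOR
            default: throw new IllegalArgumentException("unexpected action: " + action);
        }
    }

    public static void main(String[] args) {
        System.out.println(deleteCallFor("commit"));
        System.out.println(deleteCallFor("deltacommit"));
    }
}
```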
##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/io/TestHoodieTimelineArchiver.java:
##########
@@ -663,7 +665,11 @@ public void testArchivalWithMultiWriters(boolean enableMetadata) throws Exceptio
// do ingestion and trigger archive actions here.
for (int i = 1; i < 30; i++) {
- testTable.doWriteOperation("0000000" + String.format("%02d", i), WriteOperationType.UPSERT, i == 1 ? Arrays.asList("p1", "p2") : Collections.emptyList(), Arrays.asList("p1", "p2"), 2);
+ String instant = metaClient.createNewInstantTime();
+ if (i == 29) {
Review Comment:
maybe declare the number of rounds before this loop and use that variable?
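A minimal sketch of the suggestion, assuming the bound stays at 29 (from the original `i < 30`) and that `numRounds` is a name we would introduce; the Hudi write calls are elided, only the loop shape is shown.

```java
// Hypothetical refactor: name the loop bound so the special case on the
// final round documents itself instead of repeating a magic number.
public class NumRoundsSketch {
    public static void main(String[] args) {
        final int numRounds = 29; // assumed from the original `i < 30` bound
        int lastRoundSeen = -1;
        for (int i = 1; i <= numRounds; i++) {
            boolean isLastRound = (i == numRounds); // replaces the magic `i == 29`
            if (isLastRound) {
                lastRoundSeen = i;
            }
            // ... testTable.doWriteOperation(...) would go here ...
        }
        System.out.println(lastRoundSeen);
    }
}
```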
##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/io/TestHoodieTimelineArchiver.java:
##########
@@ -1100,7 +1106,7 @@ public void testArchiveRollbacksAndCleanTestTable() throws Exception {
testTable.doClean(cleanInstant, partitionToFileDeleteCount);
}
- for (int i = 5; i <= 13; i += 3) {
+ for (int i = 5; i <= 11; i += 2) {
Review Comment:
Can we add a comment above for why we need to jump in steps of 2?
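A purely mechanical illustration of what the changed bounds cover; it deliberately does not supply the missing rationale for the step of 2, which is exactly what the requested comment should explain.

```java
// Shows only which values of i the new loop visits (5, 7, 9, 11);
// the "why" belongs in the comment the review asks for.
public class LoopStepSketch {
    public static void main(String[] args) {
        StringBuilder visited = new StringBuilder();
        for (int i = 5; i <= 11; i += 2) {
            visited.append(i).append(' ');
        }
        System.out.println(visited.toString().trim());
    }
}
```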
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/BaseHoodieTableServiceClient.java:
##########
@@ -798,7 +795,9 @@ protected void archive(HoodieTable table) {
}
try {
final Timer.Context timerContext = metrics.getArchiveCtx();
- // We cannot have unbounded commit files. Archive commits if we have to archive
+ // We cannot have unbounded commit files. Archive commits if we have to archive.
+ // Reload table timeline to reflect the latest commit.
+ table.getMetaClient().reloadActiveTimeline();
Review Comment:
If we reload the timeline here, it is possible that the write client and the table service client end up with different filesystem views. Is that expected? Also, why should we not then reload the timeline for other table services before executing them as well?
##########
hudi-client/hudi-java-client/src/test/java/org/apache/hudi/client/TestJavaHoodieBackedMetadata.java:
##########
@@ -395,25 +397,27 @@ public void testMetadataArchivalCleanConfig(HoodieTableType tableType) throws Ex
.build();
initWriteConfigAndMetatableWriter(writeConfig, true);
- AtomicInteger commitTime = new AtomicInteger(1);
// Trigger 4 regular writes in data table.
+ List<String> instants = new ArrayList<>();
for (int i = 1; i <= 4; i++) {
- doWriteOperation(testTable, "000000" + (commitTime.getAndIncrement()), INSERT);
+ String instant = metaClient.createNewInstantTime();
Review Comment:
Same here, maybe we can declare a variable `numWrites = 4`. The test would be more readable and easier to understand.
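A sketch of the suggested rename; the instant format below is a stand-in, not the real `metaClient.createNewInstantTime()` output.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical refactor: name the write count so the loop reads as
// "do numWrites writes" rather than relying on a bare literal.
public class NumWritesSketch {
    public static void main(String[] args) {
        final int numWrites = 4; // suggested named constant
        List<String> instants = new ArrayList<>();
        for (int i = 1; i <= numWrites; i++) {
            instants.add(String.format("%07d", i)); // stand-in instant time
        }
        System.out.println(instants.size());
    }
}
```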
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]