danny0405 commented on code in PR #9058:
URL: https://github.com/apache/hudi/pull/9058#discussion_r1243487643
##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/functional/TestHoodieBackedMetadata.java:
##########
@@ -3002,6 +3004,99 @@ public void testOutOfOrderCommits() throws Exception {
validateMetadata(client);
}
+ @Test
+ public void testDeleteWithRecordIndex() throws Exception {
+ init(HoodieTableType.COPY_ON_WRITE, true);
+ HoodieSparkEngineContext engineContext = new HoodieSparkEngineContext(jsc);
+ HoodieWriteConfig writeConfig = getWriteConfigBuilder(true, true, false)
+        .withMetadataConfig(HoodieMetadataConfig.newBuilder().withEnableRecordIndex(true).withMaxNumDeltaCommitsBeforeCompaction(1).build())
+        // In memory index is required for the writestatus to track the written records
+        .withIndexConfig(HoodieIndexConfig.newBuilder().withIndexType(HoodieIndex.IndexType.INMEMORY).build())
Review Comment:
These two options to enable the RLI look weird. For the Spark engine the default
index type is `SIMPLE`, which means that in order to enable RLI, people always need
to configure both of these options. Shouldn't RLI be a new index type then?
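For context, the two-option configuration the review refers to looks roughly like the sketch below. It is reconstructed from the diff above (the builder calls and `getWriteConfigBuilder` helper are taken from the patch, not verified against any particular Hudi version), and a dedicated RLI index type is the reviewer's suggestion, not an existing constant here:

```java
// Sketch of the RLI enablement pattern under review; assumes the Hudi
// classes shown in the diff and is not a standalone runnable example.
HoodieWriteConfig writeConfig = getWriteConfigBuilder(true, true, false)
    // Option 1: enable the record-level index in the metadata table.
    .withMetadataConfig(HoodieMetadataConfig.newBuilder()
        .withEnableRecordIndex(true)
        .withMaxNumDeltaCommitsBeforeCompaction(1)
        .build())
    // Option 2: the write-side index type must still be set separately,
    // since the Spark default (SIMPLE) does not pick up RLI implicitly.
    .withIndexConfig(HoodieIndexConfig.newBuilder()
        .withIndexType(HoodieIndex.IndexType.INMEMORY)
        .build())
    .build();
```

The reviewer's point is that a single dedicated index type could replace this two-step configuration.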
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]