the-other-tim-brown commented on code in PR #13383:
URL: https://github.com/apache/hudi/pull/13383#discussion_r2128919371


##########
hudi-spark-datasource/hudi-spark/src/test/java/org/apache/hudi/client/functional/TestMetadataUtilRLIandSIRecordGeneration.java:
##########
@@ -701,4 +709,36 @@ private void parseRecordKeysFromBaseFiles(List<WriteStatus> writeStatuses, Map<S
       }
     });
   }
+
+  Set<String> getRecordKeys(String partition, String baseInstantTime, String fileId, List<StoragePath> logFilePaths, HoodieTableMetaClient datasetMetaClient,
+                                   Option<Schema> writerSchemaOpt, String latestCommitTimestamp) throws IOException {
+    if (writerSchemaOpt.isPresent()) {
+      // read log file records without merging
+      FileSlice fileSlice = new FileSlice(partition, baseInstantTime, fileId);
+      logFilePaths.forEach(logFilePath -> {
+        HoodieLogFile logFile = new HoodieLogFile(logFilePath);
+        fileSlice.addLogFile(logFile);
+      });
+      TypedProperties properties = new TypedProperties();
+      // configure un-merged log file reader
+      properties.setProperty(HoodieReaderConfig.MERGE_TYPE.key(), REALTIME_SKIP_MERGE);

Review Comment:
   Let me try this out; if it works, I'll remove those changes to keep the scope of this PR as lean as possible.
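For context on the `REALTIME_SKIP_MERGE` setting in the diff above: a merged read resolves each record key to its latest version, while a skip-merge read returns every log entry as-is. The sketch below illustrates that difference with simplified stand-in types; the `LogEntry` record and the two read methods are illustrative only and are not Hudi's actual classes or APIs.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch of merged vs. skip-merge read semantics.
// LogEntry is a simplified stand-in for a log-file record, not a Hudi class.
public class SkipMergeSketch {
  record LogEntry(String key, String value, long instant) {}

  // Merged view: keep only the latest entry per record key.
  static List<LogEntry> mergedRead(List<LogEntry> entries) {
    Map<String, LogEntry> latest = new LinkedHashMap<>();
    for (LogEntry e : entries) {
      latest.merge(e.key(), e, (a, b) -> a.instant() >= b.instant() ? a : b);
    }
    return new ArrayList<>(latest.values());
  }

  // Skip-merge view: return every log entry without deduplication,
  // which is the behavior an un-merged log file reader exposes.
  static List<LogEntry> skipMergeRead(List<LogEntry> entries) {
    return new ArrayList<>(entries);
  }

  public static void main(String[] args) {
    List<LogEntry> log = List.of(
        new LogEntry("k1", "v1", 1L),
        new LogEntry("k1", "v2", 2L),   // update of k1
        new LogEntry("k2", "v1", 1L));
    System.out.println("merged=" + mergedRead(log).size());       // 2 distinct keys
    System.out.println("skipMerge=" + skipMergeRead(log).size()); // all 3 entries
  }
}
```

A test that parses record keys from un-merged log files therefore sees one entry per write, including superseded versions, which matters when validating RLI/SI record generation per write status.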



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]