the-other-tim-brown commented on code in PR #13313:
URL: https://github.com/apache/hudi/pull/13313#discussion_r2101326672


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/index/HoodieIndexUtils.java:
##########
@@ -251,15 +257,52 @@ public static HoodieIndex createUserDefinedIndex(HoodieWriteConfig config) {
    */
   private static <R> HoodieData<HoodieRecord<R>> getExistingRecords(
       HoodieData<Pair<String, String>> partitionLocations, HoodieWriteConfig config, HoodieTable hoodieTable) {
-    final Option<String> instantTime = hoodieTable
-        .getMetaClient()
+    HoodieTableMetaClient metaClient = hoodieTable.getMetaClient();
+    final Option<String> instantTime = metaClient
         .getActiveTimeline() // we need to include all actions and completed
         .filterCompletedInstants()
         .lastInstant()
         .map(HoodieInstant::requestedTime);
-    return partitionLocations.flatMap(p
-        -> new HoodieMergedReadHandle(config, instantTime, hoodieTable, Pair.of(p.getKey(), p.getValue()))
-        .getMergedRecords().iterator());
+    ReaderContextFactory<R> readerContextFactory = hoodieTable.getContext().getReaderContextFactory(metaClient);
+    if (instantTime.isEmpty()) {
+      return hoodieTable.getContext().emptyHoodieData();
+    }
+    return partitionLocations.flatMap(p -> {
+      Option<FileSlice> fileSliceOption = Option.fromJavaOptional(hoodieTable
+          .getHoodieView()
+          .getLatestMergedFileSlicesBeforeOrOn(p.getLeft(), instantTime.get())
+          .filter(fileSlice -> fileSlice.getFileId().equals(p.getRight()))
+          .findFirst());
+      if (fileSliceOption.isEmpty()) {
+        return Collections.emptyIterator();
+      }
+      Schema dataSchema = HoodieAvroUtils.addMetadataFields(new Schema.Parser().parse(config.getWriteSchema()), config.allowOperationMetadataField());
+      Option<InternalSchema> internalSchemaOption = SerDeHelper.fromJson(config.getInternalSchema());
+      FileSlice fileSlice = fileSliceOption.get();
+      HoodieReaderContext<R> readerContext = readerContextFactory.getContext();
+      HoodieFileGroupReader<R> fileGroupReader = HoodieFileGroupReader.<R>newBuilder()
+          .withReaderContext(readerContext)
+          .withHoodieTableMetaClient(metaClient)
+          .withLatestCommitTime(instantTime.get())
+          .withFileSlice(fileSlice)
+          .withDataSchema(dataSchema)
+          .withRequestedSchema(dataSchema)
+          .withInternalSchema(internalSchemaOption)
+          .withShouldUseRecordPosition(false)
+          .withProps(metaClient.getTableConfig().getProps())
+          .build();
+      try {
+        final HoodieRecordLocation currentLocation = new HoodieRecordLocation(fileSlice.getBaseInstantTime(), fileSlice.getFileId());
+        return new CloseableMappingIterator<>(fileGroupReader.getClosableHoodieRecordIterator(), hoodieRecord -> {
+          hoodieRecord.unseal();
+          hoodieRecord.setCurrentLocation(currentLocation);
+          hoodieRecord.seal();

Review Comment:
   I think we still need the unseal and seal steps here, if I am not mistaken. If other parts of the code need HoodieRecords but do not require the location, moving this into the FGReader code would impose that extra work on those call sites as well.
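   For context, the unseal/setCurrentLocation/seal dance exists because a sealed record is meant to reject mutation. A minimal sketch of that guard pattern, assuming a hypothetical `SealableRecord` with a `String` location (an illustrative stand-in, not Hudi's actual `HoodieRecord` API):

```java
import java.util.Objects;

// Hypothetical stand-in for HoodieRecord's seal/unseal guard: mutation is
// only permitted while the record is unsealed.
public class SealableRecord {
    private boolean sealed = true;
    private String currentLocation;

    public void unseal() {
        sealed = false;
    }

    public void seal() {
        sealed = true;
    }

    // Mutators check the seal and fail fast when the record is sealed.
    public void setCurrentLocation(String location) {
        if (sealed) {
            throw new UnsupportedOperationException(
                "record is sealed; call unseal() before mutating");
        }
        this.currentLocation = Objects.requireNonNull(location);
    }

    public String getCurrentLocation() {
        return currentLocation;
    }

    public static void main(String[] args) {
        SealableRecord record = new SealableRecord();
        // Mirrors the mapping in the diff: unseal, tag the location, re-seal.
        record.unseal();
        record.setCurrentLocation("0010/file-1");
        record.seal();
        System.out.println(record.getCurrentLocation()); // prints "0010/file-1"
    }
}
```

   Keeping the unseal/seal pair local to this mapping means only the index path pays the cost of the location tag; callers that never read the location are unaffected.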



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
