Davis-Zhang-Onehouse commented on code in PR #13489:
URL: https://github.com/apache/hudi/pull/13489#discussion_r2219926663


##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadata.java:
##########
@@ -162,8 +177,11 @@ private void initIfNeeded() {
 
   @Override
  protected Option<HoodieRecord<HoodieMetadataPayload>> getRecordByKey(String key, String partitionName) {
-    Map<String, HoodieRecord<HoodieMetadataPayload>> recordsByKeys = getRecordsByKeys(Collections.singletonList(key), partitionName);
-    return Option.ofNullable(recordsByKeys.get(key));
+    List<HoodieRecord<HoodieMetadataPayload>> records = getRecordsByKeys(
+        HoodieListData.eager(Collections.singletonList(key)), partitionName, Option.empty())

Review Comment:
   This is because we lack an interface design over the index lookup path.
   The different indexes encode, sort, and look up in the same pattern but with different implementations; we need to address that when we work on the uniform interface.
   
   The extra parameter is added to getRecordsByKeys because this is where other indexes (e.g. col stats, files index) will later plug in their encoding functions, instead of keeping that logic in their individual upstream customized code.
   
   Regarding the non-trivial index interface revision: once I get index join into good shape, I need to work with Ethan on that, per our previous discussion.
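   To make the idea concrete, here is a rough sketch of what such a uniform lookup interface might look like. All names here (`IndexLookup`, `lookupEncoded`, `EchoLookup`) are illustrative assumptions, not types from the Hudi codebase; the point is only that the shared encode -> sort -> lookup pattern is written once, while each index supplies its own encoding and storage read:

   ```java
   import java.util.List;
   import java.util.stream.Collectors;

   // Hypothetical sketch, not Hudi API: each index customizes the two
   // index-specific steps while the overall lookup pattern is shared.
   interface IndexLookup<K, V> {
     // Index-specific: e.g. a col stats or files index supplies its own key encoding.
     String encode(K rawKey);

     // Index-specific: read records for the sorted, encoded keys from storage.
     List<V> lookupEncoded(List<String> sortedEncodedKeys);

     // Shared pattern, written once: encode all keys, sort them, then look up.
     default List<V> lookup(List<K> rawKeys) {
       List<String> sortedEncoded = rawKeys.stream()
           .map(this::encode)
           .sorted()
           .collect(Collectors.toList());
       return lookupEncoded(sortedEncoded);
     }
   }

   // Trivial implementation just to show the flow end to end.
   class EchoLookup implements IndexLookup<String, String> {
     @Override
     public String encode(String rawKey) {
       return "enc:" + rawKey;
     }

     @Override
     public List<String> lookupEncoded(List<String> sortedEncodedKeys) {
       return sortedEncodedKeys; // a real index would hit file slices here
     }
   }
   ```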



##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadata.java:
##########
@@ -219,42 +243,14 @@ public HoodieData<HoodieRecord<HoodieMetadataPayload>> getRecordsByKeyPrefixes(L
     return (shouldLoadInMemory ? HoodieListData.lazy(partitionFileSlices) :
         getEngineContext().parallelize(partitionFileSlices))
         .flatMap(
-            (SerializableFunction<FileSlice, Iterator<HoodieRecord<HoodieMetadataPayload>>>) fileSlice -> {
-              return getByKeyPrefixes(fileSlice, sortedKeyPrefixes, partitionName);
-            });
+            (SerializableFunction<FileSlice, Iterator<HoodieRecord<HoodieMetadataPayload>>>) fileSlice ->
+                getByKeyPrefixes(fileSlice, sortedKeyPrefixes, partitionName));
   }
 
   private Iterator<HoodieRecord<HoodieMetadataPayload>> getByKeyPrefixes(FileSlice fileSlice,
-                                                                         List<String> sortedKeyPrefixes,
+                                                                         List<String> sortedEncodedKeyPrefixes,
                                                                          String partitionName) throws IOException {
-    Option<HoodieInstant> latestMetadataInstant =
-        metadataMetaClient.getActiveTimeline().filterCompletedInstants().lastInstant();
-    String latestMetadataInstantTime =
-        latestMetadataInstant.map(HoodieInstant::requestedTime).orElse(SOLO_COMMIT_TIMESTAMP);
-    Schema schema = HoodieAvroUtils.addMetadataFields(HoodieMetadataRecord.getClassSchema());
-    // Only those log files which have a corresponding completed instant on the dataset should be read
-    // This is because the metadata table is updated before the dataset instants are committed.
-    Set<String> validInstantTimestamps = getValidInstantTimestamps();
-    InstantRange instantRange = InstantRange.builder()
-        .rangeType(InstantRange.RangeType.EXACT_MATCH)
-        .explicitInstants(validInstantTimestamps).build();
-    HoodieReaderContext<IndexedRecord> readerContext = new HoodieAvroReaderContext(
-        storageConf,
-        metadataMetaClient.getTableConfig(),
-        Option.of(instantRange),
-        Option.of(transformKeyPrefixesToPredicate(sortedKeyPrefixes)));
-    HoodieFileGroupReader<IndexedRecord> fileGroupReader = HoodieFileGroupReader.<IndexedRecord>newBuilder()
-        .withReaderContext(readerContext)
-        .withHoodieTableMetaClient(metadataMetaClient)
-        .withLatestCommitTime(latestMetadataInstantTime)
-        .withFileSlice(fileSlice)
-        .withDataSchema(schema)
-        .withRequestedSchema(schema)
-        .withProps(buildFileGroupReaderProperties(metadataConfig))
-        .withStart(0)
-        .withLength(Long.MAX_VALUE)
-        .withShouldUseRecordPosition(false)
-        .build();
+    HoodieFileGroupReader<IndexedRecord> fileGroupReader = buildFileGroupReader(sortedEncodedKeyPrefixes, fileSlice, false);

Review Comment:
   :D



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
