danny0405 commented on code in PR #12269:
URL: https://github.com/apache/hudi/pull/12269#discussion_r1845272992


##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##########
@@ -759,6 +769,78 @@ public static HoodieData<HoodieRecord> convertMetadataToColumnStatsRecords(Hoodi
         });
   }
 
+  static HoodieData<HoodieRecord> convertMetadataToRecordIndexRecords(HoodieEngineContext engineContext,
+                                                                      HoodieCommitMetadata commitMetadata,
+                                                                      HoodieMetadataConfig metadataConfig,
+                                                                      HoodieTableMetaClient dataTableMetaClient,
+                                                                      int writesFileIdEncoding,
+                                                                      String instantTime) {
+
+    List<HoodieWriteStat> allWriteStats = commitMetadata.getPartitionToWriteStats().values().stream()
+        .flatMap(Collection::stream).collect(Collectors.toList());
+
+    if (allWriteStats.isEmpty()) {
+      return engineContext.emptyHoodieData();
+    }
+
+    try {
+      int parallelism = Math.max(Math.min(allWriteStats.size(), metadataConfig.getRecordIndexMaxParallelism()), 1);
+      String basePath = dataTableMetaClient.getBasePath().toString();
+      // we might need to set some additional variables if we need to process log files.
+      boolean anyLogFilesWithDeleteBlocks = allWriteStats.stream().anyMatch(writeStat -> {
+        String fileName = FSUtils.getFileName(writeStat.getPath(), writeStat.getPartitionPath());
+        return FSUtils.isLogFile(fileName) && writeStat.getNumInserts() == 0 && writeStat.getNumUpdateWrites() == 0 && writeStat.getNumDeletes() > 0;
+      });
+      Option<String> latestCommitTimestamp = Option.empty();
+      Option<Schema> writerSchemaOpt = Option.empty();
+      if (anyLogFilesWithDeleteBlocks) {
+        // if we have a log file w/ pure deletes.
+        latestCommitTimestamp = Option.of(dataTableMetaClient.getActiveTimeline().getCommitsTimeline().lastInstant().get().getCompletionTime());
+        writerSchemaOpt = tryResolveSchemaForTable(dataTableMetaClient);
+      }
+      int maxBufferSize = metadataConfig.getMaxReaderBufferSize();
+      StorageConfiguration storageConfiguration = dataTableMetaClient.getStorageConf();
+      Option<Schema> finalWriterSchemaOpt = writerSchemaOpt;
+      Option<String> finalLatestCommitTimestamp = latestCommitTimestamp;
+      HoodieData<HoodieRecord> recordIndexRecords = engineContext.parallelize(allWriteStats, parallelism)
+          .flatMap(writeStat -> {
+            HoodieStorage storage = HoodieStorageUtils.getStorage(new StoragePath(writeStat.getPath()), storageConfiguration);
+            // handle base files
+            if (writeStat.getPath().endsWith(HoodieFileFormat.PARQUET.getFileExtension())) {

Review Comment:
   So only writes that produce parquet base files get RLI support here, e.g. the BLOOM index or the bucket index on a COW table.
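
   For readers following from the list archive, here is a minimal, hypothetical sketch of the two predicates the diff and the comment above hinge on: whether a write stat points at a parquet base file (the only case that reaches the base-file branch shown), and whether a log file carries pure deletes. Class and method names are taken from the diff itself; the package imports and the helper class are assumptions for illustration, not part of the PR.

    import org.apache.hudi.common.fs.FSUtils;
    import org.apache.hudi.common.model.HoodieFileFormat;
    import org.apache.hudi.common.model.HoodieWriteStat;

    // Illustrative helper, not part of the PR.
    class RecordIndexEligibilitySketch {

      // A write stat only enters the base-file branch shown in the diff when the
      // written file is a parquet base file.
      static boolean isParquetBaseFile(HoodieWriteStat writeStat) {
        return writeStat.getPath().endsWith(HoodieFileFormat.PARQUET.getFileExtension());
      }

      // Mirrors the anyLogFilesWithDeleteBlocks predicate in the diff: a log file
      // whose stats report no inserts and no updates, only deletes.
      static boolean isPureDeleteLogFile(HoodieWriteStat writeStat) {
        String fileName = FSUtils.getFileName(writeStat.getPath(), writeStat.getPartitionPath());
        return FSUtils.isLogFile(fileName)
            && writeStat.getNumInserts() == 0
            && writeStat.getNumUpdateWrites() == 0
            && writeStat.getNumDeletes() > 0;
      }
    }

   Under that reading, writes whose base files are not parquet would not feed the record index through this path, which is what the comment above is flagging.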



