nsivabalan commented on code in PR #12050:
URL: https://github.com/apache/hudi/pull/12050#discussion_r1799915139


##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##########
@@ -2054,48 +2080,53 @@ private static Stream<HoodieRecord> collectAndProcessColumnMetadata(
             .map(entry -> FileFormatUtils.getColumnRangeInPartition(partitionPath, entry.getValue()));
 
     // Create Partition Stats Records
-    return HoodieMetadataPayload.createPartitionStatsRecords(partitionPath, partitionStatsRangeMetadata.collect(Collectors.toList()), false);
+    return HoodieMetadataPayload.createPartitionStatsRecords(partitionPath, partitionStatsRangeMetadata.collect(Collectors.toList()), false, isTightBound);
   }
 
   public static HoodieData<HoodieRecord> convertFilesToPartitionStatsRecords(HoodieEngineContext engineContext,
                                                                              List<DirectoryInfo> partitionInfoList,
                                                                              HoodieMetadataConfig metadataConfig,
-                                                                             HoodieTableMetaClient dataTableMetaClient) {
+                                                                             HoodieTableMetaClient dataTableMetaClient,
+                                                                             Option<Schema> writerSchemaOpt) {
+    Lazy<Option<Schema>> lazyWriterSchemaOpt = writerSchemaOpt.isPresent() ? Lazy.eagerly(writerSchemaOpt) : Lazy.lazily(() -> tryResolveSchemaForTable(dataTableMetaClient));
     final List<String> columnsToIndex = getColumnsToIndex(
         metadataConfig.isPartitionStatsIndexEnabled(),
         metadataConfig.getColumnsEnabledForColumnStatsIndex(),
-        Lazy.lazily(() -> tryResolveSchemaForTable(dataTableMetaClient)));
+        lazyWriterSchemaOpt);
     if (columnsToIndex.isEmpty()) {
       LOG.warn("No columns to index for partition stats index");
       return engineContext.emptyHoodieData();
     }
     LOG.debug("Indexing following columns for partition stats index: {}", columnsToIndex);
     // Create records for MDT
     int parallelism = Math.max(Math.min(partitionInfoList.size(), metadataConfig.getPartitionStatsIndexParallelism()), 1);
+    Option<Schema> writerSchema = lazyWriterSchemaOpt.get();
     return engineContext.parallelize(partitionInfoList, parallelism).flatMap(partitionInfo -> {
       final String partitionPath = partitionInfo.getRelativePath();
       // Step 1: Collect Column Metadata for Each File
       List<List<HoodieColumnRangeMetadata<Comparable>>> fileColumnMetadata = partitionInfo.getFileNameToSizeMap().keySet().stream()
-              .map(fileName -> getFileStatsRangeMetadata(partitionPath, partitionPath + "/" + fileName, dataTableMetaClient, columnsToIndex, false))
-              .collect(Collectors.toList());
+          .map(fileName -> getFileStatsRangeMetadata(partitionPath, partitionPath + "/" + fileName, dataTableMetaClient, columnsToIndex, false, true, writerSchemaOpt))

Review Comment:
   Let's add a little more documentation around this.
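
   To make the documentation request concrete, below is a minimal, self-contained sketch of the eager-vs-lazy schema resolution pattern used in the hunk above. The `Lazy` class here is a hypothetical stand-in for Hudi's `org.apache.hudi.common.util.Lazy` (and `Optional` stands in for Hudi's `Option`); only the memoization semantics are meant to match, not the actual source.

```java
import java.util.Optional;
import java.util.function.Supplier;

// Hypothetical stand-in for Hudi's Lazy: memoizes the supplier so schema
// resolution runs at most once, or never when a value was supplied eagerly.
final class Lazy<T> {
  private Supplier<T> initializer;
  private T value;
  private boolean evaluated;

  private Lazy(T value) { this.value = value; this.evaluated = true; }
  private Lazy(Supplier<T> initializer) { this.initializer = initializer; }

  static <T> Lazy<T> eagerly(T value) { return new Lazy<>(value); }
  static <T> Lazy<T> lazily(Supplier<T> initializer) { return new Lazy<>(initializer); }

  synchronized T get() {
    if (!evaluated) {
      value = initializer.get();
      evaluated = true;
      initializer = null; // drop captured state once computed
    }
    return value;
  }
}

public class LazySchemaDemo {
  static int resolveCalls = 0;

  // stands in for an expensive scan of table metadata to infer the schema
  static Optional<String> resolveSchemaFromTable() {
    resolveCalls++;
    return Optional.of("resolved-schema");
  }

  public static void main(String[] args) {
    Optional<String> writerSchemaOpt = Optional.of("writer-schema");
    // same shape as the hunk: wrap eagerly when the writer schema is already
    // known, otherwise defer the table-scan fallback until first use
    Lazy<Optional<String>> lazySchema = writerSchemaOpt.isPresent()
        ? Lazy.eagerly(writerSchemaOpt)
        : Lazy.lazily(LazySchemaDemo::resolveSchemaFromTable);
    System.out.println(lazySchema.get().get()); // writer-schema
    System.out.println(resolveCalls);           // 0 -- fallback never ran
  }
}
```

   The point of the pattern: when callers already hold the writer schema, the fallback resolution (a potentially expensive metadata read) is never executed.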



##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##########
@@ -1215,20 +1204,25 @@ private static Stream<HoodieRecord> getColumnStatsRecords(String partitionPath,
     }
 
     List<HoodieColumnRangeMetadata<Comparable>> columnRangeMetadata =
-        readColumnRangeMetadataFrom(filePartitionPath, datasetMetaClient, columnsToIndex);
+        readColumnRangeMetadataFrom(filePartitionPath, datasetMetaClient, columnsToIndex, false, Option.empty());

Review Comment:
   Oh, it looks like we are ignoring log files during bootstrap, is that right?
   ```
   HoodieTableMetadataUtil.convertFilesToColumnStatsRecords
   ```
   Does it only do this for parquet files?



##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##########
@@ -1215,20 +1204,25 @@ private static Stream<HoodieRecord> getColumnStatsRecords(String partitionPath,
     }
 
     List<HoodieColumnRangeMetadata<Comparable>> columnRangeMetadata =
-        readColumnRangeMetadataFrom(filePartitionPath, datasetMetaClient, columnsToIndex);
+        readColumnRangeMetadataFrom(filePartitionPath, datasetMetaClient, columnsToIndex, false, Option.empty());

Review Comment:
   How can we set this to Option.empty? We might need to compute stats for log files too, right?



##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##########
@@ -1215,20 +1204,25 @@ private static Stream<HoodieRecord> getColumnStatsRecords(String partitionPath,
     }
 
     List<HoodieColumnRangeMetadata<Comparable>> columnRangeMetadata =
-        readColumnRangeMetadataFrom(filePartitionPath, datasetMetaClient, columnsToIndex);
+        readColumnRangeMetadataFrom(filePartitionPath, datasetMetaClient, columnsToIndex, false, Option.empty());

Review Comment:
   Or did we agree that we plan to support bootstrapping these new partitions for MOR only when the table is fully compacted?
   If so, what is the expected behavior as of now? Do we fail the instantiation?



##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java:
##########
@@ -1215,20 +1204,25 @@ private static Stream<HoodieRecord> getColumnStatsRecords(String partitionPath,
     }
 
     List<HoodieColumnRangeMetadata<Comparable>> columnRangeMetadata =
-        readColumnRangeMetadataFrom(filePartitionPath, datasetMetaClient, columnsToIndex);
+        readColumnRangeMetadataFrom(filePartitionPath, datasetMetaClient, columnsToIndex, false, Option.empty());

Review Comment:
   My bad, I was looking at the col stats partition bootstrap (which does have a gap w.r.t. bootstrap).
   I see that for partition stats, we are fixing it in this patch:
   ```
   convertFilesToPartitionStatsRecords
   ```
   We are good.
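
For readers following the thread, here is a rough conceptual sketch of what `convertFilesToPartitionStatsRecords` computes: per-file column ranges rolled up into one partition-level min/max range per column. All names below are illustrative stand-ins (e.g. `ColumnRange` for `HoodieColumnRangeMetadata`), not Hudi's actual types or API.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical stand-in for HoodieColumnRangeMetadata: a column name plus
// the min/max observed for that column in one file.
record ColumnRange(String column, int min, int max) {
  // merging two per-file ranges widens to the union of both
  ColumnRange merge(ColumnRange other) {
    return new ColumnRange(column, Math.min(min, other.min), Math.max(max, other.max));
  }
}

public class PartitionStatsDemo {
  public static void main(String[] args) {
    // column ranges collected from two base files in the same partition
    List<ColumnRange> fileRanges = List.of(
        new ColumnRange("age", 18, 40),
        new ColumnRange("age", 25, 63));

    // roll file-level ranges up into one partition-level range per column
    Map<String, ColumnRange> partitionStats = fileRanges.stream()
        .collect(Collectors.toMap(ColumnRange::column, r -> r, ColumnRange::merge));

    ColumnRange age = partitionStats.get("age");
    System.out.println(age.min() + ".." + age.max()); // prints 18..63
  }
}
```

This is also why the writer schema discussion above matters: to include log files in the roll-up, their ranges must first be decoded, which requires a schema.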



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
