vinothchandar commented on a change in pull request #4060:
URL: https://github.com/apache/hudi/pull/4060#discussion_r757243343



##########
File path: hudi-common/src/main/java/org/apache/hudi/common/util/ParquetUtils.java
##########
@@ -286,95 +287,103 @@ public Boolean apply(String recordKey) {
 
   /**
    * Parse min/max statistics stored in parquet footers for all columns.
-   * ParquetRead.readFooter is not a thread safe method.
-   *
-   * @param conf hadoop conf.
-   * @param parquetFilePath file to be read.
-   * @param cols cols which need to collect statistics.
-   * @return a HoodieColumnRangeMetadata instance.
    */
-  public Collection<HoodieColumnRangeMetadata<Comparable>> readRangeFromParquetMetadata(
-      Configuration conf,
-      Path parquetFilePath,
-      List<String> cols) {
+  public List<HoodieColumnRangeMetadata<Comparable>> readRangeFromParquetMetadata(
+      @Nonnull Configuration conf,
+      @Nonnull Path parquetFilePath,
+      @Nonnull List<String> cols
+  ) {
     ParquetMetadata metadata = readMetadata(conf, parquetFilePath);
-    // collect stats from all parquet blocks
-    Map<String, List<HoodieColumnRangeMetadata<Comparable>>> columnToStatsListMap = metadata.getBlocks().stream().flatMap(blockMetaData -> {
-      return blockMetaData.getColumns().stream().filter(f -> cols.contains(f.getPath().toDotString())).map(columnChunkMetaData -> {
-        String minAsString;
-        String maxAsString;
-        if (columnChunkMetaData.getPrimitiveType().getOriginalType() == OriginalType.DATE) {
-          synchronized (lock) {
-            minAsString = columnChunkMetaData.getStatistics().minAsString();
-            maxAsString = columnChunkMetaData.getStatistics().maxAsString();
-          }
-        } else {
-          minAsString = columnChunkMetaData.getStatistics().minAsString();
-          maxAsString = columnChunkMetaData.getStatistics().maxAsString();
-        }
-        return new HoodieColumnRangeMetadata<>(parquetFilePath.getName(), columnChunkMetaData.getPath().toDotString(),
-            columnChunkMetaData.getStatistics().genericGetMin(),
-            columnChunkMetaData.getStatistics().genericGetMax(),
-            columnChunkMetaData.getStatistics().getNumNulls(),
-            minAsString, maxAsString);
-      });
-    }).collect(Collectors.groupingBy(e -> e.getColumnName()));
-
-    // we only intend to keep file level statistics.
-    return new ArrayList<>(columnToStatsListMap.values().stream()
-        .map(blocks -> getColumnRangeInFile(blocks))
-        .collect(Collectors.toList()));
+    // Collect stats from all individual Parquet blocks
+    Map<String, List<HoodieColumnRangeMetadata<Comparable>>> columnToStatsListMap =
Review comment:
       Some of these points can be subjective, and I still think the threading adds too many lines. But this has been agreed upon in the project before, so I suggest sticking to what is already done in the project's PRs and decoupling these discussions to the mailing list to drive consensus before changing course.
   
   Changing parts of the code that are touched in different ways increases maintenance overhead and makes for an inconsistent experience.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
