yabola commented on code in PR #40495: URL: https://github.com/apache/spark/pull/40495#discussion_r1142841503
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetPartitionReaderFactory.scala:
##########
@@ -92,8 +93,13 @@ case class ParquetPartitionReaderFactory(
     if (aggregation.isEmpty) {
       ParquetFooterReader.readFooter(conf, filePath, SKIP_ROW_GROUPS)
     } else {
+      val split = new FileSplit(file.toPath, file.start, file.length, Array.empty[String])
       // For aggregate push down, we will get max/min/count from footer statistics.
-      ParquetFooterReader.readFooter(conf, filePath, NO_FILTER)

Review Comment:
   @huaxingao Hi~ While reviewing this code I had a question: why not read only the RowGroup metadata that falls within the current file split's range here, instead of the whole footer with `NO_FILTER`? Here is an example in `SpecificParquetRecordReaderBase`:
   https://github.com/apache/spark/blob/61035129a354d0b31c66908106238b12b1f2f7b0/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java#L96-L102
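   For illustration, a minimal sketch of the alternative this question points at: pass a range filter built from the split instead of `NO_FILTER`, the way `SpecificParquetRecordReaderBase` restricts reads to its split. The helper name `readFooterForSplit` and its parameters are hypothetical; only `ParquetFooterReader.readFooter` (shown in the diff above) and `ParquetMetadataConverter.range` from the Parquet API are taken as given, and import paths are assumptions.

   ```scala
   import org.apache.hadoop.conf.Configuration
   import org.apache.hadoop.fs.Path
   import org.apache.parquet.format.converter.ParquetMetadataConverter
   import org.apache.parquet.hadoop.metadata.ParquetMetadata
   import org.apache.spark.sql.execution.datasources.parquet.ParquetFooterReader

   // Hypothetical helper, not the PR's actual change: read the footer with a range
   // filter so only row groups whose midpoints lie in [start, start + length) are
   // kept, i.e. the row groups that belong to this task's split.
   def readFooterForSplit(
       conf: Configuration,
       filePath: Path,
       start: Long,
       length: Long): ParquetMetadata = {
     ParquetFooterReader.readFooter(
       conf, filePath, ParquetMetadataConverter.range(start, start + length))
   }
   ```

   With a filter like this, the aggregate push down path would compute max/min/count from the statistics of just this split's row groups rather than from every row group in the file.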