yabola commented on code in PR #40495:
URL: https://github.com/apache/spark/pull/40495#discussion_r1143525196


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetPartitionReaderFactory.scala:
##########
@@ -92,8 +93,13 @@ case class ParquetPartitionReaderFactory(
     if (aggregation.isEmpty) {
       ParquetFooterReader.readFooter(conf, filePath, SKIP_ROW_GROUPS)
     } else {
+      val split = new FileSplit(file.toPath, file.start, file.length, Array.empty[String])
       // For aggregate push down, we will get max/min/count from footer statistics.
-      ParquetFooterReader.readFooter(conf, filePath, NO_FILTER)

Review Comment:
   @LuciferYang @huaxingao oh~ thank you! Yes, using `NO_FILTER` is not a problem here.
   
https://github.com/apache/spark/blob/df2e2516188b46537349aa7a5f279de6141c6450/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetScan.scala#L50-L57
   
    But do you think it would always be safer to use the file range? I have also made some changes here and want to unify them: https://github.com/apache/spark/pull/39950
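
    For reference, a minimal sketch of what a range-restricted footer read could look like, using only parquet-mr's public API (`ParquetMetadataConverter.range` and `ParquetFileReader.readFooter`); the helper name `readFooterForSplit` is hypothetical and not the code in this PR:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.format.converter.ParquetMetadataConverter
import org.apache.parquet.hadoop.ParquetFileReader
import org.apache.parquet.hadoop.metadata.ParquetMetadata

// Hypothetical helper: read footer metadata only for row groups whose midpoint
// falls inside the split's byte range [start, start + length), instead of
// deserializing metadata for every row group in the file as NO_FILTER does.
def readFooterForSplit(
    conf: Configuration,
    filePath: Path,
    start: Long,
    length: Long): ParquetMetadata = {
  ParquetFileReader.readFooter(
    conf, filePath, ParquetMetadataConverter.range(start, start + length))
}
```

    With `NO_FILTER` every task sees metadata for all row groups in the file, while the range filter keeps only the row groups belonging to that task's split, which is presumably why restricting the read to the file range is considered the safer choice here.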


