LuciferYang commented on a change in pull request #30652:
URL: https://github.com/apache/spark/pull/30652#discussion_r537996738



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetScanBuilder.scala
##########
@@ -50,7 +50,7 @@ case class ParquetScanBuilder(
     val pushDownInFilterThreshold = sqlConf.parquetFilterPushDownInFilterThreshold
     val isCaseSensitive = sqlConf.caseSensitiveAnalysis
     val parquetSchema =
-      new SparkToParquetSchemaConverter(sparkSession.sessionState.conf).convert(schema)

Review comment:
       This is a good question, but it seems the filters pushed down in DataSource V1 do not include filters on partition columns either.
   
   The `dataFilters` used to construct `FileSourceScanExec` and passed to `ParquetFileFormat.buildReaderWithPartitionValues` to build the `pushed` filters also have partition filters removed, am I right?
   
   
https://github.com/apache/spark/blob/e4d1c10760800563d2a30410b46e5b0cd2671c4d/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategy.scala#L184-L193
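   
   To make the comparison concrete, here is a minimal standalone sketch of the split those `FileSourceStrategy` lines perform. This is not Spark's actual code: `Filter`, `referencedColumns`, and the sample predicates are hypothetical stand-ins for Catalyst expressions, and the real planner additionally splits conjunctions and can extract the data-only part of a mixed predicate, which this sketch does not do.
   
   ```scala
   // Sketch: predicates referencing only partition columns are kept for
   // partition pruning; everything else is a data filter, and only data
   // filters are handed to the Parquet reader as `pushed` filters.
   object FilterSplitSketch {
     // Toy predicate: a description plus the column names it references.
     final case class Filter(description: String, referencedColumns: Set[String])
   
     def splitFilters(
         filters: Seq[Filter],
         partitionColumns: Set[String]): (Seq[Filter], Seq[Filter]) =
       // Unlike real FileSourceStrategy, classify each predicate whole.
       filters.partition(_.referencedColumns.subsetOf(partitionColumns))
   
     def main(args: Array[String]): Unit = {
       val filters = Seq(
         Filter("dt = '2020-12-08'", Set("dt")),                   // partition column only
         Filter("id > 10", Set("id")),                             // data column only
         Filter("dt > '2020-01-01' AND id > 10", Set("dt", "id"))  // mixed
       )
       val (partitionFilters, dataFilters) = splitFilters(filters, Set("dt"))
       println(s"partition filters: ${partitionFilters.map(_.description)}")
       println(s"data filters:      ${dataFilters.map(_.description)}")
       // => partition filters: List(dt = '2020-12-08')
       // => data filters:      List(id > 10, dt > '2020-01-01' AND id > 10)
     }
   }
   ```
   
   Under that assumption, a predicate over `dt` alone never reaches the Parquet pushdown path in either V1 or V2, which would match the V1 behavior linked above.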