sadikovi commented on code in PR #37419:
URL: https://github.com/apache/spark/pull/37419#discussion_r938492838


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:
##########
@@ -228,6 +228,13 @@ class ParquetFileFormat
       SQLConf.PARQUET_TIMESTAMP_NTZ_ENABLED.key,
       sparkSession.sessionState.conf.parquetTimestampNTZEnabled)
 
+    // See PARQUET-2170.
+    // Disable column index optimisation when required schema is empty so we get the correct
+    // row count from parquet-mr.
+    if (requiredSchema.isEmpty) {

Review Comment:
   No, this is not required for DSv2.
   
   The test works in DSv2 due to another inconsistency: Parquet DSv2 does not
consider the full file schema when creating pushdown filters. There is a check
in FileScanBuilder that ignores partition columns, so in this case the schema
is empty, no filters are pushed down, and the correct number of records is
returned. This is rather a performance inefficiency in DSv2, since the entire
file will be scanned, but the result will be correct (see the sketch below).
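   
   To illustrate the behaviour described above, here is a simplified,
hypothetical sketch; the names below are illustrative and do not quote the
actual FileScanBuilder code. Only filters whose column survives into the read
data schema are pushed down, and partition columns are excluded from that
schema.
   
```scala
object PushdownSketch {
  // Hypothetical helper mirroring the behaviour described above: partition
  // columns are dropped from the read schema, and only filters referencing
  // columns in that schema are pushed down to the Parquet reader.
  def pushableDataFilters(
      requiredColumns: Seq[String],
      partitionColumns: Set[String],
      filters: Seq[(String, Any)]): Seq[(String, Any)] = {
    val readSchema = requiredColumns.filterNot(partitionColumns.contains)
    filters.filter { case (col, _) => readSchema.contains(col) }
  }
}
```
   
   With an empty read schema this returns no filters, so parquet-mr simply
counts all rows in the file: correct, but the whole file is scanned.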
   
   I thought about fixing this the same way DSv2 handles it, but that is a much
bigger change, as it would affect not just this case but others as well. I hope
my explanation makes sense; let me know your thoughts.
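   
   For reference, the DSv1 guard in the hunk above would amount to something
like the sketch below. The body of the `if` is truncated in the hunk, so the
parquet-mr flag used here is an assumption rather than the verbatim change.
   
```scala
import org.apache.hadoop.conf.Configuration
import org.apache.parquet.hadoop.ParquetInputFormat
import org.apache.spark.sql.types.StructType

object EmptySchemaGuardSketch {
  // Hypothetical helper: when nothing from the file is projected, disable
  // parquet-mr column index filtering so the scan reports the correct row
  // count (PARQUET-2170).
  def disableColumnIndexIfEmptySchema(
      hadoopConf: Configuration,
      requiredSchema: StructType): Unit = {
    if (requiredSchema.isEmpty) {
      // The flag name is assumed; the actual body of the `if` is not shown above.
      hadoopConf.setBoolean(ParquetInputFormat.COLUMN_INDEX_FILTERING_ENABLED, false)
    }
  }
}
```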


