sadikovi commented on code in PR #37419:
URL: https://github.com/apache/spark/pull/37419#discussion_r938492838
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala:
##########
@@ -228,6 +228,13 @@ class ParquetFileFormat
SQLConf.PARQUET_TIMESTAMP_NTZ_ENABLED.key,
sparkSession.sessionState.conf.parquetTimestampNTZEnabled)
+      // See PARQUET-2170.
+      // Disable column index optimisation when required schema is empty so we get the correct
+      // row count from parquet-mr.
+      if (requiredSchema.isEmpty) {
Review Comment:
No, this is not required for DSv2.
The test passes in DSv2 due to another inconsistency: the Parquet DSv2
implementation filters out the column in the `readDataSchema()` method
because the partition column and the data column match in case-insensitive
mode. The final schema becomes empty, resulting in an empty list of filters
and thus the correct record count. This is rather a performance inefficiency
in DSv2, since the entire file will be scanned; however, the result will be
correct.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]