Github user henryr commented on a diff in the pull request:
https://github.com/apache/spark/pull/19769#discussion_r151830046
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -355,17 +362,33 @@ class ParquetFileFormat
fileSplit.getLocations,
null)
+ val sharedConf = broadcastedHadoopConf.value.value
+ // PARQUET_INT96_TIMESTAMP_CONVERSION says to apply timezone conversions to int96 timestamps
+ // *only* if the file was created by something other than "parquet-mr", so check the actual
+ // writer here for this file. We have to do this per-file, as each file in the table may
+ // have different writers.
+ def isCreatedByParquetMr(): Boolean = {
+ val footer = ParquetFileReader.readFooter(sharedConf, fileSplit.getPath, SKIP_ROW_GROUPS)
--- End diff --
Does it make more sense to have VectorizedParquetRecordReader() do this in
initialize() rather than here? There doesn't seem to be a way to share footer
metadata across record readers, which would help avoid calling readFooter()
multiple times per file.
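To illustrate the alternative, here is a minimal sketch using hypothetical stand-in types (`Footer`, `RecordReaderSketch` are not Spark's or Parquet's actual classes): read the footer once per file, derive the parquet-mr flag from its `created_by` string, and pass the result into the reader up front so the reader never has to re-read the footer itself:

```scala
// Hypothetical stand-in for Parquet's FileMetaData: files written by
// parquet-mr carry a created_by string starting with "parquet-mr".
case class Footer(createdBy: String)

def isCreatedByParquetMr(footer: Footer): Boolean =
  footer.createdBy.startsWith("parquet-mr")

// Hypothetical reader that receives the flag at construction time, so the
// (relatively expensive) footer read happens once per file rather than once
// per record reader.
class RecordReaderSketch(val convertInt96Timezone: Boolean) {
  def initialize(): Unit = {
    // ... would consult convertInt96Timezone when decoding INT96 values ...
  }
}

// Read the footer once, then share the derived flag.
val footer = Footer("parquet-mr version 1.8.2 (build abc)")
val reader = new RecordReaderSketch(
  convertInt96Timezone = !isCreatedByParquetMr(footer))
```

This is only a sketch of the trade-off under discussion; the actual decision in the PR turns on whether footer metadata can be shared across record reader instances at all.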
---