EnricoMi commented on code in PR #38312:
URL: https://github.com/apache/spark/pull/38312#discussion_r1001088325
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaConverter.scala:
##########
@@ -271,6 +271,8 @@ class ParquetToSparkSchemaConverter(
} else {
TimestampNTZType
}
+ case timestamp: TimestampLogicalTypeAnnotation if timestamp.getUnit == TimeUnit.NANOS =>
Review Comment:
Yes, the earlier existing behaviour was useful and should be restored until
nanosecond timestamps are properly supported as a dedicated type, unless a
workaround can be found that restores it.
Does providing a schema (`spark.read.schema(...)`) with a long type override
the Parquet timestamp type from the Parquet file? Would that be a workaround?
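A minimal sketch of the workaround being asked about, assuming a Parquet file
with a column `ts` annotated as a nanosecond timestamp (the file path and
column name here are placeholders, and whether the reader actually honours a
`LongType` override for an annotated timestamp column is exactly the open
question):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{LongType, StructField, StructType}

object NanoTimestampWorkaround {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("nanos-as-long")
      .master("local[*]")
      .getOrCreate()

    // "ts" is assumed to be a Parquet column annotated TIMESTAMP(NANOS, ...).
    // Supplying an explicit schema with LongType instead of letting Spark
    // infer a timestamp type from the Parquet logical annotation.
    val schema = StructType(Seq(StructField("ts", LongType)))

    val df = spark.read
      .schema(schema)                    // user-provided schema override
      .parquet("/path/to/nanos.parquet") // placeholder path

    df.show()
    spark.stop()
  }
}
```

If this works, the nanosecond values would surface as raw longs, which the
caller could then interpret themselves; if the converter still rejects the
NANOS annotation before the override applies, it is not a viable workaround.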