MaxGekk commented on a change in pull request #27710: [SPARK-30960][SQL] add
back the legacy date/timestamp format support in CSV/JSON parser
URL: https://github.com/apache/spark/pull/27710#discussion_r384584040
##########
File path:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/csv/UnivocityParser.scala
##########
@@ -175,10 +175,30 @@ class UnivocityParser(
 }
 case _: TimestampType => (d: String) =>
-  nullSafeDatum(d, name, nullable, options)(timestampFormatter.parse)
+  nullSafeDatum(d, name, nullable, options) { datum =>
+    try {
+      timestampFormatter.parse(datum)
+    } catch {
+      case NonFatal(e) =>
+        // If fails to parse, then tries the way used in 2.0 and 1.x for backwards
+        // compatibility.
+        val str = UTF8String.fromString(datum)
+        DateTimeUtils.stringToTimestamp(str, options.zoneId).getOrElse(throw e)
Review comment:
Looking at what was removed there
https://github.com/apache/spark/pull/23150/files#diff-c82e4b74d2a51fed29069745ce4f9e96L164
, `DateTimeUtils.stringToTime` and `DateTimeUtils.stringToTimestamp` are two
different functions, see
https://github.com/MaxGekk/spark/blob/60c0974261c947c0838457c40f4fe0e64d17ca15/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala#L173-L191
`stringToTime` has a few problems: it doesn't respect Spark's session
time zone, and it parses in the hybrid Julian + Gregorian calendar. If you use
it as a fallback, you can get parsed values in different calendars.
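To make the calendar concern concrete, here is a small JDK-only sketch (the class and helper names are illustrative, not from the PR) showing how the same civil date resolves to different physical days under the two calendars involved: `java.time` uses the proleptic Gregorian calendar (what the new Spark 3.0 formatters are built on), while `java.util.GregorianCalendar` defaults to the hybrid Julian/Gregorian calendar (what the legacy `SimpleDateFormat`-era path relied on). For dates before the 1582 cutover the two disagree, so a fallback parser on a different calendar silently shifts ancient dates:

```java
import java.time.LocalDate;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class CalendarDivergence {

    /**
     * Epoch day (days since 1970-01-01) of a civil date as interpreted by
     * java.util.GregorianCalendar, which by default uses the hybrid
     * Julian/Gregorian calendar with a cutover in October 1582.
     */
    static long hybridEpochDay(int year, int month, int dayOfMonth) {
        GregorianCalendar cal = new GregorianCalendar(TimeZone.getTimeZone("UTC"));
        cal.clear();
        cal.set(year, month - 1, dayOfMonth); // Calendar months are 0-based
        return Math.floorDiv(cal.getTimeInMillis(), 86_400_000L);
    }

    public static void main(String[] args) {
        // Proleptic Gregorian calendar, as used by java.time.
        long proleptic = LocalDate.of(1500, 1, 1).toEpochDay();
        // Hybrid calendar: the same civil date lands on a different physical day.
        long hybrid = hybridEpochDay(1500, 1, 1);
        System.out.println("proleptic epoch day: " + proleptic);
        System.out.println("hybrid epoch day:    " + hybrid);
        System.out.println("difference in days:  " + (hybrid - proleptic));
    }
}
```

Dates after the 1582 cutover agree under both calendars, which is why a mixed-calendar fallback can look correct on modern data and only corrupt pre-Gregorian dates.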