MaxGekk commented on a change in pull request #27441: [SPARK-30668][SQL] 
Support `SimpleDateFormat` patterns in parsing timestamps/dates strings
URL: https://github.com/apache/spark/pull/27441#discussion_r375131044
 
 

 ##########
 File path: docs/sql-migration-guide.md
 ##########
 @@ -67,9 +67,7 @@ license: |
 
  - Since Spark 3.0, the Proleptic Gregorian calendar is used in parsing, 
formatting, and converting dates and timestamps, as well as in extracting 
sub-components such as years and days. Spark 3.0 uses Java 8 API classes from 
the java.time packages that are based on ISO chronology 
(https://docs.oracle.com/javase/8/docs/api/java/time/chrono/IsoChronology.html).
 In Spark version 2.4 and earlier, those operations are performed using the 
hybrid calendar (Julian + Gregorian, see 
https://docs.oracle.com/javase/7/docs/api/java/util/GregorianCalendar.html). 
The changes impact the results for dates before October 15, 1582 (Gregorian) 
and affect the following Spark 3.0 APIs:
 
-    - CSV/JSON datasources use java.time API for parsing and generating 
CSV/JSON content. In Spark version 2.4 and earlier, java.text.SimpleDateFormat 
is used for the same purpose with fallbacks to the parsing mechanisms of Spark 
2.0 and 1.x. For example, `2018-12-08 10:39:21.123` with the pattern 
`yyyy-MM-dd'T'HH:mm:ss.SSS` cannot be parsed since Spark 3.0 because the 
timestamp does not match to the pattern but it can be parsed by earlier Spark 
versions due to a fallback to `Timestamp.valueOf`. To parse the same timestamp 
since Spark 3.0, the pattern should be `yyyy-MM-dd HH:mm:ss.SSS`.
-
-    - The `unix_timestamp`, `date_format`, `to_unix_timestamp`, 
`from_unixtime`, `to_date`, `to_timestamp` functions. New implementation 
supports pattern formats as described here 
https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html
 and performs strict checking of its input. For example, the `2015-07-22 
10:00:00` timestamp cannot be parse if pattern is `yyyy-MM-dd` because the 
parser does not consume whole input. Another example is the `31/01/2015 00:00` 
input cannot be parsed by the `dd/MM/yyyy hh:mm` pattern because `hh` supposes 
hours in the range `1-12`.
+    - Parsing/formatting of timestamp/date strings. This affects the CSV/JSON 
datasources and the `unix_timestamp`, `date_format`, `to_unix_timestamp`, 
`from_unixtime`, `to_date`, `to_timestamp` functions when patterns specified by 
users are used for parsing and formatting. Since Spark 3.0, the conversions are 
based on `java.time.format.DateTimeFormatter`, see 
https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html.
 The new implementation performs strict checking of its input. For example, the 
`2015-07-22 10:00:00` timestamp cannot be parsed if the pattern is `yyyy-MM-dd` 
because the parser does not consume the whole input. Another example: the 
`31/01/2015 00:00` input cannot be parsed by the `dd/MM/yyyy hh:mm` pattern 
because `hh` expects hours in the range `1-12`. In Spark version 2.4 and 
earlier, `java.text.SimpleDateFormat` is used for timestamp/date string 
conversions, and the supported patterns are described in 
https://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html. The 
old behavior can be restored by setting `spark.sql.legacy.timeParser.enabled` 
to `true`.
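
As a side note, the strict-checking difference described in the hunk above can be reproduced in plain Java, outside of Spark. The sketch below (the class name `StrictParsing` and its helper methods are illustrative, not Spark code) feeds the same string and pattern to both parsers: `SimpleDateFormat` stops at the first unmatched character and silently ignores the trailing time, while `DateTimeFormatter` requires the pattern to consume the whole input.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class StrictParsing {
    // True if the legacy, Spark 2.4 style parser (SimpleDateFormat) accepts the input.
    static boolean legacyParses(String input, String pattern) {
        try {
            new SimpleDateFormat(pattern).parse(input);
            return true;
        } catch (ParseException e) {
            return false;
        }
    }

    // True if the Spark 3.0 style parser (java.time) accepts the input.
    static boolean isoParses(String input, String pattern) {
        try {
            LocalDate.parse(input, DateTimeFormatter.ofPattern(pattern));
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String input = "2015-07-22 10:00:00";
        // SimpleDateFormat drops the unmatched trailing " 10:00:00" and "succeeds".
        System.out.println("legacy accepts: " + legacyParses(input, "yyyy-MM-dd"));
        // DateTimeFormatter rejects the input: unparsed text remains after the date.
        System.out.println("java.time accepts: " + isoParses(input, "yyyy-MM-dd"));
    }
}
```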
 
 Review comment:
   Yes, it is related because `SimpleDateFormat` and `DateTimeFormatter` use 
different calendars underneath. Slightly different patterns are just a 
consequence of switching.
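
   To make the calendar difference concrete, the snippet below (the class name `CalendarDiff` and its helpers are mine, for illustration only) converts the same date string to days-since-epoch with both APIs. For dates before October 15, 1582 the hybrid Julian + Gregorian calendar behind `SimpleDateFormat` and the proleptic Gregorian calendar behind `java.time` disagree by several days:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.time.LocalDate;
import java.util.TimeZone;

public class CalendarDiff {
    // Days since 1970-01-01 when the string is parsed with the legacy
    // hybrid Julian+Gregorian calendar (java.util.GregorianCalendar,
    // which SimpleDateFormat uses underneath).
    static long hybridEpochDay(String s) {
        try {
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
            fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
            return fmt.parse(s).getTime() / 86_400_000L;
        } catch (ParseException e) {
            throw new RuntimeException(e);
        }
    }

    // Days since 1970-01-01 under the proleptic Gregorian (ISO) calendar
    // used by java.time.
    static long isoEpochDay(String s) {
        return LocalDate.parse(s).toEpochDay();
    }

    public static void main(String[] args) {
        String date = "1000-01-01";
        long hybrid = hybridEpochDay(date);
        long iso = isoEpochDay(date);
        // The two calendars agree after the Gregorian cutover but
        // diverge for dates before October 15, 1582.
        System.out.println("hybrid: " + hybrid + ", proleptic: " + iso
            + ", shift: " + (hybrid - iso) + " days");
    }
}
```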

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
