MaxGekk commented on a change in pull request #24181: [SPARK-27242][SQL] Make
formatting TIMESTAMP/DATE literals independent from the default time zone
URL: https://github.com/apache/spark/pull/24181#discussion_r268155001
##########
File path: docs/sql-migration-guide-upgrade.md
##########
@@ -96,13 +96,17 @@ displayTitle: Spark SQL Upgrading Guide
 - The `weekofyear`, `weekday`, `dayofweek`, `date_trunc`,
`from_utc_timestamp`, `to_utc_timestamp`, and `unix_timestamp` functions use
the java.time API for calculating the week number of the year and the day
number of the week, as well as for converting from/to TimestampType values
in the UTC time zone.
 - The JDBC options `lowerBound` and `upperBound` are converted to
TimestampType/DateType values in the same way as casting strings to
TimestampType/DateType values. The conversion is based on the Proleptic
Gregorian calendar and the time zone defined by the SQL config
`spark.sql.session.timeZone`. In Spark version 2.4 and earlier, the
conversion is based on the hybrid calendar (Julian + Gregorian) and on the
default system time zone.
+
+ - Formatting of `TIMESTAMP` and `DATE` literals.
Review comment:
> Is this entry intended to be here ...
Yes, I put the entry here to mention a side effect of the PR: it switches
the calendar to the Proleptic Gregorian one. But the main entry points out
another aspect of the changes - using the SQL config instead of the default
JVM time zone for timestamp formatting.
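
To make the first migration note concrete, a minimal spark-shell sketch; the pre-1582 date is chosen because that is where the Proleptic Gregorian and hybrid calendars can disagree, and no specific output values are asserted here:

```scala
// weekofyear/dayofweek are now computed via the java.time API on the
// Proleptic Gregorian calendar. For dates before the 1582 Julian-to-Gregorian
// switchover, results may differ from Spark 2.4's hybrid-calendar behavior.
spark.sql(
  "SELECT weekofyear(DATE '1500-01-01') AS woy, dayofweek(DATE '1500-01-01') AS dow"
).show()
```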
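For the JDBC note, a hedged sketch of how the bounds are now interpreted; the URL, table, and column names are placeholders, not taken from the PR:

```scala
// lowerBound/upperBound strings are now parsed like CAST('...' AS TIMESTAMP):
// Proleptic Gregorian calendar plus the spark.sql.session.timeZone zone,
// instead of the default JVM time zone used in Spark 2.4 and earlier.
spark.conf.set("spark.sql.session.timeZone", "UTC")
val events = spark.read.format("jdbc")
  .option("url", "jdbc:postgresql://db.example.com/sales") // placeholder URL
  .option("dbtable", "events")                             // placeholder table
  .option("partitionColumn", "event_ts")                   // a TimestampType column
  .option("numPartitions", "8")
  .option("lowerBound", "2019-01-01 00:00:00")             // parsed in the session zone
  .option("upperBound", "2019-12-31 23:59:59")
  .load()
```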
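And the formatting side effect discussed above, sketched under the assumption that the literal is parsed once and then redisplayed after the session time zone changes (the zone names are illustrative):

```scala
// After this change, rendering a TIMESTAMP for display uses
// spark.sql.session.timeZone, not the default JVM time zone.
spark.conf.set("spark.sql.session.timeZone", "UTC")
val df = spark.sql("SELECT TIMESTAMP '2019-03-21 10:00:00' AS ts") // parsed in UTC

df.show(truncate = false) // rendered in UTC: 2019-03-21 10:00:00

spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
df.show(truncate = false) // same instant, rendered in the session zone (e.g. 03:00 PDT)
```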