AngersZhuuuu commented on a change in pull request #34616:
URL: https://github.com/apache/spark/pull/34616#discussion_r750837788
##########
File path: docs/sql-migration-guide.md
##########

@@ -338,6 +338,12 @@ license: |
 - In Spark 3.0, datetime pattern letter `F` is **aligned day of week in month** that represents the concept of the count of days within the period of a week where the weeks are aligned to the start of the month. In Spark version 2.4 and earlier, it is **week of month** that represents the concept of the count of weeks within the month where weeks start on a fixed day-of-week, e.g. `2020-07-30` is 30 days (4 weeks and 2 days) after the first day of the month, so `date_format(date '2020-07-30', 'F')` returns 2 in Spark 3.0, but as a week count in Spark 2.x, it returns 5 because it locates in the 5th week of July 2020, where week one is 2020-07-01 to 07-04.
+ - In Spark version 2.4 and below, the `spark-sql` interface will chop `\` when the query matches `\;`. In Spark 3.0, the `spark-sql` interface will not chop `\` when the query matches `\;`. See [HIVE-15297](https://issues.apache.org/jira/browse/HIVE-15297) for more details. For example:

Review comment:
   > What is the behavior of `bin/spark-shell`?

   `bin/spark-shell` won't be impacted by this.
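A minimal, illustrative sketch of the CLI-side difference the proposed note describes, assuming a statement typed at the `spark-sql` prompt. The literal and alias below are made up for illustration; the comments only restate the documented handling of `\;` by the CLI (chopping the `\` or not), not the final query output.

```sql
-- Typed at the spark-sql prompt. The `\;` is what keeps the CLI from
-- splitting the statement at the embedded semicolon.
SELECT 'Hello\;World' AS greeting;

-- Spark 2.4 and below: the CLI chops the `\`, so the text handed to the
-- SQL parser contains a bare `;` inside the literal.
-- Spark 3.0: the CLI leaves `\;` intact (see HIVE-15297), so the parser
-- receives the backslash as well.
```

This statement splitting only happens in the `spark-sql` command-line interface; `bin/spark-shell` takes Scala input and never goes through that splitter, which is presumably why it is not impacted, as noted in the reply above.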
