cloud-fan commented on a change in pull request #28593:
URL: https://github.com/apache/spark/pull/28593#discussion_r434305068



##########
File path: docs/sql-migration-guide.md
##########
@@ -27,7 +27,9 @@ license: |
   - In Spark 3.1, grouping_id() returns long values. In Spark version 3.0 and 
earlier, this function returns int values. To restore the behavior before Spark 
3.0, you can set `spark.sql.legacy.integerGroupingId` to `true`.
 
   - In Spark 3.1, SQL UI data adopts the `formatted` mode for the query plan 
explain results. To restore the behavior before Spark 3.0, you can set 
`spark.sql.ui.explainMode` to `extended`.
-
+  
+  - In Spark 3.1, casting numeric to timestamp will be forbidden by 
default, user can enable it by setting 
spark.sql.legacy.allowCastNumericToTimestamp to true, and 
functions(TIMESTAMP_SECONDS/TIMESTAMP_MILLIS/TIMESTAMP_MICROS) are strongly 
recommended to avoid possible inaccurate scenes, 
[SPARK-31710](https://issues.apache.org/jira/browse/SPARK-31710) for more 
details.

Review comment:
       ```
   In Spark 3.1, casting numeric to timestamp will be forbidden by default. 
It's strongly recommended to
   use dedicated functions: TIMESTAMP_SECONDS, TIMESTAMP_MILLIS and 
TIMESTAMP_MICROS. Or you
   can set `spark.sql.legacy.allowCastNumericToTimestamp` to true to work 
around it. See more details
   in SPARK-31710.
   ```
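
   For illustration, a minimal sketch of the migration path described above, assuming a Spark 3.1 build with the cast restriction proposed in this PR (the app name, `local[*]` master, and epoch values below are placeholders, not part of the PR):

   ```scala
   // Sketch only: assumes the numeric-to-timestamp cast restriction proposed in this PR is in effect.
   import org.apache.spark.sql.SparkSession

   val spark = SparkSession.builder()
     .appName("numeric-to-timestamp-example") // placeholder app name
     .master("local[*]")                      // placeholder master for a local run
     .getOrCreate()

   // Recommended: use the dedicated conversion functions instead of CAST.
   spark.sql("SELECT TIMESTAMP_SECONDS(1609459200) AS ts").show()        // seconds since epoch
   spark.sql("SELECT TIMESTAMP_MILLIS(1609459200000) AS ts").show()      // milliseconds since epoch
   spark.sql("SELECT TIMESTAMP_MICROS(1609459200000000L) AS ts").show()  // microseconds since epoch

   // Legacy workaround: re-enable the old cast behavior for this session.
   spark.conf.set("spark.sql.legacy.allowCastNumericToTimestamp", "true")
   spark.sql("SELECT CAST(1609459200 AS TIMESTAMP) AS ts").show()
   ```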



