gengliangwang commented on code in PR #45621:
URL: https://github.com/apache/spark/pull/45621#discussion_r1532932026


##########
docs/sql-migration-guide.md:
##########
@@ -121,6 +122,8 @@ license: |
  - Since Spark 3.3, the `unbase64` function throws an error for a malformed 
`str` input. Use `try_to_binary(<str>, 'base64')` to tolerate malformed input 
and return NULL instead. In Spark 3.2 and earlier, the `unbase64` function 
returns a best-effort result for a malformed `str` input.
 
  - Since Spark 3.3.1 and 3.2.3, for `SELECT ... GROUP BY a GROUPING SETS 
(b)`-style SQL statements, `grouping__id` returns different values from Apache 
Spark 3.2.0, 3.2.1, 3.2.2, and 3.3.0. It is computed based on the user-given 
group-by expressions plus the grouping set columns. To restore the behavior 
before 3.3.1 and 3.2.3, you can set 
`spark.sql.legacy.groupingIdWithAppendedUserGroupBy`. For details, see 
[SPARK-40218](https://issues.apache.org/jira/browse/SPARK-40218) 
and [SPARK-40562](https://issues.apache.org/jira/browse/SPARK-40562).
+    
+  - In the Spark 3.3, 3.4 and 3.5 releases, when reading Parquet files that 
were not produced by Spark, Parquet timestamp columns with the annotation 
`isAdjustedToUTC = false` are inferred as the TIMESTAMP_NTZ type during schema 
inference. In Spark 3.2 and earlier, these columns are inferred as the 
TIMESTAMP type. To restore the behavior before Spark 3.3, you can set 
`spark.sql.parquet.inferTimestampNTZ.enabled` to `false`. Note that this is a 
behavior change, and it will be disabled after the Spark 4.0 release.

Review Comment:
   Another option is to change the conf in the latest 3.3/3.4/3.5 releases.
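
   For reference, a minimal sketch of how a user would apply the conf documented in the migration note above at the session level. The conf name comes from the note itself; the local master setting, object name, and Parquet path are placeholders assumed for illustration, not part of the PR.

   ```scala
   import org.apache.spark.sql.SparkSession

   object ParquetTimestampInferenceExample {
     def main(args: Array[String]): Unit = {
       // Local session only for the sake of a self-contained example.
       val spark = SparkSession.builder()
         .appName("parquet-timestamp-ntz-inference")
         .master("local[*]")
         .getOrCreate()

       // With this conf set to false, Parquet timestamp columns annotated with
       // isAdjustedToUTC = false are inferred as TIMESTAMP instead of
       // TIMESTAMP_NTZ, matching the Spark 3.2 behavior described above.
       spark.conf.set("spark.sql.parquet.inferTimestampNTZ.enabled", "false")

       // Placeholder path for Parquet data written by a system other than Spark.
       val df = spark.read.parquet("/path/to/external_parquet")
       df.printSchema()

       spark.stop()
     }
   }
   ```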


