MaxGekk commented on code in PR #36821:
URL: https://github.com/apache/spark/pull/36821#discussion_r893296714


##########
docs/sql-migration-guide.md:
##########
@@ -65,6 +65,8 @@ license: |
   - Since Spark 3.3, when reading values from a JSON attribute defined as 
`FloatType` or `DoubleType`, the strings `"+Infinity"`, `"+INF"`, and `"-INF"` 
are now parsed to the appropriate values, in addition to the already supported 
`"Infinity"` and `"-Infinity"` variations. This change was made to improve 
consistency with Jackson's parsing of the unquoted versions of these values. 
Also, the `allowNonNumericNumbers` option is now respected so these strings 
will now be considered invalid if this option is disabled.
 
  - Since Spark 3.3, Spark will try to use the built-in data source writer instead 
of Hive serde in `INSERT OVERWRITE DIRECTORY`. This behavior is effective only 
if `spark.sql.hive.convertMetastoreParquet` or 
`spark.sql.hive.convertMetastoreOrc` is enabled respectively for Parquet and 
ORC formats. To restore the behavior before Spark 3.3, you can set 
`spark.sql.hive.convertMetastoreInsertDir` to `false`.
+  
+  - Since Spark 3.3, the precision of the return type of round-like functions 
has been fixed. This may cause Spark to throw a `CANNOT_UP_CAST_DATATYPE` 
exception when using views created by prior versions. In such cases, you need 
to recreate the views using `ALTER VIEW AS` or `CREATE OR REPLACE VIEW AS` with 
a newer Spark version.

Review Comment:
   nit: more precisely: ... may cause Spark to throw an `AnalysisException` of 
the `CANNOT_UP_CAST_DATATYPE` error class
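
   To illustrate the recreation step the new entry describes (a sketch only; 
the view name `v`, table `t`, and column `col` are hypothetical, not from the 
PR), an affected view could be refreshed on a newer Spark version with either 
statement:

   ```sql
   -- Redefine the existing view in place; Spark recompiles the view body
   -- and stores the corrected return-type precision of round().
   ALTER VIEW v AS SELECT round(col, 2) AS r FROM t;

   -- Equivalent one-statement drop-and-recreate of the view.
   CREATE OR REPLACE VIEW v AS SELECT round(col, 2) AS r FROM t;
   ```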



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

