This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
     new e26db0153ac [SPARK-39226][DOCS][FOLLOWUP] Update the migration guide after fixing the precision of the return type of round-like functions
e26db0153ac is described below

commit e26db0153ac25ae1b1759eb780da70dbd598b0d4
Author: Yuming Wang <yumw...@ebay.com>
AuthorDate: Thu Jun 9 16:43:31 2022 +0300

    [SPARK-39226][DOCS][FOLLOWUP] Update the migration guide after fixing the precision of the return type of round-like functions
    
    ### What changes were proposed in this pull request?
    
    Update the migration guide after fixing the precision of the return type of round-like functions.
    
    How to reproduce this issue:
    ```sql
    -- Spark 3.2
    CREATE TABLE t1(CURNCY_AMT DECIMAL(18,6)) using parquet;
    CREATE VIEW v1 AS SELECT BROUND(CURNCY_AMT, 6) AS CURNCY_AMT FROM t1;
    ```
    ```sql
    -- Spark 3.3
    SELECT * FROM v1;
    org.apache.spark.sql.AnalysisException: [CANNOT_UP_CAST_DATATYPE] Cannot up cast CURNCY_AMT from "DECIMAL(19,6)" to "DECIMAL(18,6)".
    ```
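
    As a minimal sketch of the workaround described in the updated migration guide (assuming the same `t1` and `v1` objects from the snippet above), recreating the view with the newer Spark version picks up the corrected return type:
    ```sql
    -- Spark 3.3: redefine the view so its schema matches the new DECIMAL(19,6) return type
    CREATE OR REPLACE VIEW v1 AS SELECT BROUND(CURNCY_AMT, 6) AS CURNCY_AMT FROM t1;
    -- ALTER VIEW v1 AS SELECT BROUND(CURNCY_AMT, 6) AS CURNCY_AMT FROM t1; also works
    SELECT * FROM v1;
    ```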
    
    ### Why are the changes needed?
    
    Update the migration guide.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    N/A
    
    Closes #36821 from wangyum/SPARK-39226.
    
    Authored-by: Yuming Wang <yumw...@ebay.com>
    Signed-off-by: Max Gekk <max.g...@gmail.com>
    (cherry picked from commit 105379406a371624569ac820e30d45fee3f017fc)
    Signed-off-by: Max Gekk <max.g...@gmail.com>
---
 docs/sql-migration-guide.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/sql-migration-guide.md b/docs/sql-migration-guide.md
index a7757d6c9a0..7601355f6d5 100644
--- a/docs/sql-migration-guide.md
+++ b/docs/sql-migration-guide.md
@@ -65,6 +65,8 @@ license: |
  - Since Spark 3.3, when reading values from a JSON attribute defined as `FloatType` or `DoubleType`, the strings `"+Infinity"`, `"+INF"`, and `"-INF"` are now parsed to the appropriate values, in addition to the already supported `"Infinity"` and `"-Infinity"` variations. This change was made to improve consistency with Jackson's parsing of the unquoted versions of these values. Also, the `allowNonNumericNumbers` option is now respected so these strings will now be considered invalid i [...]
 
  - Since Spark 3.3, Spark will try to use built-in data source writer instead of Hive serde in `INSERT OVERWRITE DIRECTORY`. This behavior is effective only if `spark.sql.hive.convertMetastoreParquet` or `spark.sql.hive.convertMetastoreOrc` is enabled respectively for Parquet and ORC formats. To restore the behavior before Spark 3.3, you can set `spark.sql.hive.convertMetastoreInsertDir` to `false`.
+  
+  - Since Spark 3.3, the precision of the return type of round-like functions has been fixed. This may cause Spark to throw `AnalysisException` of the `CANNOT_UP_CAST_DATATYPE` error class when using views created by prior versions. In such cases, you need to recreate the views using ALTER VIEW AS or CREATE OR REPLACE VIEW AS with newer Spark versions.
 
 ## Upgrading from Spark SQL 3.1 to 3.2
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
