Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20023#discussion_r162246614
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1795,6 +1795,11 @@ options.
     
  - Since Spark 2.3, when all inputs are binary, SQL `elt()` returns its output as binary. Otherwise, it returns a string. Before Spark 2.3, it always returned a string regardless of input types. To keep the old behavior, set `spark.sql.function.eltOutputAsString` to `true`.
     
    + - Since Spark 2.3, arithmetic operations between decimals by default return a rounded value if an exact representation is not possible. This is compliant with the SQL standard and with Hive's behavior as introduced in HIVE-15331. This involves the following changes:
    +    - The rules for determining the result type of an arithmetic operation have been updated. In particular, if the required precision/scale is outside the representable range, the scale is reduced, but to no fewer than 6 digits, in order to prevent truncation of the integer part of the decimals.
    +    - Literal values used in SQL operations are converted to DECIMAL with exactly the precision and scale they need.
    +    - The configuration `spark.sql.decimalOperations.allowPrecisionLoss` has been introduced. It defaults to `true`, which enables the new behavior described here; when set to `false`, Spark uses the previous rules and behavior.
    --- End diff --
    
    At the very least, we need to say that NULL will be returned in this case.
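    For illustration, here is a rough Python sketch (not Spark's actual code) of the adjustment rule the proposed doc text describes, assuming the Hive-style limits: a maximum precision of 38 and a minimum preserved scale of 6 when precision loss is allowed. Names like `adjust_precision_scale` are hypothetical.

    ```python
    # Sketch of the documented precision-loss rule, NOT Spark's implementation.
    # Assumed constants: max DECIMAL precision 38, minimum retained scale 6.
    MAX_PRECISION = 38
    MIN_ADJUSTED_SCALE = 6

    def adjust_precision_scale(precision, scale):
        """Return (precision, scale) after the loss-allowing adjustment.

        If the required precision exceeds the maximum, reduce the scale
        (but keep at least min(scale, 6) fractional digits) so the integer
        part of the decimal is never truncated.
        """
        if precision <= MAX_PRECISION:
            # Already representable: nothing to adjust.
            return precision, scale
        int_digits = precision - scale
        min_scale = min(scale, MIN_ADJUSTED_SCALE)
        adjusted_scale = max(MAX_PRECISION - int_digits, min_scale)
        return MAX_PRECISION, adjusted_scale

    # Example: DECIMAL(38,10) * DECIMAL(9,0) would require DECIMAL(48,10);
    # under this rule the result type becomes DECIMAL(38,6).
    print(adjust_precision_scale(48, 10))
    ```

    When the integer part alone needs more than 38 - 6 digits, even the reduced scale cannot make the value fit, which is the case where NULL is returned.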

