Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20023#discussion_r161502866
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/DecimalPrecision.scala ---
    @@ -42,8 +43,10 @@ import org.apache.spark.sql.types._
      *   e1 / e2      p1 - s1 + s2 + max(6, s1 + p2 + 1)      max(6, s1 + p2 + 1)
      *   e1 % e2      min(p1-s1, p2-s2) + max(s1, s2)         max(s1, s2)
      *   e1 union e2  max(s1, s2) + max(p1-s1, p2-s2)         max(s1, s2)
    - *   sum(e1)      p1 + 10                                 s1
    - *   avg(e1)      p1 + 4                                  s1 + 4
    + *
    + * When `spark.sql.decimalOperations.allowTruncat` is set to true, if the precision / scale needed
    + * are out of the range of available values, the scale is reduced to a minimum of 6, in order to prevent the
    --- End diff --
    
    Does any open source RDBMS have this rule?
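
    For reference, a minimal Scala sketch of the quoted e1 / e2 rule together with
    the scale reduction the new doc describes (the object, constant, and helper
    names below are illustrative assumptions, not the PR's actual code):

        // Hypothetical sketch, not Spark's actual DecimalPrecision implementation.
        object DecimalPrecisionSketch {
          val MaxPrecision = 38     // Spark's DecimalType.MAX_PRECISION
          val MinAdjustedScale = 6  // minimum scale kept when truncating, per the quoted doc

          // e1 / e2: precision = p1 - s1 + s2 + max(6, s1 + p2 + 1)
          //          scale     = max(6, s1 + p2 + 1)
          def divideType(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) = {
            val scale = math.max(6, s1 + p2 + 1)
            val precision = p1 - s1 + s2 + scale
            adjust(precision, scale)
          }

          // If the needed precision does not fit, keep the integral digits and
          // shrink the scale, but not below MinAdjustedScale (unless it already was).
          private def adjust(precision: Int, scale: Int): (Int, Int) =
            if (precision <= MaxPrecision) {
              (precision, scale)
            } else {
              val intDigits = precision - scale
              val adjustedScale =
                math.max(MaxPrecision - intDigits, math.min(scale, MinAdjustedScale))
              (MaxPrecision, adjustedScale)
            }
        }

    For example, Decimal(38, 10) / Decimal(38, 10) needs (87, 49), which does not
    fit; with truncation allowed, DecimalPrecisionSketch.divideType(38, 10, 38, 10)
    returns (38, 6) instead of failing.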

