Github user mgaido91 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22450#discussion_r218814303
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/DecimalPrecision.scala ---
    @@ -40,10 +40,13 @@ import org.apache.spark.sql.types._
      *   e1 + e2      max(s1, s2) + max(p1-s1, p2-s2) + 1     max(s1, s2)
      *   e1 - e2      max(s1, s2) + max(p1-s1, p2-s2) + 1     max(s1, s2)
      *   e1 * e2      p1 + p2 + 1                             s1 + s2
    - *   e1 / e2      p1 - s1 + s2 + max(6, s1 + p2 + 1)      max(6, s1 + p2 + 1)
    + *   e1 / e2      max(p1-s1+s2, 0) + max(6, s1+adjP2+1)   max(6, s1+adjP2+1)
    --- End diff --
    
    I don't think so: the other DBs whose formula I know are Hive and MS SQL, and they don't allow negative scales, so they don't have this problem. The formula is not actually changed from before; it just handles a negative scale.
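
    For reference, here is a minimal Scala sketch of the quoted rule for e1 / e2. It assumes `adjP2` stands for p2 adjusted for a negative divisor scale (p2 - s2 when s2 < 0, since a decimal(p2, s2) with s2 < 0 can hold up to p2 - s2 integral digits); the diff hunk does not define it, and the real DecimalPrecision code also trims precision and scale on overflow, which is omitted here.

        object DivisionResultType {
          // Spark caps decimals at 38 digits; 6 matches DecimalType.MINIMUM_ADJUSTED_SCALE.
          private val MaxPrecision = 38
          private val MinAdjustedScale = 6

          /** (precision, scale) of e1 / e2 per the quoted formula, before overflow handling. */
          def divide(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) = {
            val adjP2 = if (s2 < 0) p2 - s2 else p2      // assumed meaning of adjP2
            val scale = math.max(MinAdjustedScale, s1 + adjP2 + 1)
            val intDigits = math.max(p1 - s1 + s2, 0)    // integral digits, clamped at 0
            (math.min(intDigits + scale, MaxPrecision), scale)
          }
        }

    For example, decimal(5, 2) / decimal(3, -1) gives adjP2 = 4, scale = max(6, 2 + 4 + 1) = 7, and intDigits = max(5 - 2 - 1, 0) = 2, hence decimal(9, 7), whereas plugging the same inputs into the old formula yields decimal(8, 6).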

