Github user mgaido91 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22450#discussion_r218758428
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/DecimalPrecision.scala ---
    @@ -129,16 +129,17 @@ object DecimalPrecision extends TypeCoercionRule {
             resultType)
     
         case Divide(e1 @ DecimalType.Expression(p1, s1), e2 @ DecimalType.Expression(p2, s2)) =>
    +      val adjP2 = if (s2 < 0) p2 - s2 else p2
    --- End diff --
    
    Yes, I think this is explained more clearly in the related JIRA description
and comments. The problem is that we have never properly handled decimals with a
negative scale here. The point is: before 2.3, this could happen only if someone
created a specific literal from a BigDecimal, like `lit(BigDecimal(100e6))`;
since 2.3, it can happen with any constant like `100e6` in SQL code. So the
problem has been there for a while, but we hadn't seen it because it was much
less likely to occur.
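    
    Just to make the negative scale concrete, here is a quick plain-Scala
illustration of the `lit(BigDecimal(100e6))` case (no Spark involved; the exact
printed values depend on how the JDK stringifies the double, but they should
look like this):
    
    ```scala
    // 100e6 is the double 1.0E8; BigDecimal stores it as unscaledValue * 10^(-scale),
    // so it can be kept as 10 with scale -7, i.e. a negative scale.
    val bd = BigDecimal(100e6)
    println(bd.underlying.unscaledValue)  // 10
    println(bd.scale)                     // -7
    println(bd.precision)                 // 2
    ```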
    
    Another solution would be to avoid having decimals with a negative scale at
all. But that is quite a breaking change, so I'd avoid it until at least a 3.0
release.
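    
    For reference, this is roughly how the adjustment fits the Hive-style rule
documented in `DecimalPrecision` (scale = max(6, s1 + p2 + 1), precision =
p1 - s1 + s2 + scale). The sketch below only illustrates the idea, it is not the
actual rule: the `adjusted` helper is hypothetical, treating an operand with
negative scale s as if it were decimal(p - s, 0), which is what
`adjP2 = if (s2 < 0) p2 - s2 else p2` is about; the real rule also bounds the
result to at most 38 digits, which is omitted here.
    
    ```scala
    object DivideTypeSketch {
      // Hypothetical helper: a decimal(p, s) with s < 0 holds integers with up
      // to p - s digits, so treat it as decimal(p - s, 0) for type computation.
      def adjusted(p: Int, s: Int): (Int, Int) =
        if (s < 0) (p - s, 0) else (p, s)
    
      // Result type of e1 / e2 per the documented formula, with the adjustment
      // applied to the divisor. No capping at MAX_PRECISION (38) is done here.
      def divideResultType(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) = {
        val (ap2, as2) = adjusted(p2, s2)
        val scale = math.max(6, s1 + ap2 + 1)
        val precision = p1 - s1 + as2 + scale
        (precision, scale)
      }
    
      def main(args: Array[String]): Unit = {
        // e1: an 11-digit integer, decimal(11, 0); e2: the SQL literal 100e6,
        // which since 2.3 is parsed as a decimal with negative scale, e.g.
        // decimal(3, -6). Without the adjustment the result scale would be
        // max(6, 0 + 3 + 1) = 6, so 12345678901 / 100e6 = 123.45678901 would
        // lose its last two digits.
        println(divideResultType(11, 0, 3, -6))  // prints (21,10)
      }
    }
    ```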


---
