Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/22450
  
    My major concerns are:
    1. How do we prove `Divide` is the only operator that has problems with negative scale?
    2. How do we prove the fix here is correct and covers all the corner cases?
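    For readers unfamiliar with the term, here is a minimal sketch of how a negative scale arises, using plain `java.math.BigDecimal` with no Spark dependency (the class name `NegativeScaleDemo` and the comments are mine, not from the PR):

    ```java
    import java.math.BigDecimal;

    public class NegativeScaleDemo {
        public static void main(String[] args) {
            // A literal like 1E+10 is stored as unscaled value 1 with scale -10:
            // the trailing zeros live in the (negative) scale, not in the digits.
            BigDecimal d = new BigDecimal("1E+10");
            // precision counts only the single stored digit '1'
            System.out.println("scale=" + d.scale() + ", precision=" + d.precision());
            // prints: scale=-10, precision=1
        }
    }
    ```

    Spark's `Decimal(precision, scale)` type admits the same representation, which is why an operator whose result-type rule implicitly assumes `scale >= 0` can go wrong on such literals.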
    
    I'm reconsidering this proposal:
    ```
    Do you think it makes sense for 
spark.sql.decimalOperations.allowPrecisionLoss to also toggle how literal 
promotion happens (the old way vs. the new way)?
    ``` 

