cloud-fan commented on issue #22450: [SPARK-25454][SQL] Avoid precision loss in division with decimal with negative scale
URL: https://github.com/apache/spark/pull/22450#issuecomment-473195334
 
 
   > Many SQL DBs do not have this rule
   
   Can you list some of them? I checked SQL Server, PostgreSQL, MySQL, Presto, and Hive; none of them allows negative scale.
   
   The thing I want to avoid is fixing a specific issue without looking at the big picture. The big picture is: how should Spark define the decimal type?
   
   There are two directions we can go:
   1. Forbid negative scale, following other databases. If we go this way, we should identify the cases that will break and document them in the release notes.
   2. Allow negative scale, but also allow `scale > precision`. I would reject a proposal that allows only negative scale, as its definition would be hard to generalize.
   
   Let's recap the two definitions of the decimal type:
   1. `decimal = unscaledValue * 10^-scale`, where `precision` is the number of digits of the `unscaledValue`.
   2. `precision` is the total number of digits, and `scale` is the number of digits in the fraction part.
   
   If we allow negative scale but not `scale > precision`, the type fits neither of the two definitions (see the sketch below).
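   To make the two definitions concrete, here is a minimal sketch using `java.math.BigDecimal`, which follows definition 1 (and which Spark's `Decimal` can be backed by). Under that definition, negative scale and `scale > precision` are equally natural, which is why allowing one without the other is hard to justify. The object name below is only for illustration.

   ```scala
   import java.math.{BigDecimal => JBigDecimal, BigInteger}

   object DecimalDefinitionSketch {
     def main(args: Array[String]): Unit = {
       // Definition 1: decimal = unscaledValue * 10^-scale, and precision is the
       // number of digits of unscaledValue. java.math.BigDecimal uses this
       // definition, so a negative scale is representable:
       val negScale = new JBigDecimal(BigInteger.valueOf(123), -2)      // 123 * 10^2 = 12300
       println(s"$negScale precision=${negScale.precision} scale=${negScale.scale}")
       // prints: 1.23E+4 precision=3 scale=-2

       // ... and so is scale > precision:
       val smallFraction = new JBigDecimal(BigInteger.valueOf(123), 5)  // 123 * 10^-5 = 0.00123
       println(s"$smallFraction precision=${smallFraction.precision} scale=${smallFraction.scale}")
       // prints: 0.00123 precision=3 scale=5

       // Definition 2 (precision = total digits, scale = fraction digits) only
       // makes sense when 0 <= scale <= precision, which is what the databases
       // listed above enforce.
     }
   }
   ```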
   
   That said, I'm OK with either direction, but not something in the middle. Personally I'd prefer the first one, since that's what other mainstream databases do.
