GitHub user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14389
One problem with decimal types in Spark SQL is that the wider type of two decimal
types may be illegal (it can exceed the system limit), in which case we have to
truncate and suffer precision loss. This forces us to make decisions about which
functions can accept precision loss and which cannot.
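To make the failure mode concrete, here is a minimal sketch (not Spark's actual implementation) assuming a common widening rule that keeps the larger scale plus enough integral digits for either operand, and a precision cap of 38 digits as in Spark SQL's `DecimalType`:

```scala
// Minimal sketch (assumed widening rule, not Spark's actual code):
// widen Decimal(p1, s1) and Decimal(p2, s2) by keeping the larger scale
// and enough integral digits to hold either operand's integral part.
object DecimalWideningSketch {
  val MaxPrecision = 38 // Spark SQL's DecimalType precision cap

  def widen(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) = {
    val scale = math.max(s1, s2)
    val integralDigits = math.max(p1 - s1, p2 - s2)
    (integralDigits + scale, scale)
  }

  def main(args: Array[String]): Unit = {
    // Both inputs are legal on their own, but the ideal wider type is
    // Decimal(76, 38), which exceeds the cap and must be truncated,
    // losing precision somewhere.
    val (p, s) = widen(38, 0, 38, 38)
    println(s"ideal wider type: Decimal($p, $s), exceeds cap: ${p > MaxPrecision}")
  }
}
```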
Unfortunately, this is not a common problem (e.g. MySQL and Postgres don't have
it), so we don't have many similar systems to compare against and follow.
MySQL's decimal type's max scale is half of its max precision, so the wider
type of two decimal types in MySQL will never exceed the system limit.
Postgres has an effectively unlimited decimal type, so it doesn't have this
problem at all.
I think MySQL's design is a good one to follow. cc @rxin @marmbrus @yhuai,
what do you think?