GitHub user rick-ibm commented on the pull request:
https://github.com/apache/spark/pull/8963#issuecomment-154125587
A decimal value with precision 7 and scale 0 must raise an out-of-range
error when stored into a decimal column with precision 7 and scale 2. The
value has 7 digits to the left of the decimal point, but the column datatype
accepts only 5 digits there (precision minus scale). That is my reading of
the behavior defined by the 2011 SQL Standard, part 2, section 9.2 (store
assignment), general rule 3.b.xv. If Oracle allows some non-Standard treatment
when storing decimal values and Spark must support that behavior, then I think
an Oracle-specific JdbcDialect should be created. The solution in this
pull request does not look correct to me for other SQL dialects. Thanks.
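
A minimal sketch of the range check the Standard describes, using plain
java.math.BigDecimal values; the helper name fitsColumn is hypothetical
and not Spark's actual code path:

```scala
import java.math.{BigDecimal => JBigDecimal, RoundingMode}

// A DECIMAL(precision, scale) column admits at most (precision - scale)
// digits to the left of the decimal point. Trailing fractional digits
// may be rounded or truncated; losing leading digits is an error.
def fitsColumn(value: JBigDecimal, precision: Int, scale: Int): Boolean = {
  val rescaled = value.setScale(scale, RoundingMode.HALF_UP)
  rescaled.precision - rescaled.scale <= precision - scale
}

// 1234567 has 7 integer digits; DECIMAL(7,2) allows only 5.
assert(!fitsColumn(new JBigDecimal("1234567"), 7, 2))
// 12345.67 fits: 5 integer digits, 2 fractional digits.
assert(fitsColumn(new JBigDecimal("12345.67"), 7, 2))
```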