cloud-fan commented on code in PR #45956:
URL: https://github.com/apache/spark/pull/45956#discussion_r1557623773
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala:
##########
@@ -196,9 +196,13 @@ object JdbcUtils extends Logging with SQLConfHelper {
case java.sql.Types.CHAR => CharType(precision)
case java.sql.Types.CLOB => StringType
case java.sql.Types.DATE => DateType
- case java.sql.Types.DECIMAL if precision != 0 || scale != 0 =>
- DecimalType.bounded(precision, scale)
- case java.sql.Types.DECIMAL => DecimalType.SYSTEM_DEFAULT
+ case java.sql.Types.DECIMAL | java.sql.Types.NUMERIC if precision != 0 || scale != 0 =>
+ if (scale < 0) {
+ DecimalPrecision.bounded(precision - scale, 0)
Review Comment:
Actually, I think these two changes are orthogonal. To support negative
scale, we set the scale to 0, and then it doesn't matter how we truncate
the precision, since the scale is already 0.
For non-negative scale, we could use a different way of truncating the
precision, but that is better done in a separate PR, as it's a behavior
change.
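To make the negative-scale case concrete, here is a minimal standalone sketch (not the actual JdbcUtils code; `MAX_PRECISION`, `bounded` and `mapDecimal` are hypothetical stand-ins for Spark's `DecimalType.bounded` and the JDBC type-mapping logic). A JDBC column such as Oracle `NUMBER(5,-2)` holds values like 12300: 5 significant digits shifted left by 2 places. Folding the negative scale into the precision (`precision - scale`) yields an integral decimal wide enough for every such value.

```scala
object NegativeScaleSketch {
  // Spark caps decimals at 38 digits (DecimalType.MAX_PRECISION).
  val MaxPrecision = 38

  // Mirrors the idea of DecimalType.bounded: clamp to the supported maximum.
  def bounded(precision: Int, scale: Int): (Int, Int) =
    (math.min(precision, MaxPrecision), math.min(scale, MaxPrecision))

  // Hypothetical version of the mapping under discussion: a negative scale
  // is turned into 0 by widening the precision accordingly.
  def mapDecimal(precision: Int, scale: Int): (Int, Int) =
    if (scale < 0) bounded(precision - scale, 0) // e.g. (5, -2) -> (7, 0)
    else bounded(precision, scale)

  def main(args: Array[String]): Unit = {
    // NUMBER(5,-2): max value 99999 * 100 = 9,990,000+, i.e. 7 digits.
    println(mapDecimal(5, -2))  // (7, 0)
    println(mapDecimal(10, 2))  // unchanged: (10, 2)
  }
}
```

With the scale already forced to 0 in this branch, any later change to how precision is truncated for non-negative scales would not affect it, which is the sense in which the two changes are orthogonal.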
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]