beliefer commented on code in PR #44397:
URL: https://github.com/apache/spark/pull/44397#discussion_r1429631348
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala:
##########
@@ -402,18 +402,11 @@ object JdbcUtils extends Logging with SQLConfHelper {
row.update(pos, null)
}
-    // When connecting with Oracle DB through JDBC, the precision and scale of BigDecimal
-    // object returned by ResultSet.getBigDecimal is not correctly matched to the table
-    // schema reported by ResultSetMetaData.getPrecision and ResultSetMetaData.getScale.
-    // If inserting values like 19999 into a column with NUMBER(12, 2) type, you get through
-    // a BigDecimal object with scale as 0. But the dataframe schema has correct type as
-    // DecimalType(12, 2). Thus, after saving the dataframe into parquet file and then
-    // retrieve it, you will get wrong result 199.99.
-    // So it is needed to set precision and scale for Decimal based on JDBC metadata.
    case DecimalType.Fixed(p, s) =>
      (rs: ResultSet, row: InternalRow, pos: Int) =>
        val decimal =
-          nullSafeConvert[java.math.BigDecimal](rs.getBigDecimal(pos + 1), d => Decimal(d, p, s))
Review Comment:
Based on `DecimalType`, the precision is 38 and the scale is 38.
In fact, the decimal returned from JDBC is `BigDecimal(7, 3)`.
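To make the mismatch concrete, here is a minimal sketch (not the Spark code itself, just an illustration of the JDBC behavior the removed comment described): a value inserted into an Oracle `NUMBER(12, 2)` column can come back from `ResultSet.getBigDecimal` with scale 0, and rescaling it to the schema's declared scale restores the expected representation.

```scala
import java.math.{BigDecimal => JBigDecimal}
import java.math.RoundingMode

// Hypothetical value as Oracle JDBC may return it for a NUMBER(12, 2)
// column holding 19999: the BigDecimal carries scale 0, not scale 2.
val fromJdbc = new JBigDecimal("19999")
println(fromJdbc.scale) // 0, not the 2 declared in the table schema

// Applying the scale from the DataFrame schema (DecimalType(12, 2)):
// UNNECESSARY is safe here because increasing the scale loses no digits.
val rescaled = fromJdbc.setScale(2, RoundingMode.UNNECESSARY)
println(rescaled) // 19999.00
```

This is why the removed comment argued for setting precision and scale from the JDBC metadata rather than trusting the scale of the returned `BigDecimal`.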
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]