GitHub user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14313#discussion_r72005232
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala ---
@@ -407,84 +496,8 @@ private[sql] class JDBCRDD(
       var i = 0
       while (i < conversions.length) {
         val pos = i + 1
-        conversions(i) match {
-          case BooleanConversion => mutableRow.setBoolean(i, rs.getBoolean(pos))
-          case DateConversion =>
-            // DateTimeUtils.fromJavaDate does not handle null value, so we need to check it.
-            val dateVal = rs.getDate(pos)
-            if (dateVal != null) {
-              mutableRow.setInt(i, DateTimeUtils.fromJavaDate(dateVal))
-            } else {
-              mutableRow.update(i, null)
-            }
-          // When connecting with Oracle DB through JDBC, the precision and scale of BigDecimal
-          // object returned by ResultSet.getBigDecimal is not correctly matched to the table
-          // schema reported by ResultSetMetaData.getPrecision and ResultSetMetaData.getScale.
-          // If inserting values like 19999 into a column with NUMBER(12, 2) type, you get through
-          // a BigDecimal object with scale as 0. But the dataframe schema has correct type as
-          // DecimalType(12, 2). Thus, after saving the dataframe into parquet file and then
-          // retrieve it, you will get wrong result 199.99.
-          // So it is needed to set precision and scale for Decimal based on JDBC metadata.
--- End diff --
Oh, my mistake. I will add this.
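
For context on the removed DateConversion branch above: ResultSet.getDate can return null for a SQL NULL, so the null check must happen before converting the value to days since the epoch (which is what DateTimeUtils.fromJavaDate produces). Below is a minimal standalone sketch of that idea using only java.sql; the readDateAsEpochDays helper is hypothetical and not part of this PR.

    import java.sql.ResultSet

    // Hypothetical helper mirroring the removed DateConversion branch:
    // check for null first, then convert to days since 1970-01-01,
    // which is the same representation fromJavaDate returns.
    def readDateAsEpochDays(rs: ResultSet, pos: Int): Option[Int] = {
      val dateVal = rs.getDate(pos)
      if (dateVal != null) Some(dateVal.toLocalDate.toEpochDay.toInt)
      else None // corresponds to mutableRow.update(i, null) in the removed code
    }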
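Likewise, the removed Oracle comment explains why the Decimal branch needs the precision and scale reported by ResultSetMetaData rather than whatever scale getBigDecimal happens to return. A hedged sketch of that rescaling follows; the readDecimal helper and its explicit scale parameter are assumptions for illustration, not the code being added in this PR.

    import java.math.BigDecimal
    import java.sql.ResultSet

    // Hypothetical helper illustrating the Oracle issue from the removed comment:
    // inserting 19999 into a NUMBER(12, 2) column can come back from getBigDecimal
    // with scale 0, while the DataFrame schema is DecimalType(12, 2). Rescaling to
    // the metadata scale keeps the unscaled value consistent, so the value
    // round-trips through Parquet as 19999.00 instead of 199.99.
    def readDecimal(rs: ResultSet, pos: Int, scale: Int): Option[BigDecimal] = {
      val decimalVal = rs.getBigDecimal(pos)
      if (decimalVal != null) Some(decimalVal.setScale(scale)) else None
    }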