beliefer commented on code in PR #44397:
URL: https://github.com/apache/spark/pull/44397#discussion_r1429626009


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala:
##########
@@ -402,18 +402,11 @@ object JdbcUtils extends Logging with SQLConfHelper {
           row.update(pos, null)
         }
 
-    // When connecting with Oracle DB through JDBC, the precision and scale of BigDecimal
-    // object returned by ResultSet.getBigDecimal is not correctly matched to the table
-    // schema reported by ResultSetMetaData.getPrecision and ResultSetMetaData.getScale.
-    // If inserting values like 19999 into a column with NUMBER(12, 2) type, you get through
-    // a BigDecimal object with scale as 0. But the dataframe schema has correct type as
-    // DecimalType(12, 2). Thus, after saving the dataframe into parquet file and then
-    // retrieve it, you will get wrong result 199.99.
-    // So it is needed to set precision and scale for Decimal based on JDBC metadata.
     case DecimalType.Fixed(p, s) =>
       (rs: ResultSet, row: InternalRow, pos: Int) =>
         val decimal =
-          nullSafeConvert[java.math.BigDecimal](rs.getBigDecimal(pos + 1), d => Decimal(d, p, s))

Review Comment:
   The original code throws an exception:
   ```
   Caused by: org.apache.spark.SparkArithmeticException: [DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 42 exceeds max precision 38. SQLSTATE: 22003
        at org.apache.spark.sql.errors.DataTypeErrors$.decimalPrecisionExceedsMaxPrecisionError(DataTypeErrors.scala:48)
        at org.apache.spark.sql.types.Decimal.set(Decimal.scala:124)
        at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:577)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$4(JdbcUtils.scala:408)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.nullSafeConvert(JdbcUtils.scala:552)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$3(JdbcUtils.scala:408)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$3$adapted(JdbcUtils.scala:406)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:358)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:339)
   ```
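
   For context, the scale mismatch described in the removed comment can be reproduced with plain `java.math.BigDecimal`, independent of any JDBC driver. This is a minimal standalone sketch (not Spark's actual code path); it uses the value 19999 and the NUMBER(12, 2) column from the removed comment to show why the schema-declared scale must be reapplied to the driver-returned value:

   ```java
   import java.math.BigDecimal;

   public class DecimalScaleDemo {
       public static void main(String[] args) {
           // A driver may return 19999 (inserted into a NUMBER(12, 2) column)
           // as a BigDecimal with scale 0 instead of the declared scale 2.
           BigDecimal fromJdbc = new BigDecimal("19999");
           System.out.println(fromJdbc.precision()); // 5
           System.out.println(fromJdbc.scale());     // 0

           // Reapplying the schema-declared scale recovers the intended value;
           // treating the digits as already scaled would misread it as 199.99.
           BigDecimal rescaled = fromJdbc.setScale(2);
           System.out.println(rescaled);             // 19999.00
       }
   }
   ```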



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

