beliefer commented on code in PR #44398:
URL: https://github.com/apache/spark/pull/44398#discussion_r1429868263


##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala:
##########
@@ -105,6 +105,25 @@ abstract class JdbcDialect extends Serializable with Logging {
    */
   def getJDBCType(dt: DataType): Option[JdbcType] = None
 
+  /**
+   * Converts an instance of `java.math.BigDecimal` to a `Decimal` value.
+   * @param d the `java.math.BigDecimal` to convert.
+   * @param precision the precision for the Decimal, based on the JDBC metadata.
+   * @param scale the scale for the Decimal, based on the JDBC metadata.
+   * @return the converted `Decimal` value.
+   */
+  @Since("4.0.0")
+  def convertBigDecimalToDecimal(d: BigDecimal, precision: Int, scale: Int): Decimal =
+    // When connecting to Oracle through JDBC, the precision and scale of the BigDecimal
+    // returned by ResultSet.getBigDecimal do not always match the table schema reported
+    // by ResultSetMetaData.getPrecision and ResultSetMetaData.getScale. For example,
+    // after inserting the value 19999 into a column of type NUMBER(12, 2), getBigDecimal
+    // returns a BigDecimal with scale 0, while the DataFrame schema has the correct type
+    // DecimalType(12, 2). As a result, saving the DataFrame to a parquet file and then
+    // reading it back yields the wrong result 199.99. So the precision and scale of the
+    // Decimal need to be set based on the JDBC metadata.
+    Decimal(d, precision, scale)

Review Comment:
   I don't know the background, so I left it as the default implementation.
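
   A minimal standalone sketch of the mismatch the diff comment describes (the `DecimalRescale` object is hypothetical and has no Spark dependency; `Decimal(d, precision, scale)` is approximated here with plain `java.math.BigDecimal.setScale`): Oracle can hand back 19999 at scale 0 for a NUMBER(12, 2) column, and reinterpreting that unscaled value at scale 2 produces 199.99, whereas rescaling via the JDBC metadata preserves the intended value.

   ```scala
   import java.math.{BigDecimal => JBigDecimal, RoundingMode}

   // Hypothetical helper (not part of Spark): rescale a JDBC-returned BigDecimal
   // using the scale reported by ResultSetMetaData.
   object DecimalRescale {
     def rescale(d: JBigDecimal, precision: Int, scale: Int): JBigDecimal =
       // setScale keeps the numeric value and only adjusts the representation.
       d.setScale(scale, RoundingMode.HALF_UP)

     def main(args: Array[String]): Unit = {
       // Oracle may return 19999 with scale 0 for a NUMBER(12, 2) column.
       val fromJdbc = new JBigDecimal(19999)
       println(rescale(fromJdbc, 12, 2))       // prints 19999.00
       // Reinterpreting the unscaled value 19999 at scale 2 (the parquet round-trip bug):
       println(JBigDecimal.valueOf(19999L, 2)) // prints 199.99
     }
   }
   ```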



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

