cloud-fan commented on code in PR #44398:
URL: https://github.com/apache/spark/pull/44398#discussion_r1434034597


##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/H2Dialect.scala:
##########
@@ -57,6 +57,20 @@ private[sql] object H2Dialect extends JdbcDialect {
   override def isSupportedFunction(funcName: String): Boolean =
     supportedFunctions.contains(funcName)
 
+  override def getCatalystType(
+      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] = {
+    sqlType match {
+      case Types.NUMERIC if size > 38 =>
+        // Handle NUMBER fields that have incorrect precision/scale in special way
+        // because the precision and scale of H2 must be from 1 to 100000. Adjust the precision
+        // and scale of Decimal type according to the ratio of precision and scale.

Review Comment:
   let's make the comment a bit clearer:
   ```
   H2 supports very large decimal precision like 100000. The max precision in Spark is only 38.
   Here we shrink both the precision and scale of H2 decimal to fit Spark, and still keep the ratio between them.
   ```
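
   The shrinking the suggested comment describes can be sketched in plain Scala. This is a hypothetical helper (not the PR's actual implementation, which returns a Spark `DecimalType`): it caps the precision at Spark's maximum of 38 and rescales the scale proportionally, so the precision/scale ratio of the H2 decimal is preserved.

   ```scala
   object DecimalShrink {
     // Spark's maximum decimal precision (H2 allows up to 100000).
     val MaxPrecision = 38

     // Shrink an H2 (precision, scale) pair to fit Spark's limit while
     // keeping the scale/precision ratio roughly constant.
     def shrink(precision: Int, scale: Int): (Int, Int) = {
       if (precision <= MaxPrecision) {
         (precision, scale)
       } else {
         val newScale = math.round(scale.toDouble / precision * MaxPrecision).toInt
         (MaxPrecision, newScale)
       }
     }
   }
   ```

   For example, an H2 column declared as `NUMERIC(100000, 50000)` would map to precision 38 with scale 19, keeping the 1:2 ratio between scale and precision.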



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

