beliefer commented on code in PR #44398:
URL: https://github.com/apache/spark/pull/44398#discussion_r1432515207
##########
sql/core/src/main/scala/org/apache/spark/sql/jdbc/H2Dialect.scala:
##########
@@ -57,6 +57,22 @@ private[sql] object H2Dialect extends JdbcDialect {
  override def isSupportedFunction(funcName: String): Boolean =
    supportedFunctions.contains(funcName)
+  override def getCatalystType(
+      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] = {
+    sqlType match {
+      case Types.NUMERIC =>
+        val scale = if (null != md) md.build().getLong("scale") else 0L
+        size match {
+          // SPARK-46443: Decimal precision and scale should be decided by the H2 dialect.
+          // Handle NUMBER fields that have an incorrect precision/scale in a special way,
+          // because JDBC ResultSetMetaData returns 100000 precision and 50000 scale.
+          case 100000 if scale == 50000 =>
+            Option(DecimalType(DecimalType.MAX_PRECISION, 19))
Review Comment:
I doubt that this is the only situation of this kind in H2. Other cases with precision greater than 38 have not actually been verified. Can we wait until we encounter other exceptions in the future before expanding this?
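For reference, a minimal self-contained sketch of what the full override might look like, including the fallthrough cases. The object name `H2DecimalMappingSketch` and the explicit `None` fallbacks are assumptions for illustration, not the exact PR code:

```scala
import java.sql.Types

import org.apache.spark.sql.types.{DataType, DecimalType, MetadataBuilder}

// Hypothetical standalone sketch; in the PR this logic lives inside
// H2Dialect, which extends JdbcDialect.
object H2DecimalMappingSketch {
  def getCatalystType(
      sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] = {
    sqlType match {
      case Types.NUMERIC =>
        // The scale is carried in the metadata when the JDBC driver reports it.
        val scale = if (null != md) md.build().getLong("scale") else 0L
        size match {
          // H2 can report precision 100000 and scale 50000 for NUMERIC columns;
          // clamp them to Spark's maximum decimal precision (38) with scale 19.
          case 100000 if scale == 50000 =>
            Option(DecimalType(DecimalType.MAX_PRECISION, 19))
          case _ => None // defer to the default JDBC type mapping
        }
      case _ => None
    }
  }
}
```

Returning `None` from `getCatalystType` lets Spark fall back to its generic JDBC-to-Catalyst mapping, so the dialect only needs to special-case the metadata it knows to be wrong.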