This is an automated email from the ASF dual-hosted git repository.

yao pushed a commit to branch branch-4.0
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-4.0 by this push:
     new 47be8822902e [SPARK-47791][SQL][FOLLOWUP] Avoid invalid JDBC decimal scale

47be8822902e is described below

commit 47be8822902ed2e9a218367bceea6809a9af50e4
Author: Wenchen Fan <wenc...@databricks.com>
AuthorDate: Wed Apr 23 17:24:05 2025 +0800

    [SPARK-47791][SQL][FOLLOWUP] Avoid invalid JDBC decimal scale

    ### What changes were proposed in this pull request?

    This is a follow-up of https://github.com/apache/spark/pull/45976 to fix a
    regression. Some JDBC dialects may report a decimal scale larger than the
    precision. Before https://github.com/apache/spark/pull/45976 , we used
    `DecimalType.bounded`, which limits the scale to 38 and avoids this issue in
    some cases. But `DecimalPrecisionTypeCoercion.bounded` does not limit the
    decimal scale and is more likely to hit the issue. This PR makes the JDBC
    decimal scale no larger than the precision.

    Note: we could also fix `DecimalPrecisionTypeCoercion.bounded`, but I'd like
    to reduce the blast radius here.

    ### Why are the changes needed?

    Fix a regression: some JDBC queries started to fail after
    https://github.com/apache/spark/pull/45976 .

    ### Does this PR introduce _any_ user-facing change?

    No, the issue has not been released yet.

    ### How was this patch tested?

    The change is straightforward, but testing it would require installing other
    JDBC dialects, so tests are skipped.

    ### Was this patch authored or co-authored using generative AI tooling?

    No.

    Closes #50673 from cloud-fan/decimal.
    Lead-authored-by: Wenchen Fan <wenc...@databricks.com>
    Co-authored-by: Wenchen Fan <cloud0...@gmail.com>
    Signed-off-by: Kent Yao <y...@apache.org>
    (cherry picked from commit 24511dd30e2c7690f8c827a47b30dcb3d1a60cde)
    Signed-off-by: Kent Yao <y...@apache.org>
---
 .../org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
index 651c29d09766..bbc5dd93b480 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
@@ -203,7 +203,11 @@ object JdbcUtils extends Logging with SQLConfHelper {
     case java.sql.Types.DECIMAL | java.sql.Types.NUMERIC if scale < 0 =>
       DecimalType.bounded(precision - scale, 0)
     case java.sql.Types.DECIMAL | java.sql.Types.NUMERIC =>
-      DecimalPrecisionTypeCoercion.bounded(precision, scale)
+      DecimalPrecisionTypeCoercion.bounded(
+        // A safeguard in case the JDBC scale is larger than the precision, which is
+        // not supported by Spark.
+        math.max(precision, scale),
+        scale)
     case java.sql.Types.DOUBLE => DoubleType
     case java.sql.Types.FLOAT => FloatType
     case java.sql.Types.INTEGER => if (signed) IntegerType else LongType

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
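The clamping idea behind the fix can be sketched as a small standalone snippet. This is only an illustration under stated assumptions: `DecimalBoundSketch`, `boundedJdbcDecimal`, and `MaxPrecision` are hypothetical names, not Spark API, and the function merely mirrors the two decimal branches of the mapping above (negative-scale widening, and the new `math.max(precision, scale)` safeguard) rather than reimplementing `DecimalPrecisionTypeCoercion.bounded`.

```scala
// Standalone sketch, not Spark code: shows how a (precision, scale) pair reported
// by a JDBC driver can be clamped to something a Spark-style decimal (at most 38
// digits, scale <= precision) can represent.
object DecimalBoundSketch {
  // Spark decimals support at most 38 digits of precision.
  val MaxPrecision: Int = 38

  def boundedJdbcDecimal(precision: Int, scale: Int): (Int, Int) = {
    if (scale < 0) {
      // Negative scale (e.g. Oracle NUMBER with a negative scale): widen the
      // precision and drop the scale, mirroring `DecimalType.bounded(precision - scale, 0)`.
      (math.min(precision - scale, MaxPrecision), 0)
    } else {
      // The follow-up's safeguard: never let the scale exceed the precision,
      // mirroring `math.max(precision, scale)` in the patch above.
      val safePrecision = math.max(precision, scale)
      (math.min(safePrecision, MaxPrecision), math.min(scale, MaxPrecision))
    }
  }
}
```

For example, a dialect reporting precision 10 with scale 12 yields a representable (12, 12) instead of the invalid (10, 12).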