This is an automated email from the ASF dual-hosted git repository.

sarutak pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new dd6383f  [SPARK-34333][SQL] Fix PostgresDialect to handle money types properly
dd6383f is described below

commit dd6383f0a3917346c0e29252d3e160ed9a77144c
Author: Kousuke Saruta <[email protected]>
AuthorDate: Wed Feb 17 10:50:06 2021 +0900

    [SPARK-34333][SQL] Fix PostgresDialect to handle money types properly
    
    ### What changes were proposed in this pull request?
    
    This PR changes the type mapping for the `money` and `money[]` types for PostgreSQL.
    Currently, those types are mapped to `DoubleType` and `ArrayType` of `DoubleType` respectively, but the JDBC driver does not seem to be able to handle those types properly.
    
    https://github.com/pgjdbc/pgjdbc/issues/100
    https://github.com/pgjdbc/pgjdbc/issues/1405
    
    Due to these issues, we can get errors like the following.
    
    money type:
    ```
    [info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (192.168.1.204 executor driver): org.postgresql.util.PSQLException: Bad value for type double : 1,000.00
    [info]      at org.postgresql.jdbc.PgResultSet.toDouble(PgResultSet.java:3104)
    [info]      at org.postgresql.jdbc.PgResultSet.getDouble(PgResultSet.java:2432)
    [info]      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$5(JdbcUtils.scala:418)
    ```
    
    money[] type:
    ```
    [info]   org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (192.168.1.204 executor driver): org.postgresql.util.PSQLException: Bad value for type double : $2,000.00
    [info]      at org.postgresql.jdbc.PgResultSet.toDouble(PgResultSet.java:3104)
    [info]      at org.postgresql.jdbc.ArrayDecoding$5.parseValue(ArrayDecoding.java:235)
    [info]      at org.postgresql.jdbc.ArrayDecoding$AbstractObjectStringArrayDecoder.populateFromString(ArrayDecoding.java:122)
    [info]      at org.postgresql.jdbc.ArrayDecoding.readStringArray(ArrayDecoding.java:764)
    [info]      at org.postgresql.jdbc.PgArray.buildArray(PgArray.java:310)
    [info]      at org.postgresql.jdbc.PgArray.getArrayImpl(PgArray.java:171)
    [info]      at org.postgresql.jdbc.PgArray.getArray(PgArray.java:111)
    ```
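    The stack traces show the driver calling `toDouble` on the locale-formatted string the server sends back. The failure can be reproduced outside the driver with an illustrative sketch (plain Scala, not the driver's actual code):

    ```scala
    // "1,000.00" is what PostgreSQL sends for a money value; the comma makes it
    // unparseable as a double, which is what triggers "Bad value for type double".
    val raw = "1,000.00"
    val parsed =
      try Some(raw.toDouble)
      catch { case _: NumberFormatException => None }
    // parsed is None
    ```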
    
    For the money type, a known workaround is to treat it as a string, so this PR does that.
    For money[], however, there is no reasonable workaround, so this PR removes the support.
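    Since money values now come back as strings, callers that need numbers can parse them on their side. A minimal sketch (hypothetical `parseMoney` helper, not part of this PR):

    ```scala
    // Strip the currency symbol and thousands separators from a PostgreSQL money
    // string such as "$1,000.00", then parse the remainder as a decimal.
    def parseMoney(s: String): BigDecimal =
      BigDecimal(s.replaceAll("[^0-9.-]", ""))

    parseMoney("$1,000.00")  // BigDecimal 1000.00
    ```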
    
    ### Why are the changes needed?
    
    This is a bug.
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes. Once this PR is merged, the money type is mapped to `StringType` rather than `DoubleType`, and the support for money[] is dropped.
    For the money type, values of less than one thousand, `$100.00` for instance, work without this change, so I also updated the migration guide because this is a behavior change for such small values.
    On the other hand, money[] does not seem to work with any value, but it is mentioned in the migration guide just in case.
    
    ### How was this patch tested?
    
    New test.
    
    Closes #31442 from sarutak/fix-for-money-type.
    
    Authored-by: Kousuke Saruta <[email protected]>
    Signed-off-by: Kousuke Saruta <[email protected]>
---
 docs/sql-migration-guide.md                                 |  2 ++
 .../apache/spark/sql/jdbc/PostgresIntegrationSuite.scala    | 13 +++++++++++++
 .../scala/org/apache/spark/sql/jdbc/PostgresDialect.scala   | 13 +++++++++++--
 3 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/docs/sql-migration-guide.md b/docs/sql-migration-guide.md
index 41154e5..a6101c6 100644
--- a/docs/sql-migration-guide.md
+++ b/docs/sql-migration-guide.md
@@ -24,6 +24,8 @@ license: |
 
 ## Upgrading from Spark SQL 3.1 to 3.2
 
+  - In Spark 3.2, the PostgreSQL JDBC dialect uses StringType for MONEY, and MONEY[] is not supported because the JDBC driver for PostgreSQL can't handle those types properly. In Spark 3.1 or earlier, DoubleType and ArrayType of DoubleType are used respectively.
+
   - In Spark 3.2, `spark.sql.adaptive.enabled` is enabled by default. To restore the behavior before Spark 3.2, you can set `spark.sql.adaptive.enabled` to `false`.
 
   - In Spark 3.2, the following meta-characters are escaped in the `show()` action. In Spark 3.1 or earlier, the following metacharacters are output as it is.
diff --git a/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala b/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
index eb1ee51..b9336d3 100644
--- a/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
+++ b/external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
@@ -133,6 +133,11 @@ class PostgresIntegrationSuite extends DockerJDBCIntegrationSuite {
     conn.prepareStatement("INSERT INTO char_array_types VALUES " +
       """('{"a", "bcd"}', '{"ef", "gh"}', '{"i", "j", "kl"}', '{"mnop"}', '{"q", "r"}')"""
     ).executeUpdate()
+
+    conn.prepareStatement("CREATE TABLE money_types (" +
+      "c0 money)").executeUpdate()
+    conn.prepareStatement("INSERT INTO money_types VALUES " +
+      "('$1,000.00')").executeUpdate()
   }
 
   test("Type mapping for various types") {
@@ -366,4 +371,12 @@ class PostgresIntegrationSuite extends DockerJDBCIntegrationSuite {
     assert(row(0).getSeq[String](3) === Seq("mnop"))
     assert(row(0).getSeq[String](4) === Seq("q", "r"))
   }
+
+  test("SPARK-34333: money type tests") {
+    val df = sqlContext.read.jdbc(jdbcUrl, "money_types", new Properties)
+    val row = df.collect()
+    assert(row.length === 1)
+    assert(row(0).length === 1)
+    assert(row(0).getString(0) === "$1,000.00")
+  }
 }
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala b/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
index f47a13d..3ce785e 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala
@@ -35,8 +35,12 @@ private object PostgresDialect extends JdbcDialect {
       Some(FloatType)
     } else if (sqlType == Types.SMALLINT) {
       Some(ShortType)
-    } else if (sqlType == Types.BIT && typeName.equals("bit") && size != 1) {
+    } else if (sqlType == Types.BIT && typeName == "bit" && size != 1) {
       Some(BinaryType)
+    } else if (sqlType == Types.DOUBLE && typeName == "money") {
+      // money type seems to be broken but one workaround is to handle it as a string.
+      // See SPARK-34333 and https://github.com/pgjdbc/pgjdbc/issues/100
+      Some(StringType)
     } else if (sqlType == Types.OTHER) {
       Some(StringType)
     } else if (sqlType == Types.ARRAY) {
@@ -56,7 +60,7 @@ private object PostgresDialect extends JdbcDialect {
     case "int4" => Some(IntegerType)
     case "int8" | "oid" => Some(LongType)
     case "float4" => Some(FloatType)
-    case "money" | "float8" => Some(DoubleType)
+    case "float8" => Some(DoubleType)
     case "text" | "varchar" | "char" | "bpchar" | "cidr" | "inet" | "json" | "jsonb" | "uuid" |
          "xml" | "tsvector" | "tsquery" | "macaddr" | "macaddr8" | "txid_snapshot" | "point" |
          "line" | "lseg" | "box" | "path" | "polygon" | "circle" | "pg_lsn" | "varbit" |
@@ -69,6 +73,11 @@ private object PostgresDialect extends JdbcDialect {
     case "numeric" | "decimal" =>
       // SPARK-26538: handle numeric without explicit precision and scale.
       Some(DecimalType.SYSTEM_DEFAULT)
+    case "money" =>
+      // money[] type seems to be broken and difficult to handle.
+      // So this method returns None for now.
+      // See SPARK-34333 and https://github.com/pgjdbc/pgjdbc/issues/1405
+      None
     case _ => None
   }
 


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
