Github user wangyum commented on the issue:

    https://github.com/apache/spark/pull/18266
  
    Yes, mapping to Double seems fine. This test passed:
    ```
      test("SPARK-20427/SPARK-20921: read table use custom schema by jdbc api") 
{
        // default will throw IllegalArgumentException
        val e = intercept[org.apache.spark.SparkException] {
          spark.read.jdbc(jdbcUrl, "tableWithCustomSchema", new 
Properties()).collect()
        }
        assert(e.getMessage.contains(
          "requirement failed: Decimal precision 39 exceeds max precision 38"))
    
        // with a custom schema the data can be read
        val props = new Properties()
        props.put("customDataFrameColumnTypes", "ID double, N1 int, N2 boolean")
        val dfRead = spark.read.jdbc(jdbcUrl, "tableWithCustomSchema", props)
    
        val rows = dfRead.collect()
        // verify the data types
        val types = rows(0).toSeq.map(_.getClass.toString)
        assert(types(0).equals("class java.lang.Double"))
        assert(types(1).equals("class java.lang.Integer"))
        assert(types(2).equals("class java.lang.Boolean"))
    
        // verify the values
        val values = rows(0)
        assert(values.getDouble(0) == 12312321321321312312312312123D)
        assert(values.getInt(1) == 1)
        assert(values.getBoolean(2) == false)
      }
    ```
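
    For reference, outside of the test a standalone read with the same override would look roughly like this (a minimal sketch: the `jdbcUrl` value is a hypothetical placeholder, and the `customDataFrameColumnTypes` option name just follows the test above, so it may still change in this PR):
    ```
    import java.util.Properties

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("jdbc-custom-schema")
      .getOrCreate()

    // hypothetical connection string; replace with a real one
    val jdbcUrl = "jdbc:oracle:thin:@//localhost:1521/xe"

    // override the inferred column types so the oversized NUMBER column is read as a
    // Double instead of failing the "Decimal precision 39 exceeds max precision 38" check
    val props = new Properties()
    props.put("customDataFrameColumnTypes", "ID double, N1 int, N2 boolean")

    val df = spark.read.jdbc(jdbcUrl, "tableWithCustomSchema", props)
    df.printSchema()
    df.show()
    ```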

