GitHub user taroplus opened a pull request:

    https://github.com/apache/spark/pull/19548

    [SPARK-22303][SQL] Handle Oracle specific jdbc types in OracleDialect

    TIMESTAMP (-101), BINARY_DOUBLE (101) and BINARY_FLOAT (100) are handled in OracleDialect
    
    ## What changes were proposed in this pull request?
    
    When an Oracle table contains columns whose type is BINARY_FLOAT or BINARY_DOUBLE, Spark SQL fails to load the table with an SQLException:
    
    ```
    java.sql.SQLException: Unsupported type 101
     at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$getCatalystType(JdbcUtils.scala:235)
     at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:292)
     at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$8.apply(JdbcUtils.scala:292)
     at scala.Option.getOrElse(Option.scala:121)
     at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.getSchema(JdbcUtils.scala:291)
     at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:64)
     at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:113)
     at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:47)
     at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:306)
     at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
     at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:146)
    ```
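    
    The error comes from JdbcUtils falling back to the default mapping for JDBC type codes it does not recognize. As a rough illustration of the kind of change this PR makes (a sketch, not the exact patch), a Spark JdbcDialect can claim these Oracle-specific codes in getCatalystType; the numeric constants mirror oracle.jdbc.OracleTypes, and the object name here is illustrative:
    
    ```scala
    import org.apache.spark.sql.jdbc.JdbcDialect
    import org.apache.spark.sql.types._
    
    // Sketch: map Oracle's vendor-specific JDBC type codes to Catalyst types.
    object OracleDialectSketch extends JdbcDialect {
      private val TIMESTAMP_TZ  = -101  // Oracle TIMESTAMP variant reported as -101
      private val BINARY_FLOAT  = 100
      private val BINARY_DOUBLE = 101
    
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:oracle")
    
      override def getCatalystType(
          sqlType: Int, typeName: String, size: Int, md: MetadataBuilder): Option[DataType] =
        sqlType match {
          case TIMESTAMP_TZ  => Some(TimestampType)
          case BINARY_FLOAT  => Some(FloatType)
          case BINARY_DOUBLE => Some(DoubleType)
          case _             => None  // defer to the standard java.sql.Types mapping
        }
    }
    ```
    
    A custom dialect like this would be registered with JdbcDialects.registerDialect; the built-in OracleDialect modified by the patch is picked up automatically for jdbc:oracle URLs.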
    
    ## How was this patch tested?
    
    I updated a UT to cover type conversion for the types (-101, 100, 101). On top of that, I tested this change against an actual table containing such columns and was able to read from and write to the table.
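    
    For reference, a minimal read/write round trip of the kind described above might look as follows; the connection URL, credentials, and table names are hypothetical placeholders:
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    val spark = SparkSession.builder().appName("OracleBinaryTypes").getOrCreate()
    
    // Hypothetical Oracle connection settings; the source table is assumed to
    // contain BINARY_FLOAT and BINARY_DOUBLE columns.
    val oracleOptions = Map(
      "url"      -> "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1",
      "user"     -> "scott",
      "password" -> "tiger")
    
    val df = spark.read
      .format("jdbc")
      .options(oracleOptions)
      .option("dbtable", "MEASUREMENTS")
      .load()
    
    // With the dialect change, BINARY_FLOAT maps to float and BINARY_DOUBLE to double.
    df.printSchema()
    
    // Write the data back to a new table to exercise the reverse conversion.
    df.write
      .format("jdbc")
      .options(oracleOptions)
      .option("dbtable", "MEASUREMENTS_COPY")
      .save()
    ```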


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/taroplus/spark oracle_sql_types_101

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/19548.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #19548
    
----
commit 51c616c6e501ad539f4e18cf604250a66edf1a2e
Author: Kohki Nishio <[email protected]>
Date:   2017-10-22T02:55:28Z

    Handle Oracle specific jdbc types in OracleDialect
    
    TIMESTAMP (-101), BINARY_DOUBLE (101) and BINARY_FLOAT (100) are handled in OracleDialect

----


---
