[ https://issues.apache.org/jira/browse/SPARK-14536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514129#comment-15514129 ]

Suresh Thalamati commented on SPARK-14536:
------------------------------------------

[~sowen] I am not sure why this issue got closed as a duplicate after I 
reopened it. Based on the test case I tried on master, it does not look 
like a duplicate to me, as I mentioned in my previous comment when I 
reopened the issue. The array data type is supported for PostgreSQL.

Repro:
On the PostgreSQL database:
create table spark_array(a int, b text[]);
insert into spark_array values(1, null);
insert into spark_array values(1, '{"AA", "BB"}');

In spark-shell:
val psqlProps = new java.util.Properties()
psqlProps.setProperty("user", "user")
psqlProps.setProperty("password", "password")

// works fine
spark.read.jdbc("jdbc:postgresql://localhost:5432/pdb", "(select * from spark_array where b is not null) as a", psqlProps).show()

// fails with the following error:
spark.read.jdbc("jdbc:postgresql://localhost:5432/pdb", "spark_array", psqlProps).show()

Stack:
16/09/21 11:49:41 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.NullPointerException
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$13.apply(JdbcUtils.scala:442)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$org$apache$spark$sql$execution$datasources$jdbc$JdbcUtils$$makeGetter$13.apply(JdbcUtils.scala:440)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:301)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:283)
        at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
        at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
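
The stack trace points at the array branch of makeGetter. A minimal sketch of the kind of null check that could avoid this (the case shape, the MutableRow signature, and the elementConversion helper are assumptions based on the trace, not the actual patch):

// Hypothetical fix sketch for the ArrayType branch of JdbcUtils.makeGetter;
// elementConversion stands in for the existing per-element conversion.
case ArrayType(et, _) =>
  (rs: ResultSet, row: MutableRow, pos: Int) => {
    val array = rs.getArray(pos + 1)
    if (array == null) {
      // pgjdbc returns null for a SQL NULL array; store a null cell
      // instead of dereferencing it.
      row.update(pos, null)
    } else {
      row.update(pos, new GenericArrayData(elementConversion(array.getArray)))
    }
  }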


> NPE in JDBCRDD when array column contains nulls (postgresql)
> ------------------------------------------------------------
>
>                 Key: SPARK-14536
>                 URL: https://issues.apache.org/jira/browse/SPARK-14536
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.1
>            Reporter: Jeremy Smith
>              Labels: NullPointerException
>
> At 
> https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala#L453
>  it is assumed that the JDBC driver will definitely return a non-null `Array` 
> object from the call to `getArray`, and that in the event of a null array it 
> will return a non-null `Array` object with a null underlying array. But as 
> you can see here 
> https://github.com/pgjdbc/pgjdbc/blob/master/pgjdbc/src/main/java/org/postgresql/jdbc/PgResultSet.java#L387
>  that isn't the case, at least for PostgreSQL.  This causes a 
> `NullPointerException` whenever an array column contains null values. It 
> seems like the PostgreSQL JDBC driver is probably doing the wrong thing, but 
> even so there should be a null check in JDBCRDD.  I'm happy to submit a PR if 
> that would be helpful.
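
The driver behavior described above is easy to confirm standalone (a minimal check against the repro table; the connection URL and credentials are placeholders): pgjdbc returns null from getArray for a SQL NULL, not a non-null Array wrapping a null.

// Hypothetical standalone check of the driver behavior;
// URL and credentials are placeholders.
import java.sql.DriverManager

val conn = DriverManager.getConnection(
  "jdbc:postgresql://localhost:5432/pdb", "user", "password")
val rs = conn.createStatement().executeQuery(
  "select b from spark_array where b is null")
rs.next()
println(rs.getArray(1) == null)  // prints true: the driver returns null, so an
                                 // unconditional .getArray dereference throws NPE
conn.close()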


