[ https://issues.apache.org/jira/browse/SPARK-30608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021669#comment-17021669 ]

Hyukjin Kwon commented on SPARK-30608:
--------------------------------------

I don't see that interval type conversions are supported between Spark and 
PostgreSQL. It seems you save the value as a string and want it to be converted 
automatically to the interval type on the PostgreSQL side.
IntervalType is currently private on the Spark side, so you should save it as a 
string in PostgreSQL too and cast it to interval when you need to use it.
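
A minimal sketch of that workaround (the JDBC URL, credentials, and the target
table name test_table_as_text are assumptions, not from the reporter's setup;
only test_table and the duration column come from the report):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("interval-as-string").getOrCreate()

    // Assumed connection details -- replace with your own.
    val url = "jdbc:postgresql://localhost:5432/mydb"

    // Reading maps the PostgreSQL interval column "duration" to Spark StringType.
    val df = spark.read
      .format("jdbc")
      .option("url", url)
      .option("dbtable", "test_table")
      .option("user", "postgres")
      .option("password", "secret")
      .load()

    // Writing back: keep the column as a string type on the PostgreSQL side too
    // (here via createTableColumnTypes on a new, hypothetical table), and cast
    // it to interval in SQL when you need it, e.g. duration::interval.
    df.write
      .format("jdbc")
      .option("url", url)
      .option("dbtable", "test_table_as_text")
      .option("user", "postgres")
      .option("password", "secret")
      .option("createTableColumnTypes", "duration VARCHAR(64)")
      .mode("overwrite")
      .save()

Alternatively, the PostgreSQL JDBC driver accepts a stringtype=unspecified
connection parameter (e.g. jdbc:postgresql://localhost:5432/mydb?stringtype=unspecified),
which sends string parameters as untyped values so the server can coerce them
to interval; whether that fits depends on the rest of your workload.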

> Postgres interval column converts to string and can't be written back to 
> Postgres
> --------------------------------------------------------------------------------
>
>                 Key: SPARK-30608
>                 URL: https://issues.apache.org/jira/browse/SPARK-30608
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.4
>            Reporter: Sumit
>            Priority: Major
>
> If we read an "interval" type column from Postgres and try to save it back to 
> Postgres, an exception occurs: during the read operation the Postgres column is 
> converted to a String, and writing that string back fails with the error below 
> (a minimal reproduction sketch follows the stack trace)
>  
> java.sql.BatchUpdateException: Batch entry 0 INSERT INTO test_table 
> ("dob","dob_time","dob_time_zone","duration") VALUES ('2019-05-29 
> -04','2016-08-12 10:22:31.100000-04','2016-08-12 13:22:31.100000-04','3 days 
> 10:00:00') was aborted: ERROR: column "duration" is of type interval but 
> expression is of type character varying
>  Hint: You will need to rewrite or cast the expression.
>  Position: 86 Call getNextException to see other errors in the batch.
>  at 
> org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:151)
>  at 
> org.postgresql.core.ResultHandlerDelegate.handleError(ResultHandlerDelegate.java:45)
>  at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2159)
>  at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:463)
>  at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:794)
>  at 
> org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1662)
>  at 
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:672)
>  at 
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:834)
>  at 
> org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:834)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:935)
>  at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
>  at 
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>  at org.apache.spark.scheduler.Task.run(Task.scala:123)
>  at 
> org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
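
For reference, a minimal sketch that reproduces the round trip described above
(the connection details are assumed; test_table and the duration column come
from the report):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("interval-repro").getOrCreate()
    val url = "jdbc:postgresql://localhost:5432/mydb"  // assumed URL

    // Read: Spark has no public interval type mapping, so the "duration"
    // column arrives as StringType.
    val df = spark.read
      .format("jdbc")
      .option("url", url)
      .option("dbtable", "test_table")
      .load()

    df.printSchema()  // duration: string

    // Append back to the same table: the string is bound as character varying,
    // and PostgreSQL rejects it with the "column \"duration\" is of type
    // interval" error shown in the stack trace above.
    df.write
      .format("jdbc")
      .option("url", url)
      .option("dbtable", "test_table")
      .mode("append")
      .save()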


