[
https://issues.apache.org/jira/browse/SPARK-23402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16363539#comment-16363539
]
kevin yu commented on SPARK-23402:
----------------------------------
Yes, I created an empty table (emptytable) in a database (mydb) in PostgreSQL, then
ran the above statement from spark-shell, and it works fine. The only difference I
see is that my Postgres is at 9.5.6, while yours is at 9.5.8+.
> Dataset write method not working as expected for postgresql database
> --------------------------------------------------------------------
>
> Key: SPARK-23402
> URL: https://issues.apache.org/jira/browse/SPARK-23402
> Project: Spark
> Issue Type: Bug
> Components: Spark Core, SQL
> Affects Versions: 2.2.1
> Environment: PostgreSQL: 9.5.8 (same issue on 10+)
> OS: Cent OS 7 & Windows 7,8
> JDBC: 9.4-1201-jdbc41
>
> Spark: executed on both 2.1.0 and 2.2.1
> Mode: Standalone
> OS: Windows 7
> Reporter: Pallapothu Jyothi Swaroop
> Priority: Major
> Attachments: Emsku[1].jpg
>
>
> I am using the Dataset write method to insert data into an existing PostgreSQL
> table, calling write with SaveMode.Append. When I do, I get an exception saying
> the table already exists, even though I asked for append mode.
> It's strange: when I point the same options at SQL Server or Oracle, append mode
> works as expected.
>
> *Database Properties:*
> {{destinationProps.put("driver", "org.postgresql.Driver");}}
> {{destinationProps.put("url", "jdbc:postgresql://127.0.0.1:30001/dbmig");}}
> {{destinationProps.put("user", "dbmig");}}
> {{destinationProps.put("password", "dbmig");}}
>
> *Dataset Write Code:*
> {{valueAnalysisDataset.write().mode(SaveMode.Append).jdbc(destinationDbMap.get("url"), "dqvalue", destinationdbProperties);}}
>
>
> {{Exception in thread "main" org.postgresql.util.PSQLException: ERROR: relation "dqvalue" already exists
>   at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2412)
>   at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2125)
>   at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:297)
>   at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
>   at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
>   at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:301)
>   at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:287)
>   at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:264)
>   at org.postgresql.jdbc.PgStatement.executeUpdate(PgStatement.java:244)
>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createTable(JdbcUtils.scala:806)
>   at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:95)
>   at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:469)
>   at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:50)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
>   at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:609)
>   at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
>   at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:460)
>   at com.ads.dqam.action.impl.PostgresValueAnalysis.persistValueAnalysis(PostgresValueAnalysis.java:25)
>   at com.ads.dqam.action.AbstractValueAnalysis.persistAnalysis(AbstractValueAnalysis.java:81)
>   at com.ads.dqam.Analysis.doAnalysis(Analysis.java:32)
>   at com.ads.dqam.Client.main(Client.java:71)}}
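The trace shows Spark going down the create-table path (JdbcUtils$.createTable via
JdbcRelationProvider.createRelation) even though SaveMode.Append was requested, so
its table-existence check did not see "dqvalue", while PostgreSQL then rejected the
CREATE TABLE because the relation does exist. One way to see what that same JDBC
endpoint actually exposes is a plain-JDBC probe like the sketch below. The URL and
credentials are copied from the report; the probe query and class name are
assumptions, loosely modeled on the zero-row existence check the Spark JDBC source
performs (the exact query varies by Spark version and dialect).

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class DqvalueVisibilityProbe {
    public static void main(String[] args) throws Exception {
        // Connection details copied from the bug description.
        Properties props = new Properties();
        props.put("user", "dbmig");
        props.put("password", "dbmig");
        String url = "jdbc:postgresql://127.0.0.1:30001/dbmig";

        try (Connection conn = DriverManager.getConnection(url, props);
             Statement st = conn.createStatement();
             // Zero-row probe: succeeds only if "dqvalue" is visible to this
             // connection (schema/search_path and case resolve to the table).
             ResultSet rs = st.executeQuery("SELECT * FROM dqvalue WHERE 1=0")) {
            System.out.println("dqvalue is visible to this connection");
        }
        // If this probe throws 'relation "dqvalue" does not exist' while the
        // Spark write fails with 'relation "dqvalue" already exists', the two
        // statements are not resolving to the same relation.
    }
}
{code}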
>
>
>