[ https://issues.apache.org/jira/browse/SPARK-16402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15785990#comment-15785990 ]
Nicholas Chammas commented on SPARK-16402:
------------------------------------------
[~JustinPihony], [~smilegator] - Does the resolution of SPARK-14525 also
resolve this issue?
> JDBC source: Implement save API
> -------------------------------
>
> Key: SPARK-16402
> URL: https://issues.apache.org/jira/browse/SPARK-16402
> Project: Spark
> Issue Type: New Feature
> Components: SQL
> Affects Versions: 2.0.0
> Reporter: Xiao Li
>
> Currently, we are unable to call the `save` API of `DataFrameWriter` when the
> source is JDBC. For example,
> {noformat}
> df.write
>   .format("jdbc")
>   .option("url", url1)
>   .option("dbtable", "TEST.TRUNCATETEST")
>   .option("user", "testUser")
>   .option("password", "testPass")
>   .save()
> {noformat}
> The error message users get is:
> {noformat}
> org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider does not
> allow create table as select.
> java.lang.RuntimeException:
> org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider does not
> allow create table as select.
> {noformat}
> However, the `save` API is common to all the other data sources, such as Parquet.
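> As a point of reference, writing through the JDBC-specific `jdbc` method of `DataFrameWriter` already works. A minimal sketch of that existing path, reusing the same `url1`, table, and credentials from the example above:
> {noformat}
> import java.util.Properties
>
> val props = new Properties()
> props.setProperty("user", "testUser")
> props.setProperty("password", "testPass")
>
> // Dedicated JDBC writer path; unlike format("jdbc").save(), this does not
> // go through the JdbcRelationProvider create-table-as-select check.
> df.write
>   .mode("overwrite")
>   .jdbc(url1, "TEST.TRUNCATETEST", props)
> {noformat}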