Github user danielvdende commented on a diff in the pull request:
https://github.com/apache/spark/pull/19911#discussion_r155330583
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/PostgresDialect.scala ---
@@ -100,5 +100,5 @@ private object PostgresDialect extends JdbcDialect {
}
- override def isCascadingTruncateTable(): Option[Boolean] = Some(true)
+ override def isCascadingTruncateTable(): Option[Boolean] = Some(false)
--- End diff --
@dongjoon-hyun indeed, Spark does not use `TRUNCATE` for Postgres, due to
the value of `JdbcUtils.isCascadingTruncateTable(url)`. The result, however, is
that it will try to drop the table instead, which fails for the exact same
reason if there are tables dependent on the table to be truncated/dropped
(e.g. via a foreign key constraint). I think this raises another issue: it
should be possible to specify a `cascade` flag for these operations (but
that's for another JIRA). Moreover, the definition of the
`JdbcUtils.isCascadingTruncateTable` function is: ```Return Some[true] iff
`TRUNCATE TABLE` causes cascading default.```, which isn't the case for
Postgres (as mentioned in the JIRA).
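
To illustrate the `cascade` flag idea, here is a minimal sketch (not Spark's
actual API; `truncateQuery` and its parameters are hypothetical) of how a
dialect could build the statement so Postgres truncates dependent tables too:

```scala
// Hypothetical sketch: a truncate-statement builder with an optional
// cascade flag. In Postgres, plain TRUNCATE fails if other tables
// reference the target via foreign keys; TRUNCATE ... CASCADE also
// truncates the referencing tables.
object TruncateSketch {
  def truncateQuery(table: String, cascade: Boolean = false): String = {
    if (cascade) s"TRUNCATE TABLE $table CASCADE"
    else s"TRUNCATE TABLE $table"
  }
}
```

With such a builder, a dialect could emit the `CASCADE` form only when the
user explicitly opts in, rather than silently cascading by default.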
---