Github user danielvdende commented on a diff in the pull request:
https://github.com/apache/spark/pull/20057#discussion_r168921808
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala ---
@@ -119,6 +119,8 @@ class JDBCOptions(
// ------------------------------------------------------------
// if to truncate the table from the JDBC database
val isTruncate = parameters.getOrElse(JDBC_TRUNCATE, "false").toBoolean
+
+  val isCascadeTruncate: Option[Boolean] = parameters.get(JDBC_CASCADE_TRUNCATE).map(_.toBoolean)
--- End diff ---
The reason I didn't do that is that each dialect already defines an
`isCascadingTruncateTable` function. According to the docs, it indicates
whether a `TRUNCATE TABLE` command cascades by default for that dialect. I
thought it would be nice to use that value as the default for
`isCascadeTruncate`: that way, if there ever is a dialect that cascades
truncations by default, we don't 'hardcode' a default value.
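
To make the idea concrete, here is a minimal, self-contained sketch of the
resolution order I had in mind: an explicit user option wins, then the
dialect's own default, then `false` as a last resort. The object and helper
names below are illustrative only, not Spark's actual API; in the real code
the dialect default would come from the dialect's `isCascadingTruncateTable`
where the truncate statement is built.

    // Standalone sketch of the intended default-resolution order:
    // explicit user option > dialect default > false.
    object CascadeDefaultSketch {
      // Stand-in for a dialect's cascade default: Some(true/false) when the
      // dialect knows its TRUNCATE behaviour, None when it is unknown.
      // (The per-dialect values here are assumptions for illustration only.)
      def dialectCascadesByDefault(dialectName: String): Option[Boolean] =
        dialectName match {
          case "postgresql" => Some(false)
          case _            => None
        }

      // Mirrors isCascadeTruncate: Option[Boolean] from JDBCOptions.
      def resolveCascade(userOption: Option[Boolean], dialectName: String): Boolean =
        userOption
          .orElse(dialectCascadesByDefault(dialectName)) // fall back to dialect default
          .getOrElse(false)                              // last-resort default

      def main(args: Array[String]): Unit = {
        println(resolveCascade(Some(true), "postgresql")) // true: user option wins
        println(resolveCascade(None, "postgresql"))       // false: dialect default
        println(resolveCascade(None, "mysql"))            // false: last-resort fallback
      }
    }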
---