Github user danielvdende commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20057#discussion_r168921851
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1372,6 +1372,13 @@ the following case-insensitive options:
          This is a JDBC writer related option. When 
<code>SaveMode.Overwrite</code> is enabled, this option causes Spark to 
truncate an existing table instead of dropping and recreating it. This can be 
more efficient, and prevents the table metadata (e.g., indices) from being 
removed. However, it will not work in some cases, such as when the new data has 
a different schema. It defaults to <code>false</code>. This option applies only 
to writing.
        </td>
       </tr>
    +  
    +  <tr>
    +    <td><code>cascadeTruncate</code></td>
    +    <td>
    +        This is a JDBC writer related option. If enabled and supported by 
the JDBC database (PostgreSQL and Oracle at the moment), this option allows 
execution of a <code>TRUNCATE TABLE t CASCADE</code>. This will affect other 
tables, and thus should be used with care. This option applies only to writing.
    --- End diff --
    
    As mentioned in another comment, I think we should use the value of 
`isCascadingTruncateTable` as the default, rather than always `false`. Seems 
like the correct use of that variable. I can add a sentence to the docs 
specifying that.
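
    For illustration, the behavior under discussion could be sketched roughly as 
follows (the function and parameter names are hypothetical, not Spark's actual 
`JdbcDialect` API; the idea is that the `cascade` default would come from the 
dialect's `isCascadingTruncateTable` rather than always being `false`):

    ```python
    # Hypothetical sketch of how a dialect might build its truncate
    # statement. Names are illustrative only; this is not Spark code.
    def truncate_query(table, cascade=False):
        """Build the TRUNCATE statement for a JDBC table."""
        query = f"TRUNCATE TABLE {table}"
        if cascade:
            # Only dialects that support it (e.g. PostgreSQL, Oracle)
            # should append CASCADE, since it also truncates tables
            # that reference this one, hence the "use with care" note.
            query += " CASCADE"
        return query
    ```

    With `cascade=False` this yields the plain `TRUNCATE TABLE t` used today; 
with `cascade=True` it yields `TRUNCATE TABLE t CASCADE`.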


---
