[https://issues.apache.org/jira/browse/SPARK-37217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17439206#comment-17439206]
Apache Spark commented on SPARK-37217:
--------------------------------------
User 'cxzl25' has created a pull request for this issue:
https://github.com/apache/spark/pull/34493
> Dynamic partitions should fail quickly when writing to external tables to
> prevent data deletion
> -----------------------------------------------------------------------------------------------
>
> Key: SPARK-37217
> URL: https://issues.apache.org/jira/browse/SPARK-37217
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 3.2.0
> Reporter: dzcxzl
> Priority: Trivial
>
> [SPARK-29295|https://issues.apache.org/jira/browse/SPARK-29295] introduced a
> mechanism for writing to external tables with dynamic partitions in which the
> data in the target partitions is deleted before the load runs.
> Suppose 1001 partitions are written: the data of those 1001 partitions is
> deleted first, but because hive.exec.max.dynamic.partitions defaults to 1000,
> loadDynamicPartitions then fails, even though the data of the 1001 partitions
> has already been deleted.
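
A minimal sketch of the scenario described above, assuming a Hive-backed
external table; the table name, location, column names, and partition count
are illustrative and not taken from the report:

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SPARK-37217-sketch")
  .enableHiveSupport()
  .getOrCreate()

// Allow fully dynamic partition inserts on the Hive side.
// hive.exec.max.dynamic.partitions is left at its default of 1000.
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

// Hypothetical external table partitioned by a single column.
spark.sql(
  """CREATE EXTERNAL TABLE IF NOT EXISTS ext_tbl (id INT)
    |PARTITIONED BY (p INT)
    |LOCATION '/tmp/ext_tbl'
    |""".stripMargin)

// Source data spanning 1001 distinct partition values.
spark.range(0, 1001)
  .selectExpr("CAST(id AS INT) AS id", "CAST(id AS INT) AS p")
  .createOrReplaceTempView("src")

// Dynamic-partition overwrite: the existing data of the target partitions is
// removed first, then loadDynamicPartitions fails because 1001 > 1000,
// leaving the already-deleted partitions empty.
spark.sql(
  """INSERT OVERWRITE TABLE ext_tbl PARTITION (p)
    |SELECT id, p FROM src
    |""".stripMargin)
{code}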