[ 
https://issues.apache.org/jira/browse/SPARK-29166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin updated SPARK-29166:
-------------------------------
    Description: 
Dynamic partition inserts into a Hive table have restrictions that limit the maximum
number of partitions that can be created. See
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-DynamicPartitionInserts
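
For illustration, here is a minimal sketch of the Hive-side limits. The property
names and default values are the ones documented on the wiki page above; the table
names are made up for the example:

    -- Enable dynamic partition inserts
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;
    -- Fail the query once it would create more than this many partitions in total
    SET hive.exec.max.dynamic.partitions=1000;
    -- Cap the dynamic partitions created by any single mapper or reducer
    SET hive.exec.max.dynamic.partitions.pernode=100;

    -- Partitioning on a high-cardinality column such as an id is rejected once
    -- the limit is hit, instead of creating a huge number of directories
    INSERT OVERWRITE TABLE sales PARTITION (id)
    SELECT amount, id FROM raw_sales;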

This limit is very useful for preventing mistaken partitioning on a high-cardinality
column such as an ID. It also protects the NameNode from a flood of directory-creation
RPC calls.

Data source tables need a similar limitation.
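
A sketch of what an equivalent knob for data source tables could look like. The
configuration name spark.sql.sources.maxDynamicPartitions is hypothetical and used
here only for illustration:

    -- Hypothetical Spark SQL configuration (not an existing setting)
    SET spark.sql.sources.maxDynamicPartitions=1000;

    -- An insert into a data source (e.g. Parquet) table that would produce more
    -- dynamic partitions than the limit should then fail fast, before issuing
    -- mass directory-creation RPCs to the NameNode
    INSERT OVERWRITE TABLE parquet_sales PARTITION (id)
    SELECT amount, id FROM raw_sales;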

  was:
Dynamic partition inserts into a Hive table have restrictions that limit the maximum
number of partitions that can be created. See
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-DynamicPartitionInserts

This limit is very useful for preventing mistaken partitioning on a high-cardinality
column such as an ID. It also protects the NameNode from a flood of directory-creation
RPC calls.


> Add a parameter to limit the number of dynamic partitions for data source 
> table
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-29166
>                 URL: https://issues.apache.org/jira/browse/SPARK-29166
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.4, 3.0.0
>            Reporter: Lantao Jin
>            Priority: Major
>
> Dynamic partition inserts into a Hive table have restrictions that limit the maximum
> number of partitions that can be created. See
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-DynamicPartitionInserts
> This limit is very useful for preventing mistaken partitioning on a high-cardinality
> column such as an ID. It also protects the NameNode from a flood of directory-creation
> RPC calls.
> Data source tables need a similar limitation.


