LantaoJin commented on a change in pull request #25840: [SPARK-29166][SQL] Add
parameters to limit the number of dynamic partitions for data source table
URL: https://github.com/apache/spark/pull/25840#discussion_r326968291
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/SQLHadoopMapReduceCommitProtocol.scala
##########
@@ -63,7 +70,29 @@ class SQLHadoopMapReduceCommitProtocol(
         committer = ctor.newInstance()
       }
     }
+    totalPartitions = new AtomicInteger(0)
     logInfo(s"Using output committer class ${committer.getClass.getCanonicalName}")
     committer
   }
+
+  override def newTaskTempFile(
+      taskContext: TaskAttemptContext, dir: Option[String], ext: String): String = {
+    val path = super.newTaskTempFile(taskContext, dir, ext)
+    totalPartitions.incrementAndGet()
+    if (dynamicPartitionOverwrite) {
+      if (totalPartitions.get > maxDynamicPartitions) {
Review comment:
> it is the max number of partitions a data source can have at any given time

I don't think so. We should keep the same semantics as Hive. Note that **hive.exec.max.dynamic.partitions** (default value 1000) is the total number of dynamic partitions that can be created by one DML statement.
https://cwiki.apache.org/confluence/display/Hive/Tutorial#Tutorial-Dynamic-PartitionInsert
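
For illustration, a minimal sketch of that per-statement behavior (the table names `target` and `source` and the partition column `dt` are made up for the example; this is not code from the PR):

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: shows how hive.exec.max.dynamic.partitions scopes its limit
// to a single DML statement. Assumes a Hive-backed table `target`
// partitioned by `dt` and a source table `source` already exist.
object MaxDynamicPartitionsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("max-dynamic-partitions-sketch")
      .master("local[*]")
      .enableHiveSupport()
      .getOrCreate()

    // Allow fully dynamic partitioning (no static leading partition value).
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    // Cap the total dynamic partitions one statement may create.
    spark.sql("SET hive.exec.max.dynamic.partitions=1000")

    // If this SELECT produces more than 1000 distinct values of `dt`, the
    // whole INSERT fails rather than writing a subset of partitions.
    spark.sql(
      """INSERT OVERWRITE TABLE target PARTITION (dt)
        |SELECT value, dt FROM source""".stripMargin)

    spark.stop()
  }
}
```

The key point is that the check counts only the dynamic partitions written by this one statement; the table may already contain more partitions than the limit and the insert still succeeds.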