Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15995#discussion_r91855259

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala ---
@@ -192,19 +200,13 @@ case class DataSourceAnalysis(conf: CatalystConf) extends Rule[LogicalPlan] {
       var initialMatchingPartitions: Seq[TablePartitionSpec] = Nil
      var customPartitionLocations: Map[TablePartitionSpec, String] = Map.empty

-      val staticPartitionKeys: TablePartitionSpec = if (overwrite.enabled) {
-        overwrite.staticPartitionKeys.map { case (k, v) =>
-          (partitionSchema.map(_.name).find(_.equalsIgnoreCase(k)).get, v)
-        }
-      } else {
-        Map.empty
-      }
+      val staticPartitions = parts.filter(_._2.nonEmpty).map { case (k, v) => k -> v.get }
--- End diff --

The column names in the partition spec are already normalized by the `PreprocessTableInsertion` rule, so we don't need to consider case sensitivity here. The `if-else` is not needed either, because:

1. `staticPartitions` is used to compute `matchingPartitions` in [this line](https://github.com/apache/spark/pull/15995/files#diff-d99813bd5bbc18277e4090475e4944cfR208), and `matchingPartitions` is used to decide which partitions need to be [added to the metastore](https://github.com/apache/spark/pull/15995/files#diff-d99813bd5bbc18277e4090475e4944cfR219). Previously, if `overwrite` was false, we would get all partitions as `matchingPartitions` and issue a lot of unnecessary `ADD PARTITION` calls. Removing the `if-else` fixes this.
2. After we pass `staticPartitions` to `InsertIntoHadoopFsRelationCommand`, it is used only in `Overwrite` mode, so the `if-else` is unnecessary.
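To illustrate the new one-liner: `parts` maps each partition column to an `Option[String]`, where `Some(value)` means the value was given statically in the `INSERT ... PARTITION` clause and `None` means it is dynamic. A minimal standalone sketch (the column names `ds` and `hr` are hypothetical, not from the PR):

```scala
// Partition spec as it might arrive from the analyzer:
// INSERT ... PARTITION (ds = '2016-12-01', hr)
val parts: Map[String, Option[String]] = Map(
  "ds" -> Some("2016-12-01"), // static: value fixed in the PARTITION clause
  "hr" -> None                // dynamic: value taken from the inserted data
)

// Keep only the static entries and unwrap their Option values,
// mirroring the line added in the diff.
val staticPartitions: Map[String, String] =
  parts.filter(_._2.nonEmpty).map { case (k, v) => k -> v.get }

println(staticPartitions) // Map(ds -> 2016-12-01)
```

Because dynamic partitions are simply dropped, the result is exactly the spec needed to look up `matchingPartitions`, with no special casing on the overwrite flag.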