viirya commented on a change in pull request #25306: [SPARK-28573][SQL] Convert InsertIntoTable(HiveTableRelation) to DataSource inserting for partitioned table
URL: https://github.com/apache/spark/pull/25306#discussion_r311371869
 
 

 ##########
 File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala
 ##########
 @@ -209,13 +211,16 @@ private[hive] class HiveMetastoreCatalog(sparkSession: SparkSession) extends Log
 
           val updatedTable = inferIfNeeded(relation, options, fileFormat, Option(fileIndex))
 
 +          // Spark SQL's data source table now support static and dynamic partition insert. Source
 +          // table converted from Hive table should always use dynamic.
 +          val enableDynamicPartition = options.updated("partitionOverwriteMode", "dynamic")
 
 Review comment:
   I'm not sure about this part. Why should it always use dynamic? Don't we need to respect `spark.sql.sources.partitionOverwriteMode`?
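   For illustration only (not the PR's actual change): if the intent is to honor the session-level setting rather than hardcode `dynamic`, a rough sketch could look like the one below. It assumes `options` and `sparkSession` are in scope as they are in `HiveMetastoreCatalog`, and that `SQLConf.PARTITION_OVERWRITE_MODE` is the config entry backing `spark.sql.sources.partitionOverwriteMode`.

   ```scala
   import org.apache.spark.sql.internal.SQLConf

   // Sketch: read the session's partitionOverwriteMode ("STATIC" or "DYNAMIC")
   // and pass it through to the converted data source relation, instead of
   // always overriding it with "dynamic".
   val sessionMode: String =
     sparkSession.sessionState.conf.getConf(SQLConf.PARTITION_OVERWRITE_MODE)
   val optionsWithMode: Map[String, String] =
     options.updated("partitionOverwriteMode", sessionMode)
   ```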
