MaxGekk commented on a change in pull request #30538:
URL: https://github.com/apache/spark/pull/30538#discussion_r553534314



##########
File path: sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
##########
@@ -133,6 +133,7 @@ case class InsertIntoHiveTable(
     val numDynamicPartitions = partition.values.count(_.isEmpty)
     val numStaticPartitions = partition.values.count(_.nonEmpty)
     val partitionSpec = partition.map {
+      case (key, Some(null)) => key -> ExternalCatalogUtils.DEFAULT_PARTITION_NAME

Review comment:
       The test:
   ```scala
     test("SPARK-33591: '' as a partition value") {
       val t = "part_table"
       withTable(t) {
        sql(s"CREATE TABLE $t (col1 INT, p1 STRING) $defaultUsing PARTITIONED BY (p1)")
         sql(s"INSERT INTO TABLE $t PARTITION (p1 = '') SELECT 0")
       }
     }
   ```
   fails with:
   ```
   Partition spec is invalid. The spec ([p1=Some()]) contains an empty partition column value
   org.apache.spark.sql.AnalysisException: Partition spec is invalid. The spec ([p1=Some()]) contains an empty partition column value
        at org.apache.spark.sql.execution.datasources.PreprocessTableInsertion$.org$apache$spark$sql$execution$datasources$PreprocessTableInsertion$$preprocess(rules.scala:412)
   ```
   which is thrown at https://github.com/apache/spark/blob/d730b6bdaa92f2ca19cc8852ac58035e28d47a4f/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala#L408-L409
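   The pattern under discussion can be sketched standalone. This is a simplified model, not the actual `InsertIntoHiveTable` code: `DEFAULT_PARTITION_NAME` is inlined here (assuming Hive's `__HIVE_DEFAULT_PARTITION__`, which is what `ExternalCatalogUtils.DEFAULT_PARTITION_NAME` resolves to), and the dynamic-partition branch is reduced to an empty string. Note the added `Some(null)` case maps a null static value to the default partition name, but an empty string (`Some("")`) never reaches this mapping — it is rejected earlier by `PreprocessTableInsertion`, which is why the test above fails:
   ```scala
   // Simplified sketch of the partitionSpec mapping; names and the inlined
   // constant are assumptions, not the real InsertIntoHiveTable implementation.
   object PartitionSpecSketch {
     // Assumed value of ExternalCatalogUtils.DEFAULT_PARTITION_NAME.
     val DEFAULT_PARTITION_NAME = "__HIVE_DEFAULT_PARTITION__"

     def toPartitionSpec(partition: Map[String, Option[String]]): Map[String, String] =
       partition.map {
         // The case added in this PR: a null static value becomes the default name.
         case (key, Some(null)) => key -> DEFAULT_PARTITION_NAME
         // Ordinary static partition value.
         case (key, Some(value)) => key -> value
         // Dynamic partition: value is resolved at write time (simplified here).
         case (key, None) => key -> ""
       }

     def main(args: Array[String]): Unit = {
       val spec = toPartitionSpec(Map("p1" -> Some(null), "p2" -> Some("a")))
       println(spec("p1")) // __HIVE_DEFAULT_PARTITION__
       println(spec("p2")) // a
     }
   }
   ```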




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


