MaxGekk commented on a change in pull request #30482: URL: https://github.com/apache/spark/pull/30482#discussion_r529558445
##########
File path: sql/core/src/test/scala/org/apache/spark/sql/connector/AlterTablePartitionV2SQLSuite.scala
##########

@@ -243,4 +243,22 @@ class AlterTablePartitionV2SQLSuite extends DatasourceV2SQLBase {
       assert(!partTable.partitionExists(expectedPartition))
     }
   }
+
+  test("SPARK-33529: handle __HIVE_DEFAULT_PARTITION__") {
+    val t = "testpart.ns1.ns2.tbl"
+    withTable(t) {
+      sql(s"CREATE TABLE $t (part0 string) USING foo PARTITIONED BY (part0)")
+      val partTable = catalog("testpart")
+        .asTableCatalog
+        .loadTable(Identifier.of(Array("ns1", "ns2"), "tbl"))
+        .asPartitionable
+      val expectedPartition = InternalRow.fromSeq(Seq[Any](null))
+      assert(!partTable.partitionExists(expectedPartition))
+      val partSpec = "PARTITION (part0 = '__HIVE_DEFAULT_PARTITION__')"

Review comment:
   > It's more like a hive specific thing and we should let v2 implementation to decide ...

   It is already a Spark-specific thing too. Implementations don't see `'__HIVE_DEFAULT_PARTITION__'` at all because it is replaced by `null` during the analysis phase.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
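The substitution described in the review comment can be sketched in isolation. This is an illustrative sketch only, not Spark's actual analyzer code: `normalize` is a hypothetical helper, and the constant is assumed to mirror Spark's `ExternalCatalogUtils.DEFAULT_PARTITION_NAME`.

```scala
object DefaultPartitionSketch {
  // Sentinel that Spark rewrites to null before a v2 implementation
  // ever sees the partition spec (hypothetical stand-in for
  // ExternalCatalogUtils.DEFAULT_PARTITION_NAME).
  val HiveDefaultPartition = "__HIVE_DEFAULT_PARTITION__"

  // Analysis-phase rewrite sketch: map the sentinel string to null,
  // leaving all other partition values untouched.
  def normalize(spec: Map[String, String]): Map[String, Any] =
    spec.map { case (col, value) =>
      col -> (if (value == HiveDefaultPartition) null else value)
    }
}
```

Under this reading, a spec like `PARTITION (part0 = '__HIVE_DEFAULT_PARTITION__')` reaches the v2 `partitionExists` call as a row containing `null`, which is why the test above builds `InternalRow.fromSeq(Seq[Any](null))` as the expected partition.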