wypoon commented on code in PR #7744:
URL: https://github.com/apache/iceberg/pull/7744#discussion_r1211945426
##########
spark/v3.3/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestMigrateTableProcedure.java:
##########
@@ -184,4 +184,21 @@ public void testInvalidMigrateCases() {
"Cannot handle an empty identifier",
() -> sql("CALL %s.system.migrate('')", catalogName));
}
+
+ @Test
+ public void testMigratePartitionWithSpecialCharacter() throws IOException {
+ Assume.assumeTrue(catalogName.equals("spark_catalog"));
+ String location = temp.newFolder().toString();
+ sql(
+ "CREATE TABLE %s (id bigint NOT NULL, data string) USING parquet "
+ + "PARTITIONED BY (data) LOCATION '%s'",
Review Comment:
I think Spark uses a `Map<String, String>` to encode the partition values it
passes to `TableMigrationUtil` because it has no richer representation of the
partitions to work from.
https://github.com/apache/iceberg/blob/8858f1cfd3d1b0bd9b2d4230d1975a48d8bea1a8/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/SparkTableUtil.java#L173-L192
Spark's SessionCatalog returns a `Seq` of
`org.apache.spark.sql.catalyst.catalog.CatalogTablePartition`, and each of those
only carries its partition values as a `Map[String, String]`. So that is all
`SparkTableUtil` has to work with.
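For illustration, here is a minimal sketch (not the actual `SparkTableUtil` code; the class and method names are made up) of what the session catalog hands back: each `CatalogTablePartition` only exposes its partition values through `spec()`, which is a `Map[String, String]`.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.catalyst.TableIdentifier;
import org.apache.spark.sql.catalyst.catalog.CatalogTablePartition;
import org.apache.spark.sql.catalyst.catalog.SessionCatalog;
import scala.Option;
import scala.collection.JavaConverters;

// Hypothetical helper, only to illustrate the point above.
public class PartitionSpecSketch {

  // Lists the partitions of a session-catalog table and returns each partition's
  // values as the plain string map Spark stores for it.
  public static List<Map<String, String>> partitionSpecs(
      SparkSession spark, TableIdentifier ident) {
    SessionCatalog catalog = spark.sessionState().catalog();
    // listPartitions returns a Scala Seq of CatalogTablePartition; the only
    // partition values available on each entry are spec(), a Map[String, String].
    List<CatalogTablePartition> partitions =
        JavaConverters.seqAsJavaListConverter(
                catalog.listPartitions(ident, Option.empty()))
            .asJava();
    return partitions.stream()
        .map(p -> JavaConverters.mapAsJavaMapConverter(p.spec()).asJava())
        .collect(Collectors.toList());
  }
}
```

So by the time the migrate code sees a partition, every value (special characters included) is already just a string keyed by column name.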