youngxinler commented on code in PR #6886:
URL: https://github.com/apache/iceberg/pull/6886#discussion_r1172167494


##########
spark/v3.3/spark/src/test/java/org/apache/iceberg/spark/sql/TestCreateTable.java:
##########
@@ -341,4 +343,42 @@ public void testDowngradeTableToFormatV1ThroughTablePropertyFails() {
         "Cannot downgrade v2 table to v1",
         () -> sql("ALTER TABLE %s SET TBLPROPERTIES ('format-version'='1')", tableName));
   }
+
+  @Test
+  public void testCatalogSpecificWarehouseLocation() throws Exception {
+    File warehouseLocation = temp.newFolder(catalogName);
+    Assert.assertTrue(warehouseLocation.delete());
+    String location = "file:" + warehouseLocation.getAbsolutePath();
+    spark.conf().set("spark.sql.catalog." + catalogName + ".warehouse", location);
+    String testNameSpace =
+        (catalogName.equals("spark_catalog") ? "" : catalogName + ".") + "default1";
+    String testTableName = testNameSpace + ".table";
+    TableIdentifier newTableIdentifier = TableIdentifier.of("default1", "table");
+
+    sql("CREATE NAMESPACE IF NOT EXISTS %s", testNameSpace);
+    sql("CREATE TABLE %s " + "(id BIGINT NOT NULL, data STRING) " + "USING iceberg", testTableName);
+
+    Table table = Spark3Util.loadIcebergCatalog(spark, catalogName).loadTable(newTableIdentifier);
+    if ("spark_catalog".equals(catalogName)) {
+      Assert.assertTrue(table.location().startsWith(spark.sqlContext().conf().warehousePath()));
+    } else {
+      Assert.assertTrue(table.location().startsWith(location));
+    }
+    sql("DROP TABLE IF EXISTS %s", testTableName);
+    sql("DROP NAMESPACE IF EXISTS %s", testNameSpace);
+  }
+
+  @Test
+  public void testCatalogWithSparkSqlWarehouseDir() {
+    spark.conf().unset("spark.sql.catalog." + catalogName + ".warehouse");

Review Comment:
   Thanks! I ran into this problem as well, but starting two SparkContexts in the same JVM is not supported: `SparkSession.getOrCreate` will simply return the existing Spark instance.
   That is why I rewrote the code and moved this test into a newly created `TestSparkWarehouseLocation`. If it stayed in `TestCreateTable` and reset the warehouse config for different catalogs, a single failure would let unit tests affect each other, and managing the catalog state changes and recovery up front would complicate the overall `TestCreateTable` code; those changes are unnecessary for the other tests. The latest changes have been committed, what do you think?
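
   For context, here is a minimal Java sketch of the getOrCreate-style singleton behavior described above. This is not Spark's actual implementation; the `Session` class and its fields are hypothetical stand-ins used only to illustrate why a second warehouse config set in the same JVM never takes effect:

   ```java
   public class GetOrCreateSketch {

     // Hypothetical stand-in for SparkSession; names are assumptions for illustration.
     static final class Session {
       final String warehouse;

       private Session(String warehouse) {
         this.warehouse = warehouse;
       }

       // One active instance per JVM, mirroring Spark's active-session behavior.
       private static Session active;

       static synchronized Session getOrCreate(String warehouse) {
         if (active == null) {
           active = new Session(warehouse);
         }
         // Subsequent calls ignore the new config and return the existing instance.
         return active;
       }
     }

     public static void main(String[] args) {
       Session first = Session.getOrCreate("/warehouse/a");
       Session second = Session.getOrCreate("/warehouse/b");
       // Same instance: the second warehouse setting never takes effect.
       System.out.println("same instance: " + (first == second));
       System.out.println("warehouse: " + second.warehouse);
     }
   }
   ```

   Running `main` prints `same instance: true` and `warehouse: /warehouse/a`, which is exactly why resetting the warehouse config mid-suite cannot produce a fresh session and the test had to move to its own class.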



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

