hililiwei commented on a change in pull request #3549:
URL: https://github.com/apache/iceberg/pull/3549#discussion_r750908523



##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/source/TestRuntimeFiltering.java
##########
@@ -44,10 +44,10 @@
 import static org.apache.spark.sql.functions.date_add;
 import static org.apache.spark.sql.functions.expr;
 
-public class TestRuntimeFiltering extends SparkCatalogTestBase {
+public class TestRuntimeFiltering extends SparkSpecifyCatalogTestBase {
 
-  public TestRuntimeFiltering(String catalogName, String implementation, Map<String, String> config) {
-    super(catalogName, implementation, config);
+  public TestRuntimeFiltering() {
+    super(SparkCatalogType.SPARK_CATALOG);

Review comment:
       
https://github.com/apache/iceberg/blob/f6fdeb0e63f3058eabf860bd893bea24f6649b4c/spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/SparkTestBase.java#L54-L76
   
   Can this be achieved by modifying `SparkTestBase` as follows, so the metastore is only started for catalogs that need it?
   
   ```
     @Before
     public void checkMetastoreAndSpark() {
       // Lazily initialize the shared session with double-checked locking.
       // Note: for this pattern to be safe, the spark field should be volatile.
       if (SparkTestBase.spark == null) {
         synchronized (SparkTestBase.class) {
           if (SparkTestBase.spark == null) {
             // the Hadoop catalog does not need a Hive metastore; everything else does
             if (StringUtils.equals(catalogName, SPARK_CATALOG_HADOOP.catalogName())) {
               startSpark();
             } else {
               startMetastoreAndSpark();
             }
           }
         }
       }
     }

     // starts a local session without a Hive metastore, for Hadoop catalog tests
     public static void startSpark() {
       SparkTestBase.spark = SparkSession.builder()
           .master("local[2]")
           .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), "dynamic")
           .getOrCreate();
     }

     // starts an embedded Hive metastore and a session configured to use it
     public static void startMetastoreAndSpark() {
       SparkTestBase.metastore = new TestHiveMetastore();
       metastore.start();
       SparkTestBase.hiveConf = metastore.hiveConf();

       SparkTestBase.spark = SparkSession.builder()
           .master("local[2]")
           .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), "dynamic")
           .config("spark.hadoop." + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))
           .enableHiveSupport()
           .getOrCreate();

       SparkTestBase.catalog = (HiveCatalog)
           CatalogUtil.loadCatalog(HiveCatalog.class.getName(), "hive", ImmutableMap.of(), hiveConf);

       try {
         catalog.createNamespace(Namespace.of("default"));
       } catch (AlreadyExistsException ignored) {
         // the default namespace already exists; ignore the create error
       }
     }
   ```
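   
   For reference, here is a minimal sketch of what the `SparkCatalogType` enum referenced above might look like. This is an assumption based on the `super(SparkCatalogType.SPARK_CATALOG)` call in the diff; the actual enum constants and catalog-name strings in the PR may differ.
   
   ```
   // Hypothetical sketch of the SparkCatalogType enum assumed by the snippet
   // above. The constant names come from the diff and the snippet; the
   // catalog-name strings are illustrative only.
   public enum SparkCatalogType {
     SPARK_CATALOG("spark_catalog"),
     SPARK_CATALOG_HADOOP("testhadoop");
   
     private final String catalogName;
   
     SparkCatalogType(String catalogName) {
       this.catalogName = catalogName;
     }
   
     // name used to register and look up the catalog in the Spark session config
     public String catalogName() {
       return catalogName;
     }
   }
   ```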
   
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


