stevenzwu commented on code in PR #10777:
URL: https://github.com/apache/iceberg/pull/10777#discussion_r1690630724


##########
flink/v1.19/flink/src/test/java/org/apache/iceberg/flink/CatalogTestBase.java:
##########
@@ -118,26 +118,4 @@ protected String getFullQualifiedTableName(String tableName) {
   static String getURI(HiveConf conf) {
     return conf.get(HiveConf.ConfVars.METASTOREURIS.varname);
   }
-
-  static String toWithClause(Map<String, String> props) {

Review Comment:
   This helper is moved to `SqlBase`.
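
   For reference, a minimal sketch of what the relocated helper does: it renders a
   property map as a SQL `WITH` clause body (the exact formatting in `SqlBase` may
   differ from this sketch):

   ```java
   import java.util.Map;

   // Renders {"format-version": "2", "write.format.default": "avro"}
   // as ('format-version'='2','write.format.default'='avro').
   static String toWithClause(Map<String, String> props) {
     StringBuilder builder = new StringBuilder("(");
     int count = 0;
     for (Map.Entry<String, String> entry : props.entrySet()) {
       if (count++ > 0) {
         builder.append(",");
       }
       builder
           .append("'").append(entry.getKey()).append("'=")
           .append("'").append(entry.getValue()).append("'");
     }
     return builder.append(")").toString();
   }
   ```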



##########
flink/v1.19/flink/src/test/java/org/apache/iceberg/flink/TestFlinkTableSink.java:
##########
@@ -169,39 +154,6 @@ public void testOverwriteTable() throws Exception {
         icebergTable, Lists.newArrayList(SimpleDataUtil.createRecord(2, "b")));
   }
 
-  @TestTemplate

Review Comment:
   Moved these tests to the new `TestFlinkTableSinkExtended`, which is 
parameterized only on the `isStreamingJob` boolean. Currently every test in 
this class runs with 20+ parameter combinations, and I don't want to add the 
range distribution tests here and slow the test run down even further.
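
   For illustration, a rough sketch of the new class's single test dimension,
   assuming the `ParameterizedTestExtension`-based setup used elsewhere in this
   module (field and method names below are illustrative):

   ```java
   import java.util.Arrays;
   import java.util.List;
   import org.apache.iceberg.Parameter;
   import org.apache.iceberg.ParameterizedTestExtension;
   import org.apache.iceberg.Parameters;
   import org.junit.jupiter.api.extension.ExtendWith;

   @ExtendWith(ParameterizedTestExtension.class)
   public class TestFlinkTableSinkExtended extends SqlBase {

     // The only parameterized dimension: streaming vs. batch execution.
     @Parameter(index = 0)
     private boolean isStreamingJob;

     @Parameters(name = "isStreamingJob={0}")
     public static List<Object[]> parameters() {
       return Arrays.asList(new Object[] {true}, new Object[] {false});
     }
   }
   ```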



##########
flink/v1.19/flink/src/test/java/org/apache/iceberg/flink/sink/TestFlinkIcebergSink.java:
##########
@@ -141,102 +124,14 @@ public void testWriteRowData() throws Exception {
     SimpleDataUtil.assertTableRows(table, convertToRowData(rows));
   }
 
-  private void testWriteRow(TableSchema tableSchema, DistributionMode distributionMode)
-      throws Exception {
-    List<Row> rows = createRows("");
-    DataStream<Row> dataStream = env.addSource(createBoundedSource(rows), ROW_TYPE_INFO);
-
-    FlinkSink.forRow(dataStream, SimpleDataUtil.FLINK_SCHEMA)
-        .table(table)
-        .tableLoader(tableLoader)
-        .tableSchema(tableSchema)
-        .writeParallelism(parallelism)
-        .distributionMode(distributionMode)
-        .append();
-
-    // Execute the program.
-    env.execute("Test Iceberg DataStream.");
-
-    SimpleDataUtil.assertTableRows(table, convertToRowData(rows));
-  }
-
-  private int partitionFiles(String partition) throws IOException {
-    return SimpleDataUtil.partitionDataFiles(table, ImmutableMap.of("data", partition)).size();
-  }
-
   @TestTemplate
   public void testWriteRow() throws Exception {
-    testWriteRow(null, DistributionMode.NONE);
+    testWriteRow(parallelism, null, DistributionMode.NONE);
   }
 
   @TestTemplate
   public void testWriteRowWithTableSchema() throws Exception {
-    testWriteRow(SimpleDataUtil.FLINK_SCHEMA, DistributionMode.NONE);
-  }
-
-  @TestTemplate
-  public void testJobNoneDistributeMode() throws Exception {

Review Comment:
   Moved all distribution-mode tests to `TestFlinkIcebergSinkDistributionMode`. 
We don't need to exercise distribution mode against every file format (Avro, 
ORC, Parquet).
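
   As a sketch, the new class can pin a single file format and vary only the
   sink-related dimensions. The base class name and the parameter choices below
   are assumptions for illustration, not the exact class contents:

   ```java
   import java.util.Arrays;
   import java.util.List;
   import org.apache.iceberg.FileFormat;
   import org.apache.iceberg.Parameter;
   import org.apache.iceberg.ParameterizedTestExtension;
   import org.apache.iceberg.Parameters;
   import org.junit.jupiter.api.extension.ExtendWith;

   @ExtendWith(ParameterizedTestExtension.class)
   public class TestFlinkIcebergSinkDistributionMode extends TestFlinkIcebergSinkBase {

     // Distribution mode behavior is independent of the file format,
     // so one format is enough; no need to multiply by Avro/ORC/Parquet.
     private static final FileFormat FORMAT = FileFormat.PARQUET;

     @Parameter(index = 0)
     private int parallelism;

     @Parameter(index = 1)
     private boolean partitioned;

     @Parameters(name = "parallelism={0}, partitioned={1}")
     public static List<Object[]> parameters() {
       return Arrays.asList(
           new Object[] {1, true},
           new Object[] {1, false},
           new Object[] {2, true},
           new Object[] {2, false});
     }
   }
   ```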



##########
flink/v1.19/flink/src/test/java/org/apache/iceberg/flink/TestBase.java:
##########
@@ -41,7 +40,7 @@
 import org.junit.jupiter.api.extension.RegisterExtension;
 import org.junit.jupiter.api.io.TempDir;
 
-public abstract class TestBase extends TestBaseUtils {

Review Comment:
   `TestBaseUtils` seems like a pure utility class with static methods only, so 
there is no need for `TestBase` to extend it.
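
   A minimal sketch of calling the static helpers directly instead of
   inheriting them; the wrapper method is hypothetical, and
   `compareResultAsText` is one of the static helpers in Flink's
   `TestBaseUtils`:

   ```java
   import java.util.List;
   import org.apache.flink.test.util.TestBaseUtils;

   public abstract class TestBase {

     // Hypothetical helper: delegate to the static utility rather than
     // inheriting it, keeping TestBase's own hierarchy free for other bases.
     protected static void assertResultText(List<String> actual, String expected) {
       TestBaseUtils.compareResultAsText(actual, expected);
     }
   }
   ```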


