zhangjun0x01 commented on a change in pull request #1936:
URL: https://github.com/apache/iceberg/pull/1936#discussion_r560857549
##########
File path: flink/src/test/java/org/apache/iceberg/flink/TestFlinkTableSource.java
##########

@@ -685,4 +782,60 @@ public void testSqlParseError() {
     AssertHelpers.assertThrows("The NaN is not supported by flink now. ",
         NumberFormatException.class, () -> sql(sqlParseErrorLTE));
   }
+
+  /**
+   * The SQL can be executed in both streaming and batch mode. In order to get the parallelism, we convert the Flink
+   * Table to a Flink DataStream, so we only use streaming mode here.
+   *
+   * @throws TableNotExistException table not exist exception
+   */
+  @Test
+  public void testInferedParallelism() throws TableNotExistException {
+    Assume.assumeTrue("The execute mode should be streaming mode", isStreamingJob);

Review comment:
   I found that in this test method I use the Flink streaming mode, but it still enters the batch mode ([here](https://github.com/apache/iceberg/blob/master/flink/src/main/java/org/apache/iceberg/flink/source/FlinkSource.java#L200)). Checking the code, I found that the `FlinkSource.Builder#build` method decides between streaming and batch mode based on the `ScanContext` configuration instead of the Flink configuration. Will this confuse users?
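   For readers following along, here is a minimal, self-contained sketch of the behavior being pointed at. The names below (`ScanModeSketch`, `ScanOptions`, `SourceBuilder`) are hypothetical stand-ins, not the real Iceberg or Flink API; the only point illustrated is that when a builder decides the scan mode from its own options, the job's runtime mode (streaming vs. batch) has no effect on that decision.

   ```java
   // Hypothetical sketch only -- stand-in names, not the actual Iceberg/Flink classes.
   public class ScanModeSketch {

     /** Stand-in for the streaming flag that something like ScanContext would carry. */
     static final class ScanOptions {
       private final boolean streaming;

       ScanOptions(boolean streaming) {
         this.streaming = streaming;
       }

       boolean isStreaming() {
         return streaming;
       }
     }

     /** Stand-in for a builder whose build() mirrors the behavior described above. */
     static final class SourceBuilder {
       private final ScanOptions options;

       SourceBuilder(ScanOptions options) {
         this.options = options;
       }

       String build() {
         // Only the scan options are consulted; the job's execution mode is not.
         if (!options.isStreaming()) {
           return "bounded (batch) source";
         }
         return "unbounded (streaming) source";
       }
     }

     public static void main(String[] args) {
       // Even if the surrounding job runs in streaming mode, the default options
       // still send the builder down the batch path.
       System.out.println(new SourceBuilder(new ScanOptions(false)).build());
     }
   }
   ```

   If that matches the real behavior, a test (or user) that wants the streaming code path presumably has to enable the streaming flag on the scan options explicitly rather than relying on the streaming execution environment alone.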