wwj6591812 commented on code in PR #5715:
URL: https://github.com/apache/paimon/pull/5715#discussion_r2141672321
##########
paimon-flink/paimon-flink-common/src/main/java/org/apache/paimon/flink/utils/TableScanUtils.java:
##########
@@ -45,14 +50,22 @@ public static void streamingReadingValidate(Table table) {
                     }
                 };
         if (table.primaryKeys().size() > 0 && mergeEngineDesc.containsKey(mergeEngine)) {
-            if (options.changelogProducer() == CoreOptions.ChangelogProducer.NONE) {
+            if (coreOptions.changelogProducer() == CoreOptions.ChangelogProducer.NONE) {
                 throw new RuntimeException(
                         mergeEngineDesc.get(mergeEngine)
                                 + " streaming reading is not supported. You can use "
                                 + "'lookup' or 'full-compaction' changelog producer to support streaming reading. "
                                 + "('input' changelog producer is also supported, but only returns input records.)");
             }
         }
+
+        if (options.containsKey(SCAN_DEDICATED_SPLIT_GENERATION.key())
+                && parseBoolean(options.get(SCAN_DEDICATED_SPLIT_GENERATION.key()))) {

Review Comment:
   Thanks. Change `Map<String, String> options = table.options();` to `Options options = Options.fromMap(table.options());`; then the `parseBoolean` call can be cut.

##########
paimon-flink/paimon-flink-common/src/main/java/org/apache/paimon/flink/source/operator/ReadOperator.java:
##########
@@ -130,6 +137,11 @@ public void processElement(StreamRecord<Split> record) throws Exception {
             reuseRecord.replace(nestedProjectedRowData);
         }
         output.collect(reuseRecord);
+
+        if (limit != null && numRecordsIn.getCount() == limit) {

Review Comment:
   done, thx.
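
To make the first suggestion concrete, here is a minimal sketch of the shape it implies: wrap the raw string map in a typed `Options` view via `Options.fromMap(table.options())` once, then read the flag through a `ConfigOption<Boolean>` instead of probing the map and calling `parseBoolean`. The `ConfigOption` declaration, its key string, and the surrounding method are hypothetical stand-ins for illustration only; the real `SCAN_DEDICATED_SPLIT_GENERATION` option referenced in the diff is defined elsewhere in the connector, and this is not the PR's actual code.

```java
import org.apache.paimon.options.ConfigOption;
import org.apache.paimon.options.ConfigOptions;
import org.apache.paimon.options.Options;
import org.apache.paimon.table.Table;

public class StreamingReadValidationSketch {

    // Hypothetical stand-in for the connector option referenced in the diff;
    // the key string and default are guesses, the real ConfigOption lives in
    // the Flink connector's options class.
    private static final ConfigOption<Boolean> SCAN_DEDICATED_SPLIT_GENERATION =
            ConfigOptions.key("scan.dedicated-split-generation")
                    .booleanType()
                    .defaultValue(false);

    public static void validateDedicatedSplitGeneration(Table table) {
        // Reviewer's suggestion: build a typed Options view from the raw string map once...
        Options options = Options.fromMap(table.options());

        // ...then read the boolean directly. No containsKey/parseBoolean is needed,
        // because get(ConfigOption) applies the option's declared type and default.
        if (options.get(SCAN_DEDICATED_SPLIT_GENERATION)) {
            throw new UnsupportedOperationException(
                    SCAN_DEDICATED_SPLIT_GENERATION.key()
                            + " is not supported with streaming reading (sketch message only).");
        }
    }
}
```

The typed read also keeps the option's type and default value in one place, which appears to be the point of the reviewer's suggestion.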
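
The second hunk adds an early-exit check after each collected record. The snippet below is a self-contained illustration of that limit short-circuit pattern in plain Java, not the actual `ReadOperator` implementation: the names `limit` and `numRecordsIn` echo the diff context, while the iterator/consumer plumbing is invented so the example runs without Flink.

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

// Minimal illustration of the limit short-circuit the diff hints at: stop
// emitting once the record count reaches a pushed-down limit, instead of
// reading the remaining input to the end. Everything here is simplified.
public class LimitedEmitter<T> {

    private final Long limit;   // null means no limit was pushed down
    private long numRecordsIn;  // stands in for the operator's record counter

    public LimitedEmitter(Long limit) {
        this.limit = limit;
    }

    /** Emits records until the input is drained or the limit is reached. */
    public void drain(Iterator<T> records, Consumer<T> output) {
        while (records.hasNext()) {
            output.accept(records.next());
            numRecordsIn++;
            // Once the limit is hit there is no point in consuming further input.
            if (limit != null && numRecordsIn == limit) {
                return;
            }
        }
    }

    public static void main(String[] args) {
        new LimitedEmitter<Integer>(3L)
                .drain(List.of(1, 2, 3, 4, 5).iterator(), System.out::println); // prints 1, 2, 3
    }
}
```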