leonardBang commented on code in PR #3856:
URL: https://github.com/apache/flink-cdc/pull/3856#discussion_r1963519196
##########
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-mysql/src/main/java/org/apache/flink/cdc/connectors/mysql/source/MySqlDataSourceOptions.java:
##########
@@ -313,4 +313,12 @@ public class MySqlDataSourceOptions {
.defaultValue(true)
.withDescription(
"Whether to use legacy json format. The default value is true, which means there is no whitespace before value and after comma in json format.");
+
+    @Experimental
+    public static final ConfigOption<Boolean> SCAN_INCREMENTAL_SNAPSHOT_ASSIGN_ENDING_CHUNK_FIRST =
+            ConfigOptions.key("scan.incremental.snapshot.assign-ending-chunk-first.enabled")
Review Comment:
How about
```suggestion
            ConfigOptions.key("scan.incremental.snapshot.boundary-chunk-first.enabled")
```
? IIUC, this option is used to control whether the boundary chunks should be
assigned first or not?
##########
flink-cdc-connect/flink-cdc-source-connectors/flink-cdc-base/src/main/java/org/apache/flink/cdc/connectors/base/source/assigner/splitter/JdbcSourceChunkSplitter.java:
##########
@@ -470,7 +470,13 @@ private List<ChunkRange> splitEvenlySizedChunks(
}
}
// add the ending split
-        splits.add(ChunkRange.of(chunkStart, null));
+        // assign ending split first, both the largest and smallest unbounded chunks are completed
+        // in the first two splits
+        if (sourceConfig.isAssignEndingChunkFirst()) {
+            splits.add(0, ChunkRange.of(chunkStart, null));
Review Comment:
Could you consider `System.arraycopy` for performance? Currently we need to
move n-1 elements.
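
The shifting cost being discussed can be sketched as follows. This is a
standalone illustration, not the connector's code: `ChunkRange` is stood in
for by plain strings, and the helper names are invented for the sketch.
`ArrayList.add(0, e)` shifts all n existing elements one slot to the right
(internally via one `System.arraycopy`), while the second variant allocates
the final array once and copies the tail in a single explicit `arraycopy`.
Both produce the same order.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class EndingChunkFirstSketch {

    // Variant 1: mirrors the diff. add(0, e) shifts the n existing
    // elements right before writing the new element into slot 0.
    static List<String> addFirstViaListAdd(List<String> chunks, String endingChunk) {
        List<String> splits = new ArrayList<>(chunks);
        splits.add(0, endingChunk);
        return splits;
    }

    // Variant 2: builds the final array once and places the tail with a
    // single explicit System.arraycopy, as suggested in the review.
    static List<String> addFirstViaArrayCopy(List<String> chunks, String endingChunk) {
        String[] source = chunks.toArray(new String[0]);
        String[] result = new String[source.length + 1];
        result[0] = endingChunk;
        System.arraycopy(source, 0, result, 1, source.length);
        return Arrays.asList(result);
    }

    public static void main(String[] args) {
        List<String> chunks = Arrays.asList("chunk-1", "chunk-2", "chunk-3");
        List<String> a = addFirstViaListAdd(chunks, "ending-chunk");
        List<String> b = addFirstViaArrayCopy(chunks, "ending-chunk");
        if (!a.equals(b)) {
            throw new AssertionError("variants diverged: " + a + " vs " + b);
        }
        System.out.println(a); // [ending-chunk, chunk-1, chunk-2, chunk-3]
    }
}
```

Note that for a single front insert the two variants do comparable work;
the difference matters mainly if front inserts were repeated in a loop.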
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]