junmuz commented on code in PR #4137:
URL: https://github.com/apache/flink-cdc/pull/4137#discussion_r2603776922


##########
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-mysql/src/main/java/org/apache/flink/cdc/connectors/mysql/source/MySqlDataSourceOptions.java:
##########
@@ -330,4 +330,12 @@ public class MySqlDataSourceOptions {
                     .defaultValue(false)
                     .withDescription(
                             "Whether to skip backfill in snapshot reading 
phase. If backfill is skipped, changes on captured tables during snapshot phase 
will be consumed later in change log reading phase instead of being merged into 
the snapshot.WARNING: Skipping backfill might lead to data inconsistency 
because some change log events happened within the snapshot phase might be 
replayed (only at-least-once semantic is promised). For example updating an 
already updated value in snapshot, or deleting an already deleted entry in 
snapshot. These replayed change log events should be handled specially.");
+
+    @Experimental
+    public static final ConfigOption<Boolean> SCAN_READ_CHANGELOG_AS_APPEND_ONLY_ENABLED =
+            ConfigOptions.key("scan.read-changelog-as-append-only.enabled")

Review Comment:
   @lvyanquan On second thought, one should have a dedicated Flink instance for creating append-only tables. Having a common configuration `scan.read-changelog-as-append-only.enabled` seems better, as it brings consistency with the [mysql-cdc-connector](https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/connectors/flink-sources/mysql-cdc/).
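
   For reference, a minimal sketch of how the option in the diff above could be declared end to end. The key name and `@Experimental` annotation come from the diff; the `booleanType()`/`defaultValue(false)` builder calls mirror the existing option shown in the surrounding context, and the description text is only an illustrative assumption, not the PR's final wording:

       // Sketch only: key name taken from the diff; default value and
       // description text are assumptions for illustration.
       @Experimental
       public static final ConfigOption<Boolean> SCAN_READ_CHANGELOG_AS_APPEND_ONLY_ENABLED =
               ConfigOptions.key("scan.read-changelog-as-append-only.enabled")
                       .booleanType()
                       .defaultValue(false)
                       .withDescription(
                               "When enabled, changelog records (UPDATE_BEFORE, UPDATE_AFTER and DELETE) "
                                       + "are emitted as INSERT records, so the source can feed append-only tables.");

   Keeping the same key as the mysql-cdc-connector means users switching between the SQL connector and the pipeline connector would not need to learn a second option name for the same behavior.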


