lvyanquan commented on code in PR #3656:
URL: https://github.com/apache/flink-cdc/pull/3656#discussion_r1808363691
##########
flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/main/java/org/apache/flink/cdc/connectors/mysql/source/split/MySqlBinlogSplit.java:
##########
@@ -108,6 +108,16 @@ public boolean isCompletedSplit() {
return totalFinishedSplitSize == finishedSnapshotSplitInfos.size();
}
+ public String getTables() {
+ String tables;
+ if (tableSchemas != null) {
+ tables = tableSchemas.keySet().toString();
Review Comment:
Iterating over the key set on every call will have a performance impact. Can we
assume that `tableSchemas` will not change, so that the `tables` variable only
needs to be built once?
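A minimal sketch of the caching idea from this comment, assuming `tableSchemas` is effectively immutable once the split is constructed (the class and field names below are simplified stand-ins, not the actual `MySqlBinlogSplit` API):

```java
import java.util.Map;

// Hypothetical sketch: build the `tables` string lazily, on first access,
// and reuse it afterwards instead of iterating the key set every time.
public class BinlogSplitTablesCache {
    // Simplified stand-in for the split's tableSchemas map (table id -> schema).
    private final Map<String, String> tableSchemas;
    // Cached result; safe to rebuild on a restored instance, hence transient.
    private transient String tables;

    public BinlogSplitTablesCache(Map<String, String> tableSchemas) {
        this.tableSchemas = tableSchemas;
    }

    public String getTables() {
        if (tables == null) {
            // Built at most once per instance, assuming tableSchemas never changes.
            tables = tableSchemas == null ? "[]" : tableSchemas.keySet().toString();
        }
        return tables;
    }
}
```

Repeated calls then return the cached string instead of re-stringifying the key set on every checkpoint log line.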
##########
flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/main/java/org/apache/flink/cdc/connectors/mysql/source/reader/MySqlSourceReader.java:
##########
@@ -530,7 +530,11 @@ private void logCurrentBinlogOffsets(List<MySqlSplit> splits, long checkpointId)
return;
}
BinlogOffset offset = split.asBinlogSplit().getStartingOffset();
- LOG.info("Binlog offset on checkpoint {}: {}", checkpointId, offset);
+ LOG.info(
+ "Binlog offset for tables {} on checkpoint {}: {}",
+ split.asBinlogSplit().getTables(),
Review Comment:
If there is a large number of tables, the resulting log lines will be quite
long. Would you consider truncating them?
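One possible shape for the truncation suggested here, as a hedged sketch (the helper name and the cap of `maxTables` are illustrative, not part of the PR):

```java
import java.util.List;

// Hypothetical helper: cap the logged table list at a fixed number of entries
// and summarize the remainder, so log lines stay bounded in length.
public class TableListTruncator {
    static String truncate(List<String> tables, int maxTables) {
        if (tables.size() <= maxTables) {
            // Small enough: log the full list.
            return tables.toString();
        }
        // Keep the first maxTables entries, then report how many were omitted.
        return tables.subList(0, maxTables)
                + " ... and " + (tables.size() - maxTables) + " more";
    }
}
```

The log call could then pass `TableListTruncator.truncate(...)` instead of the raw table list, keeping the checkpoint message readable even for sources capturing hundreds of tables.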
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]