liuxiao2shf commented on code in PR #3619:
URL: https://github.com/apache/flink-cdc/pull/3619#discussion_r1818709622


##########
flink-cdc-connect/flink-cdc-source-connectors/flink-cdc-base/src/main/java/org/apache/flink/cdc/connectors/base/source/assigner/SnapshotSplitAssigner.java:
##########
@@ -397,6 +495,30 @@ && allSnapshotSplitsFinished()) {
             }
             LOG.info("Snapshot split assigner is turn into finished status.");
         }
+
+        if (splitFinishedCheckpointIds != null && !splitFinishedCheckpointIds.isEmpty()) {
+            Iterator<Map.Entry<String, Long>> iterator =
+                    splitFinishedCheckpointIds.entrySet().iterator();
+            while (iterator.hasNext()) {
+                Map.Entry<String, Long> splitFinishedCheckpointId = iterator.next();
+                String splitId = splitFinishedCheckpointId.getKey();
+                Long splitCheckpointId = splitFinishedCheckpointId.getValue();
+                if (splitCheckpointId != UNDEFINED_CHECKPOINT_ID
+                        && checkpointId >= splitCheckpointId) {
+                    // record table-level splits metrics
+                    TableId tableId = SnapshotSplit.parseTableId(splitId);
+                    enumeratorMetrics.getTableMetrics(tableId).addFinishedSplit(splitId);
+                    finishedSplits.put(
+                            tableId,
+                            enumeratorMetrics.getTableMetrics(tableId).getFinishedSplitIds());
+                    iterator.remove();
+                }
+            }
+            LOG.info(

Review Comment:
   I have already addressed this.
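   For context, a minimal self-contained sketch of the pattern the added block follows: per-split bookkeeping is deferred until a completed checkpoint covers the checkpoint ID recorded for that split, and the entry is then dropped through the iterator so the map is not modified mid-iteration. The class, field, and method names below are hypothetical stand-ins for illustration, not the connector's actual API.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical illustration of checkpoint-gated cleanup.
public class CheckpointGatedTracker {
    private static final long UNDEFINED_CHECKPOINT_ID = -1L;

    // splitId -> checkpoint ID at which the split was reported finished
    private final Map<String, Long> splitFinishedCheckpointIds = new HashMap<>();

    public void recordFinished(String splitId, long checkpointId) {
        splitFinishedCheckpointIds.put(splitId, checkpointId);
    }

    // Called when a checkpoint completes: every split whose recorded checkpoint
    // is now covered gets its bookkeeping applied and is removed via the
    // iterator, avoiding ConcurrentModificationException.
    public void onCheckpointCompleted(long completedCheckpointId) {
        Iterator<Map.Entry<String, Long>> it =
                splitFinishedCheckpointIds.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> entry = it.next();
            long splitCheckpointId = entry.getValue();
            if (splitCheckpointId != UNDEFINED_CHECKPOINT_ID
                    && completedCheckpointId >= splitCheckpointId) {
                markSplitFinished(entry.getKey()); // e.g. table-level metrics update
                it.remove();
            }
        }
    }

    private void markSplitFinished(String splitId) {
        System.out.println("Split acknowledged as finished: " + splitId);
    }
}
```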



##########
flink-cdc-connect/flink-cdc-source-connectors/flink-cdc-base/src/main/java/org/apache/flink/cdc/connectors/base/source/assigner/HybridSplitAssigner.java:
##########
@@ -137,6 +161,7 @@ public Optional<SourceSplitBase> getNext() {
                 // assigning the stream split. Otherwise, records emitted from stream split
                 // might be out-of-order in terms of same primary key with snapshot splits.
                 isStreamSplitAssigned = true;
+                enumeratorMetrics.enterStreamReading();

Review Comment:
   I have already addressed this.
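   A small sketch of what this one-line change conceptually does: flip a phase flag once the stream split is handed out, so the enumerator metrics can report that the source has left the snapshot phase. Only enterStreamReading() comes from the diff; the class and gauge names below are hypothetical.

```java
// Hypothetical phase tracker mirroring the enterStreamReading() call in the diff.
public class EnumeratorPhaseTracker {
    private volatile boolean streamReading = false;

    // Invoked once the snapshot splits are done and the stream split is assigned.
    public void enterStreamReading() {
        streamReading = true;
    }

    // Exposed as a 0/1 gauge so operators can see which phase the source is in.
    public int isStreamReadingGauge() {
        return streamReading ? 1 : 0;
    }
}
```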


