Copilot commented on code in PR #4269:
URL: https://github.com/apache/flink-cdc/pull/4269#discussion_r2791616192
##########
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-iceberg/src/main/java/org/apache/flink/cdc/connectors/iceberg/sink/v2/IcebergCommitter.java:
##########
@@ -117,15 +148,47 @@ private void commit(List<WriteResultWrapper> writeResultWrappers) {
if (deleteFiles.isEmpty()) {
AppendFiles append = table.newAppend();
dataFiles.forEach(append::appendFile);
- append.commit();
+            commitOperation(append, newFlinkJobId, operatorId, checkpointId);
} else {
RowDelta delta = table.newRowDelta();
dataFiles.forEach(delta::addRows);
deleteFiles.forEach(delta::addDeletes);
- delta.commit();
+            commitOperation(delta, newFlinkJobId, operatorId, checkpointId);
+ }
+ }
+ }
+ }
+
+ private static long getMaxCommittedCheckpointId(
+            Iterable<Snapshot> ancestors, String flinkJobId, String operatorId) {
+ long lastCommittedCheckpointId = INITIAL_CHECKPOINT_ID;
+
+ for (Snapshot ancestor : ancestors) {
+ Map<String, String> summary = ancestor.summary();
+ String snapshotFlinkJobId = summary.get(SinkUtil.FLINK_JOB_ID);
+ String snapshotOperatorId = summary.get(SinkUtil.OPERATOR_ID);
+ if (flinkJobId.equals(snapshotFlinkJobId)
+                    && (snapshotOperatorId == null || snapshotOperatorId.equals(operatorId))) {
+                String value = summary.get(SinkUtil.MAX_COMMITTED_CHECKPOINT_ID);
+ if (value != null) {
+ lastCommittedCheckpointId = Long.parseLong(value);
+ break;
+                }
+            }
+        }
+
+ return lastCommittedCheckpointId;
+ }
+
+ private static void commitOperation(
+ SnapshotUpdate<?> operation,
+ String newFlinkJobId,
+ String operatorId,
+ long checkpointId) {
+        operation.set(SinkUtil.MAX_COMMITTED_CHECKPOINT_ID, Long.toString(checkpointId));
+ operation.set(SinkUtil.FLINK_JOB_ID, newFlinkJobId);
+ operation.set(SinkUtil.OPERATOR_ID, operatorId);
+ operation.commit();
Review Comment:
The PR description mentions storing the checkpoint id under the key
`flink-cdc-checkpoint-id`, but the implementation writes/reads Iceberg’s
`SinkUtil.MAX_COMMITTED_CHECKPOINT_ID` (and `FLINK_JOB_ID`/`OPERATOR_ID`) in
the snapshot summary. Please align the PR description (or the code) so users
know which snapshot summary keys to look for and so the change is accurately
documented.
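For anyone verifying this once the keys are aligned, here is a small illustrative sketch (not part of the PR; the class name is hypothetical) that dumps the summary of the table’s current snapshot so the keys actually written by `commitOperation(...)` can be inspected:
```java
import org.apache.iceberg.Snapshot;
import org.apache.iceberg.Table;

/** Illustrative helper: prints the snapshot summary entries of the current snapshot. */
public class SnapshotSummaryDump {

    public static void dump(Table table) {
        Snapshot snapshot = table.currentSnapshot();
        if (snapshot == null) {
            System.out.println("table has no snapshots yet");
            return;
        }
        // Prints every summary entry; with this change the Flink job id, operator id and
        // max-committed-checkpoint-id keys set in commitOperation(...) should appear here.
        snapshot.summary().forEach((key, value) -> System.out.println(key + " = " + value));
    }
}
```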
##########
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-iceberg/src/main/java/org/apache/flink/cdc/connectors/iceberg/sink/v2/IcebergCommitter.java:
##########
@@ -117,15 +148,47 @@ private void commit(List<WriteResultWrapper> writeResultWrappers) {
if (deleteFiles.isEmpty()) {
AppendFiles append = table.newAppend();
dataFiles.forEach(append::appendFile);
- append.commit();
+            commitOperation(append, newFlinkJobId, operatorId, checkpointId);
} else {
RowDelta delta = table.newRowDelta();
dataFiles.forEach(delta::addRows);
deleteFiles.forEach(delta::addDeletes);
- delta.commit();
+            commitOperation(delta, newFlinkJobId, operatorId, checkpointId);
+ }
+ }
+ }
+ }
+
+ private static long getMaxCommittedCheckpointId(
+            Iterable<Snapshot> ancestors, String flinkJobId, String operatorId) {
+ long lastCommittedCheckpointId = INITIAL_CHECKPOINT_ID;
Review Comment:
`getMaxCommittedCheckpointId` initializes `lastCommittedCheckpointId` to
`INITIAL_CHECKPOINT_ID`. If the table already has snapshots but none with
matching `SinkUtil.FLINK_JOB_ID`/`OPERATOR_ID` (e.g., first run of this sink on
an existing table), the method will return `INITIAL_CHECKPOINT_ID` and the
idempotency check can incorrectly skip committing checkpoint 1, causing data
loss. Initialize to `INITIAL_CHECKPOINT_ID - 1` (or another sentinel smaller
than any real checkpoint) so that “no prior commit found” never matches a real
checkpoint id.
```suggestion
long lastCommittedCheckpointId = INITIAL_CHECKPOINT_ID - 1;
```
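To make the failure mode concrete, a minimal sketch, assuming `INITIAL_CHECKPOINT_ID` equals Flink’s first checkpoint id (1) and that the committer only commits results whose checkpoint id is greater than the recovered value (the names and the guard are illustrative, not the PR’s exact code):
```java
/** Illustrates why the "no prior commit found" sentinel must be below any real checkpoint id. */
public class CheckpointIdempotencyExample {

    // Assumption: the sink's first real checkpoint id, matching Flink's convention.
    private static final long INITIAL_CHECKPOINT_ID = 1L;

    public static void main(String[] args) {
        // Existing table, but no snapshot written by this job/operator yet:
        long maxCommitted = INITIAL_CHECKPOINT_ID;           // current initialization
        long maxCommittedFixed = INITIAL_CHECKPOINT_ID - 1;   // suggested sentinel

        long incomingCheckpointId = 1L; // first checkpoint of this job

        // A typical "skip already-committed results" guard:
        System.out.println("current:   commit? " + (incomingCheckpointId > maxCommitted));      // false -> checkpoint 1 dropped
        System.out.println("suggested: commit? " + (incomingCheckpointId > maxCommittedFixed)); // true  -> checkpoint 1 committed
    }
}
```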
##########
flink-cdc-connect/flink-cdc-pipeline-connectors/flink-cdc-pipeline-connector-iceberg/src/main/java/org/apache/flink/cdc/connectors/iceberg/sink/v2/IcebergWriterStateSerializer.java:
##########
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.cdc.connectors.iceberg.sink.v2;
+
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+import org.apache.flink.core.memory.DataInputDeserializer;
+import org.apache.flink.core.memory.DataOutputSerializer;
+
+import java.io.IOException;
+
+/** A {@link IcebergWriterStateSerializer} for {@link IcebergWriterState}. */
+public class IcebergWriterStateSerializer implements SimpleVersionedSerializer<IcebergWriterState> {
+
+ private static final int VERSION = 0;
+
+ private static final ThreadLocal<DataOutputSerializer> SERIALIZER_CACHE =
+ ThreadLocal.withInitial(() -> new DataOutputSerializer(64));
+
+ @Override
+ public int getVersion() {
+ return VERSION;
+ }
+
+ @Override
+    public byte[] serialize(IcebergWriterState icebergWriterState) throws IOException {
+ final DataOutputSerializer out = SERIALIZER_CACHE.get();
+ out.writeUTF(icebergWriterState.getJobId());
+ out.writeUTF(icebergWriterState.getOperatorId());
+ final byte[] result = out.getCopyOfBuffer();
+ out.clear();
+ return result;
+ }
+
+ @Override
+    public IcebergWriterState deserialize(int version, byte[] serialized) throws IOException {
Review Comment:
`deserialize(int version, ...)` ignores the provided `version` and will
attempt to read the payload even if the version is unexpected/corrupt. For
`SimpleVersionedSerializer`, it’s safer to explicitly handle supported versions
(e.g., `if (version == VERSION) ... else throw new IOException("Unknown
version: " + version)`) to avoid silent state corruption / confusing failures
during upgrades.
```suggestion
    public IcebergWriterState deserialize(int version, byte[] serialized) throws IOException {
        if (version != VERSION) {
            throw new IOException("Unknown version: " + version);
        }
```
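For completeness, a hedged sketch of what the version-checked method could look like end to end in this class; the `IcebergWriterState(jobId, operatorId)` constructor is an assumption based on the fields written in `serialize(...)` above:
```java
    // Sketch for IcebergWriterStateSerializer; assumes the existing imports of
    // DataInputDeserializer and IOException in this file.
    @Override
    public IcebergWriterState deserialize(int version, byte[] serialized) throws IOException {
        if (version != VERSION) {
            throw new IOException("Unknown version: " + version);
        }
        DataInputDeserializer in = new DataInputDeserializer(serialized);
        String jobId = in.readUTF();
        String operatorId = in.readUTF();
        // Assumed constructor mirroring getJobId()/getOperatorId().
        return new IcebergWriterState(jobId, operatorId);
    }
```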