C0urante commented on code in PR #14005:
URL: https://github.com/apache/kafka/pull/14005#discussion_r1265649203


##########
connect/mirror/src/test/java/org/apache/kafka/connect/mirror/integration/MirrorConnectorsIntegrationExactlyOnceTest.java:
##########
@@ -46,4 +51,49 @@ public void startClusters() throws Exception {
         super.startClusters();
     }
 
+    @Override
+    @Test
+    public void testReplication() throws Exception {
+        super.testReplication();
+
+        // Augment the base replication test case with some extra testing of the offset management
+        // API introduced in KIP-875
+        // We do this only when exactly-once support is enabled in order to avoid having to worry about
+        // zombie tasks producing duplicate records and/or writing stale offsets to the offsets topic
+
+        String backupTopic1 = remoteTopicName("test-topic-1", PRIMARY_CLUSTER_ALIAS);
+        String backupTopic2 = remoteTopicName("test-topic-2", PRIMARY_CLUSTER_ALIAS);
+
+        // Explicitly move back to offset 0
+        // Note that the connector treats the offset as the last-consumed offset,
+        // so it will start reading the topic partition from offset 1 when it resumes
+        alterMirrorMakerSourceConnectorOffsets(backup, n -> 0L, "test-topic-1");
+        // Reset the offsets for test-topic-2
+        resetSomeMirrorMakerSourceConnectorOffsets(backup, "test-topic-2");
+        resumeMirrorMakerConnectors(backup, MirrorSourceConnector.class);
+
+        int expectedRecordsTopic1 = NUM_RECORDS_PRODUCED + ((NUM_RECORDS_PER_PARTITION - 1) * NUM_PARTITIONS);
+        assertEquals(expectedRecordsTopic1, backup.kafka().consume(expectedRecordsTopic1, RECORD_TRANSFER_DURATION_MS, backupTopic1).count(),
+                "Records were not re-replicated to backup cluster after altering offsets.");
+        int expectedRecordsTopic2 = NUM_RECORDS_PER_PARTITION * 2;
+        assertEquals(expectedRecordsTopic2, backup.kafka().consume(expectedRecordsTopic2, RECORD_TRANSFER_DURATION_MS, backupTopic2).count(),
+                "New topic was not re-replicated to backup cluster after altering offsets.");
+
+        @SuppressWarnings({"unchecked", "rawtypes"})
+        Class<? extends Connector>[] connectorsToReset = CONNECTOR_LIST.toArray(new Class[0]);
+        // Resetting the offsets for the heartbeat and checkpoint connectors doesn't have any effect
+        // on their behavior, but users may want to wipe offsets from them to prevent the offsets topic
+        // from growing infinitely. So, we include them in the list of connectors to reset as a sanity check

Review Comment:
   > Since the set of source partitions is limited here, shouldn't log compaction be good enough?
   
   It's hard to define "limited" in this sense, but if you've configured MM2 to replicate every non-internal topic from a large cluster, then you could easily end up with hundreds of thousands of unique source offsets. And then, if some topics are deleted from the source cluster and other topics are created, we could go beyond even that. I think it'd be nice to allow people to do some cleanup in cases like that.
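   
   To make the scale concrete, here's a rough sketch of how unique source offsets pile up under a replicate-everything config (the partition field names and the topic/partition counts are illustrative assumptions, not necessarily MM2's exact internal format):
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   // Rough illustration of how quickly unique source offsets accumulate when MM2
   // replicates every topic on a large source cluster.
   public class OffsetGrowthSketch {
       public static void main(String[] args) {
           int topics = 10_000;
           int partitionsPerTopic = 50;
           Map<Map<String, Object>, Long> offsets = new HashMap<>();
           for (int t = 0; t < topics; t++) {
               for (int p = 0; p < partitionsPerTopic; p++) {
                   Map<String, Object> sourcePartition = new HashMap<>();
                   sourcePartition.put("cluster", "primary");
                   sourcePartition.put("topic", "topic-" + t);
                   sourcePartition.put("partition", p);
                   offsets.put(sourcePartition, 0L);
               }
           }
           // 500,000 unique keys; compaction keeps the latest record per key, but it never
           // removes keys for topics that have since been deleted from the source cluster.
           System.out.println("Unique source offsets: " + offsets.size());
       }
   }
   ```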
   
   > If these offsets are never read back and actually used, why would users want to "undo" partial or complete resets?
   
   I tried to touch on this with my earlier point:
   
   > This is especially relevant since, although the offsets topic isn't public API, its contents are now public API (via the `GET /connectors/{name}/offsets` endpoint), and users may want to track the offsets for these connectors for monitoring purposes
   
   TL;DR: The contents of the offsets topic may become an additional point of observability for these connectors, letting people discover, e.g., the total set of replicated topic partitions or consumer groups.
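   
   As a sketch of what that kind of monitoring could look like (the worker URL and connector name below are placeholders, and I'm printing the raw response rather than assuming a particular JSON shape):
   
   ```java
   import java.net.URI;
   import java.net.http.HttpClient;
   import java.net.http.HttpRequest;
   import java.net.http.HttpResponse;
   
   // Probe the offsets endpoint introduced in KIP-875 and treat its contents as a
   // monitoring signal; the entries reflect, e.g., the set of replicated topic partitions.
   public class MirrorOffsetsProbe {
       public static void main(String[] args) throws Exception {
           HttpClient client = HttpClient.newHttpClient();
           HttpRequest request = HttpRequest.newBuilder()
                   .uri(URI.create("http://localhost:8083/connectors/MirrorSourceConnector/offsets"))
                   .GET()
                   .build();
           HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
           System.out.println(response.body());
       }
   }
   ```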
   
   > Overall, I think I'm more in favor of only allowing standard use cases with the offsets management REST APIs to reduce the potential number of footguns for users.
   
   I definitely agree with the general mentality here of reducing the API surface in order to minimize the potential for users to hurt themselves. But is there any actual risk in the specific case of publishing tombstones for arbitrary topic partitions?
   
   In general I'd still like it if we could provide a fairly flexible API for these connectors, but if that's too risky, one alternative could be to only permit tombstones with validated source partitions. This would still allow for cleanup (with no distinction between total and partial resets), but wouldn't support an "undo" for total/partial resets, or removal of garbage source partitions.
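   
   For the sake of discussion, that restricted alternative might look roughly like the check below (the partition field names and the shape of the validation are assumptions for illustration, not what this PR actually implements):
   
   ```java
   import java.util.Map;
   
   // Sketch of a "tombstones only, with validated source partitions" policy for
   // externally-submitted offset alterations.
   public class TombstoneOnlyOffsetPolicy {
   
       public static void validateAlterRequest(Map<Map<String, ?>, Map<String, ?>> offsets) {
           for (Map.Entry<Map<String, ?>, Map<String, ?>> entry : offsets.entrySet()) {
               Map<String, ?> sourcePartition = entry.getKey();
               Map<String, ?> offset = entry.getValue();
   
               // Only tombstones are accepted; writing back a concrete offset (an "undo")
               // is rejected under this policy.
               if (offset != null) {
                   throw new IllegalArgumentException(
                           "Only tombstone (null) offsets may be written for this connector");
               }
   
               // The source partition must be well-formed; malformed ("garbage") partitions
               // can't be removed, since they fail this check.
               if (sourcePartition == null
                       || !(sourcePartition.get("cluster") instanceof String)
                       || !(sourcePartition.get("topic") instanceof String)
                       || !(sourcePartition.get("partition") instanceof Integer)) {
                   throw new IllegalArgumentException(
                           "Source partition must contain 'cluster', 'topic', and 'partition' fields: " + sourcePartition);
               }
           }
       }
   }
   ```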


