pnowojski commented on a change in pull request #7010: [FLINK-10353][kafka] Support change of transactional semantics in Kaf…
URL: https://github.com/apache/flink/pull/7010#discussion_r231059615
##########
File path: flink-connectors/flink-connector-kafka-0.11/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011ITCase.java
##########
@@ -566,6 +568,76 @@ public void testRunOutOfProducersInThePool() throws Exception {
 		deleteTestTopic(topic);
 	}
+	@Test
+	public void testMigrateFromAtLeastOnceToExactlyOnce() throws Exception {
+		String topic = "testMigrateFromAtLeastOnceToExactlyOnce";
+
+		OperatorSubtaskState producerSnapshot;
+		try (OneInputStreamOperatorTestHarness<Integer, Object> testHarness = createTestHarness(topic, AT_LEAST_ONCE)) {
+			testHarness.setup();
+			testHarness.open();
+			testHarness.processElement(42, 0);
+			testHarness.snapshot(0, 1);
+			testHarness.processElement(43, 2);
+			testHarness.notifyOfCompletedCheckpoint(0);
+			producerSnapshot = testHarness.snapshot(1, 3);
+			testHarness.processElement(44, 4);
+		}
+
+		try (OneInputStreamOperatorTestHarness<Integer, Object> testHarness = createTestHarness(topic, EXACTLY_ONCE)) {
+			testHarness.setup();
+			// restore from snapshot, all records until here should be persisted
+			testHarness.initializeState(producerSnapshot);
+			testHarness.open();
+
+			// write and commit more records
+			testHarness.processElement(44, 7);
+			testHarness.snapshot(2, 8);
+			testHarness.processElement(45, 9);
+		}
+
+		// now we should have:
+		// - records 42, 43, 44 in directly flushed writes from at-least-once
+		// - aborted transactions with records 44 and 45
+		assertExactlyOnceForTopic(createProperties(), topic, 0, Arrays.asList(42, 43, 44), 30_000L);
+		deleteTestTopic(topic);
+	}
+
+	@Test
+	public void testMigrateFromExactlyOnceToAtLeastOnce() throws Exception {
+		String topic = "testMigrateFromExactlyOnceToAtLeastOnce";
+
+		OperatorSubtaskState producerSnapshot;
+		try (OneInputStreamOperatorTestHarness<Integer, Object> testHarness = createTestHarness(topic, EXACTLY_ONCE)) {
+			testHarness.setup();
+			testHarness.open();
+			testHarness.processElement(42, 0);
+			testHarness.snapshot(0, 1);
+			testHarness.processElement(43, 2);
+			testHarness.notifyOfCompletedCheckpoint(0);
+			producerSnapshot = testHarness.snapshot(1, 3);
+			testHarness.processElement(44, 4);
+		}
+
+		try (OneInputStreamOperatorTestHarness<Integer, Object> testHarness = createTestHarness(topic, AT_LEAST_ONCE)) {
+			testHarness.setup();
+			// restore from snapshot
+			testHarness.initializeState(producerSnapshot);
+			testHarness.open();
+
+			// write and commit more records, after potentially lingering transactions
+			testHarness.processElement(44, 7);
+			testHarness.snapshot(2, 8);
+			testHarness.processElement(45, 9);
+		}
+
+		// now we should have:
+		// - records 42 and 43 in committed transactions
+		// - an aborted transaction containing record 44 (invisible to read_committed consumers)
+		// - records 44 and 45 written directly by the at-least-once producer
+		assertExactlyOnceForTopic(createProperties(), topic, 0, Arrays.asList(42, 43, 44, 45), 30_000L);
 Review comment:
   Could you provide an overloaded version of `assertExactlyOnceForTopic` with a default value for `long timeoutMillis = 30_000L`, as a separate commit? I know that it was like that before, but I have only now realised how often the magic constant `30_000L` is duplicated (mostly by me) everywhere...
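   For illustration only, a minimal sketch of what such an overload could look like. The five-argument signature it delegates to is assumed from the call sites in this diff (parameter names, the `protected` visibility, and the `throws Exception` clause are my guesses, not the actual declaration in the test base class):

   ```java
   // Hypothetical convenience overload: delegates to the existing assertion with a
   // default timeout so call sites no longer repeat the magic constant 30_000L.
   // The delegated-to five-argument signature is inferred from the calls above.
   protected void assertExactlyOnceForTopic(
   		Properties properties,
   		String topic,
   		int partition,
   		List<Integer> expectedElements) throws Exception {
   	assertExactlyOnceForTopic(properties, topic, partition, expectedElements, 30_000L);
   }
   ```

   Call sites such as the two assertions in this diff could then drop the trailing `30_000L` argument.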