pnowojski commented on a change in pull request #7010: [FLINK-10353][kafka] Support change of transactional semantics in Kaf…
URL: https://github.com/apache/flink/pull/7010#discussion_r231069235
 
 

 ##########
 File path: flink-connectors/flink-connector-kafka-0.11/src/test/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011ITCase.java
 ##########
 @@ -566,6 +568,76 @@ public void testRunOutOfProducersInThePool() throws Exception {
                deleteTestTopic(topic);
        }
 
+       @Test
+       public void testMigrateFromAtLeastOnceToExactlyOnce() throws Exception {
+               String topic = "testMigrateFromAtLeastOnceToExactlyOnce";
+
+               OperatorSubtaskState producerSnapshot;
+               try (OneInputStreamOperatorTestHarness<Integer, Object> testHarness = createTestHarness(topic, AT_LEAST_ONCE)) {
+                       testHarness.setup();
+                       testHarness.open();
+                       testHarness.processElement(42, 0);
+                       testHarness.snapshot(0, 1);
+                       testHarness.processElement(43, 2);
+                       testHarness.notifyOfCompletedCheckpoint(0);
+                       producerSnapshot = testHarness.snapshot(1, 3);
+                       testHarness.processElement(44, 4);
+               }
+
+               try (OneInputStreamOperatorTestHarness<Integer, Object> testHarness = createTestHarness(topic, EXACTLY_ONCE)) {
+                       testHarness.setup();
+                       // restore from snapshot, all records until here should be persisted
+                       testHarness.initializeState(producerSnapshot);
+                       testHarness.open();
+
+                       // write and commit more records
+                       testHarness.processElement(44, 7);
+                       testHarness.snapshot(2, 8);
+                       testHarness.processElement(45, 9);
+               }
+
+               // now we should have:
+               // - records 42, 43, 44 in directly flushed writes from at-least-once
+               // - aborted transactions with records 44 and 45
+               assertExactlyOnceForTopic(createProperties(), topic, 0, Arrays.asList(42, 43, 44), 30_000L);
+               deleteTestTopic(topic);
+       }
+
+       @Test
+       public void testMigrateFromExactlyOnceToAtLeastOnce() throws Exception {
+               String topic = "testMigrateFromExactlyOnceToAtLeastOnce";
+
+               OperatorSubtaskState producerSnapshot;
+               try (OneInputStreamOperatorTestHarness<Integer, Object> testHarness = createTestHarness(topic, EXACTLY_ONCE)) {
 
 Review comment:
   You could either provide the expected result to the deduplicated method, or do the assertion outside of it. I was thinking about the latter, but maybe the former is better. Either is fine with me.
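   For illustration, a minimal sketch of the first option: the deduplicated helper takes the expected records as a parameter, so each test only states its own expectation. The helper name testRecoverWithChangedSemantic and its signature are hypothetical; createTestHarness, createProperties, assertExactlyOnceForTopic and deleteTestTopic are the helpers already used in this test class, and the record sequence simply mirrors the at-least-once to exactly-once test above.

```java
// Sketch only: deduplicated helper parameterized by semantics and expected records.
// Assumes it lives inside FlinkKafkaProducer011ITCase and that AT_LEAST_ONCE /
// EXACTLY_ONCE are the statically imported FlinkKafkaProducer011.Semantic constants.
private void testRecoverWithChangedSemantic(
        String topic,
        FlinkKafkaProducer011.Semantic fromSemantic,
        FlinkKafkaProducer011.Semantic toSemantic,
        List<Integer> expectedRecords) throws Exception {

    OperatorSubtaskState producerSnapshot;
    try (OneInputStreamOperatorTestHarness<Integer, Object> testHarness = createTestHarness(topic, fromSemantic)) {
        testHarness.setup();
        testHarness.open();
        testHarness.processElement(42, 0);
        testHarness.snapshot(0, 1);
        testHarness.processElement(43, 2);
        testHarness.notifyOfCompletedCheckpoint(0);
        producerSnapshot = testHarness.snapshot(1, 3);
        testHarness.processElement(44, 4);
    }

    try (OneInputStreamOperatorTestHarness<Integer, Object> testHarness = createTestHarness(topic, toSemantic)) {
        testHarness.setup();
        // restore from the snapshot taken under the old semantic
        testHarness.initializeState(producerSnapshot);
        testHarness.open();

        testHarness.processElement(44, 7);
        testHarness.snapshot(2, 8);
        testHarness.processElement(45, 9);
    }

    // which records must be visible depends on the migration direction,
    // so the caller passes the expected result in
    assertExactlyOnceForTopic(createProperties(), topic, 0, expectedRecords, 30_000L);
    deleteTestTopic(topic);
}

@Test
public void testMigrateFromAtLeastOnceToExactlyOnce() throws Exception {
    testRecoverWithChangedSemantic(
        "testMigrateFromAtLeastOnceToExactlyOnce",
        AT_LEAST_ONCE,
        EXACTLY_ONCE,
        Arrays.asList(42, 43, 44));
}
```

   The advantage over asserting outside the helper is that the expectation still reads inline in each @Test method while the harness plumbing is written only once.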
