[ https://issues.apache.org/jira/browse/FLINK-23814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17408115#comment-17408115 ]

Hang Ruan commented on FLINK-23814:
-----------------------------------

In addition to the normal scenarios tested before, I added logic that throws an 
exception while committing committables, as follows, to simulate the situation 
described in https://issues.apache.org/jira/browse/FLINK-23896.
{code:java}
// KafkaCommitter.java

class KafkaCommitter implements Committer<KafkaCommittable> {

    ......

    private int commitTimes = 0;

    ......
    
    @Override
    public List<KafkaCommittable> commit(List<KafkaCommittable> committables)
            throws IOException {
        // Throw an exception after a successful commit.
        if (commitTimes > 0) {
            throw new RuntimeException("commit failed.");
        }
        commitTimes++;

        ......
    }
}{code}
I did not observe any unexpected problems. 
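
For reference, a minimal sketch of the kind of job that drives this commit path 
(it assumes the Flink 1.14 KafkaSink builder API; the bootstrap servers, topic 
name, and transactional id prefix are placeholders):
{code:java}
// Minimal sketch: an exactly-once job that triggers KafkaCommitter#commit on each checkpoint.
// Bootstrap servers, topic name, and transactional id prefix are placeholders.
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.Properties;

public class ExactlyOnceKafkaSinkJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Committables are committed when a checkpoint completes, so checkpointing must be enabled.
        env.enableCheckpointing(1000L);

        // Keep the transaction timeout above the checkpoint interval and below the
        // broker's transaction.max.timeout.ms.
        Properties producerProps = new Properties();
        producerProps.setProperty("transaction.timeout.ms", "600000");

        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setKafkaProducerConfig(producerProps)
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("test-topic")
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                .setTransactionalIdPrefix("flink-23814-test")
                .build();

        env.fromSequence(0, Long.MAX_VALUE)
                .map(value -> "value-" + value)
                .sinkTo(sink);

        env.execute("FLINK-23814 exactly-once KafkaSink test");
    }
}
{code}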

I think we could move the ticket to done.
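
For the exactly-once visibility scenario in the description, a plain consumer with 
`read_committed` isolation can be used to confirm that only committed records are 
visible; a rough sketch with the standard kafka-clients consumer (servers, topic, 
and group id are placeholders):
{code:java}
// Sketch: read only records from committed Kafka transactions.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReadCommittedChecker {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "flink-23814-checker");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        // With read_committed, records from open or aborted transactions are not returned.
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
{code}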

> Test FLIP-143 KafkaSink
> -----------------------
>
>                 Key: FLINK-23814
>                 URL: https://issues.apache.org/jira/browse/FLINK-23814
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Connectors / Kafka
>            Reporter: Fabian Paul
>            Assignee: Hang Ruan
>            Priority: Blocker
>              Labels: release-testing
>             Fix For: 1.14.0
>
>
> The following scenarios are worthwhile to test
>  * Start a simple job with none/at-least-once delivery guarantee and write 
> records to a Kafka topic
>  * Start a simple job with exactly-once delivery guarantee and write records to 
> a Kafka topic. The records should only be visible to a `read-committed` 
> consumer
>  * Stop a job with exactly-once delivery guarantee and restart it with a 
> different parallelism (scale-down, scale-up)
>  * Restart/kill a taskmanager while writing in exactly-once mode



--
This message was sent by Atlassian Jira
(v8.3.4#803005)