[ 
https://issues.apache.org/jira/browse/FLINK-26416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17503566#comment-17503566
 ] 

Liu commented on FLINK-26416:
-----------------------------

[~fpaul], thanks. I have tested the three cases and all of them passed. I 
generate data in a source and write it to files with the FileSink. The output 
is identical to the source data.

The last two cases are easy to test. For the first case, I assigned uids to the 
operators and rebuilt my Java job jar against Flink 1.15, because the Sink 
interface is not compatible between the two versions.
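
For reference, below is a minimal sketch of the kind of job I used for the 
bounded case. The class name, output path, and uid strings are placeholders 
and not the exact code I ran:

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.connector.file.sink.FileSink;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class FileSinkSanityCheck {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // The FileSink only commits files on checkpoints.
            env.enableCheckpointing(10_000);

            FileSink<String> sink = FileSink
                    .forRowFormat(new Path("/tmp/filesink-sanity"),
                            new SimpleStringEncoder<String>("UTF-8"))
                    .build();

            env.fromSequence(1, 1_000_000)        // bounded source, easy to verify afterwards
                    .uid("sequence-source")       // explicit uids so savepoint state maps across versions
                    .map(v -> String.valueOf(v))
                    .returns(Types.STRING)
                    .uid("to-string")
                    .sinkTo(sink)
                    .uid("file-sink");

            env.execute("FileSink sanity check");
        }
    }

For the two savepoint cases I run a long-running variant of the same pipeline, 
stop it with a savepoint, resume from that savepoint with the 1.15 binaries, and 
then compare the written files against the generated data.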

> Release Testing: Sink V2 sanity checks
> --------------------------------------
>
>                 Key: FLINK-26416
>                 URL: https://issues.apache.org/jira/browse/FLINK-26416
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Common
>    Affects Versions: 1.15.0
>            Reporter: Fabian Paul
>            Assignee: Liu
>            Priority: Blocker
>              Labels: release-testing
>             Fix For: 1.15.0
>
>
> With the introduction of Sink V2, the operator model of the sink changed 
> slightly; therefore, it makes sense to test different upgrade/sanity scenarios.
>  
> You can take any of the existing Sinks in the project. I would recommend the 
> FileSink.
>  
>  # Run a job with Flink 1.14, take a savepoint, and try to restore and 
> resume with 1.15
>  # Run a job with Flink 1.15, take a savepoint, and try to restore and 
> resume with 1.15
>  # Run a bounded job with Flink 1.15
>  
> In all cases, please verify that all records have been written at the end of 
> the scenario and there are no duplicates.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
