[ https://issues.apache.org/jira/browse/HUDI-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17949233#comment-17949233 ]

Zhenqiu Huang commented on HUDI-8881:
-------------------------------------

The issue we observed is that the Flink checkpoint completed before the write 
failure was sent to the JM. Even though the Hudi commit failed and was rolled 
back, the Kafka offsets had already been committed in the checkpoint, so the 
job skips that data after recovering from the failure.
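
To make the ordering concrete, here is a minimal, self-contained timeline 
sketch; every name in it is an illustrative stand-in, not the actual 
Hudi/Flink API:

{code:java}
// Self-contained timeline sketch; all names are illustrative stand-ins,
// not the actual Hudi/Flink API.
public class CheckpointRaceSketch {
    public static void main(String[] args) {
        long offsetAtCheckpointAMinus1 = 50;   // last offset durably committed to Hudi
        long offsetAtCheckpointA = 100;        // offset snapshotted in checkpoint A

        // t0: all write tasks ack checkpoint A; the JM marks it complete and
        //     the Kafka source's offset (100) becomes the restart position.
        long restartOffset = offsetAtCheckpointA;

        // t1: only now does the coordinator attempt the Hudi commit for
        //     checkpoint A; it hits a write error and rolls the instant back.
        boolean hudiCommitSucceeded = false;

        // t2: the job fails and restarts from checkpoint A. The source resumes
        //     at offset 100, but Hudi only holds data up to checkpoint A-1, so
        //     records in (50, 100] are never re-read: silent data loss.
        if (!hudiCommitSucceeded) {
            System.out.println("restart offset = " + restartOffset
                + ", records after offset " + offsetAtCheckpointAMinus1
                + " are missing from the table");
        }
    }
}
{code}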

> Potential data loss in Flink Hudi sink
> -------------------------------------
>
>                 Key: HUDI-8881
>                 URL: https://issues.apache.org/jira/browse/HUDI-8881
>             Project: Apache Hudi
>          Issue Type: Improvement
>          Components: flink
>            Reporter: Zhenqiu Huang
>            Assignee: Zhenqiu Huang
>            Priority: Critical
>              Labels: pull-request-available
>
> Let's say checkpoint A is completed, then the coordinator starts the Hudi 
> commit. If there is a write error, the table will be rolled back and the 
> Flink job will fail once the exception is thrown. What happens when the job 
> restarts from the failure? From my understanding, the data between the Kafka 
> offset in checkpoint A-1 and the offset in checkpoint A will be lost.
> We should check whether the WriteMetadataEvent contains a write failure as 
> soon as the event is received; if there is any write failure, roll back 
> immediately and throw an exception to prevent the checkpoint from completing. 
> When the job restarts, it can then resume from the correct Kafka offset (see 
> the sketch after this quote).
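
A hedged sketch of the proposed check, using stand-in types: the real 
WriteMetadataEvent and WriteStatus live in the Hudi Flink sink, and the 
method names below are assumptions for illustration only.

{code:java}
import java.util.List;

// Stand-in types: the real WriteMetadataEvent and WriteStatus live in the
// Hudi Flink sink; the method names here are assumptions for illustration.
public class EagerFailureCheckSketch {
    static class WriteStatus {
        boolean hasErrors() { return false; }
    }

    static class WriteMetadataEvent {
        List<WriteStatus> getWriteStatuses() { return List.of(); }
        String getInstantTime() { return "20250101000000000"; }
    }

    // Invoked when the coordinator receives an event from a write task,
    // i.e. before the checkpoint is allowed to complete.
    void handleWriteMetadataEvent(WriteMetadataEvent event) {
        boolean hasFailure = event.getWriteStatuses().stream()
            .anyMatch(WriteStatus::hasErrors);
        if (hasFailure) {
            rollback(event.getInstantTime());
            // Failing here aborts the in-flight checkpoint, so the Kafka
            // offsets for it are never committed and the restarted job
            // re-reads the affected range instead of skipping it.
            throw new IllegalStateException("Write failure in instant "
                + event.getInstantTime() + "; rolled back and failing job "
                + "to abort the checkpoint");
        }
        buffer(event); // normal path: hold metadata until commit time
    }

    void rollback(String instantTime) { /* roll back the inflight instant */ }
    void buffer(WriteMetadataEvent event) { /* stash for the commit */ }
}
{code}

The key property is that the failure surfaces on the event path, before the 
checkpoint completes, rather than on the commit path, after the offsets are 
already durable.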



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
