danny0405 commented on issue #18051:
URL: https://github.com/apache/hudi/issues/18051#issuecomment-3822224921

Let's hold off on the fix for now; the current throwing behavior is engine specific. In Spark streaming there is an option named `hoodie.datasource.write.streaming.ignore.failed.batch`, which is checked right before the instant commit. That check works because the commit transaction is synchronous in Spark streaming, whereas it does not work in Flink because the checkpoint (and thus the commit) is asynchronous (see the rough sketch below).
   
There are two ways to fix the issue:

1. collect the write failures after each file write and throw;
2. throw directly in the write handle.
   
We also need to consider how to standardize this behavior (e.g. unifying the option names). Option `#1` requires the fewest changes, but I think `#2` may be the more proper fix; a rough sketch of both options follows.
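
A hypothetical sketch of the two options (class and field names are invented for illustration, not Hudi's real write handle API):

```java
// Illustrative only: one handle showing both behaviors side by side.
class WriteHandleSketch {
  private final boolean failFast;  // true  -> option #2: throw directly in the handle
  private long errorCount = 0;     // false -> option #1: collect, let the caller throw

  WriteHandleSketch(boolean failFast) {
    this.failFast = failFast;
  }

  void write(String record) {
    try {
      // ... write the record to the data file ...
    } catch (RuntimeException e) {
      errorCount++;
      if (failFast) {
        // Option #2: fail the task immediately on the (synchronous) write path.
        throw new RuntimeException("Record write failed: " + record, e);
      }
      // Option #1: only record the failure here; the caller inspects
      // getErrorCount() after the file write finishes and throws there.
    }
  }

  long getErrorCount() {
    return errorCount;
  }
}
```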

