kumarpritam863 commented on issue #13763:
URL: https://github.com/apache/iceberg/issues/13763#issuecomment-3167998809

   @hguercan I do not think this can happen with the sink, for the following 
reasons:
   1. The file metadata commit and the offset commit are tied together in a 
transaction, so either the files are written along with the offset commit or 
nothing happens. Before committing metadata, the coordinator also checks that 
the control offsets have been written. Even if the coordinator commits the 
metadata but fails to commit its Kafka offset, it will simply skip those files 
on replay, because when it compares them it will find that they have already 
been written to the table.
   2. Iceberg does not allow this, and your initial Spark query itself would 
fail with an error.
   3. See this thread on 
[slack](https://apache-iceberg.slack.com/archives/C025PH0G1D4/p1698676018510089):
 they are writing data with Spark streaming and are still facing the same issue. 
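   The dedupe behavior described in point 1 can be sketched as follows. This is 
a simplified, hypothetical illustration (class and property names are invented, 
not the actual Iceberg sink code): the coordinator persists the highest 
control-topic offset alongside each table commit, and on replay it skips any 
batch whose offset is at or below the committed one, so a failed Kafka offset 
commit after a successful table commit cannot cause duplicates.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the idempotent-commit check, not real Iceberg code.
class CoordinatorSketch {
    // Simulates summary properties persisted atomically with each table commit.
    private final Map<String, String> snapshotSummary = new HashMap<>();

    long committedControlOffset() {
        return Long.parseLong(snapshotSummary.getOrDefault("control-offset", "-1"));
    }

    // Returns true if the batch was committed, false if skipped as a duplicate.
    boolean commitBatch(long batchControlOffset) {
        if (batchControlOffset <= committedControlOffset()) {
            // Already in the table: skip instead of committing the files twice.
            return false;
        }
        // File metadata and the control offset land in one atomic commit.
        snapshotSummary.put("control-offset", Long.toString(batchControlOffset));
        return true;
    }

    public static void main(String[] args) {
        CoordinatorSketch c = new CoordinatorSketch();
        System.out.println(c.commitBatch(10));  // first commit: true
        System.out.println(c.commitBatch(10));  // replay after offset-commit failure: false
        System.out.println(c.commitBatch(11));  // next batch: true
    }
}
```

   The key design point is that the offset is stored in the same atomic commit 
as the data, so there is no window in which the table and the recorded offset 
can disagree.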


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

