gaborgsomogyi commented on issue #25618: [SPARK-28908][SS] Implement Kafka EOS sink for Structured Streaming
URL: https://github.com/apache/spark/pull/25618#issuecomment-526282547
 
 
   By reading the doc without super deep understanding I've found this in the caveats section:
   ```
   If the job fails before ResumeTransaction for more than 60 seconds, the default
   value of the configuration transaction.timeout.ms, data sent to the Kafka cluster
   will be discarded and lead to data loss. So we set transaction.timeout.ms to
   900000, the default value of max.transaction.timeout.ms in the Kafka cluster, to
   reduce the risk of data loss if the user has not defined it.
   ```
   The `to reduce the risk of data loss` part disturbs me a bit: is it exactly-once then or not?
   @HeartSaVioR AFAIK you've proposed an exactly-once SPIP before, but there were concerns.
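   To make the quoted caveat concrete, here is a minimal sketch (not code from the PR) of the producer settings it describes: 60 seconds is Kafka's default `transaction.timeout.ms`, and 900000 ms is the broker-side default maximum that the doc says is used as a fallback. The class name, the `transactional.id`, and the broker address are placeholders I invented for illustration; real Kafka client calls are omitted so the snippet stays self-contained.

   ```java
   import java.util.Optional;
   import java.util.Properties;

   // Illustrative sketch of the caveat's fallback logic: if the user did not
   // configure a transaction timeout, raise it from Kafka's 60 s default to
   // 900000 ms (15 min), so an in-flight transaction survives a longer gap
   // between job failure and ResumeTransaction.
   public class TxnTimeoutSketch {
       static Properties producerProps(Optional<Long> userTimeoutMs) {
           Properties props = new Properties();
           props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
           props.put("transactional.id", "spark-eos-sink-0"); // hypothetical id
           // Fall back to the broker-side default maximum when unset.
           long timeout = userTimeoutMs.orElse(900_000L);
           props.put("transaction.timeout.ms", Long.toString(timeout));
           return props;
       }
   }
   ```

   Note that even with the 900000 ms fallback, a recovery gap longer than the configured timeout still aborts the transaction, which is presumably why the doc hedges with "reduce the risk" rather than "eliminate".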
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
