MoanaStirling opened a new issue, #1731:
URL: https://github.com/apache/camel-kafka-connector/issues/1731

   It appears that the SQS Source, when configured to delete after read, deletes 
messages immediately after reading, before the message has been written to 
Kafka. Is this intended? From local testing, if the connector is able to read 
from SQS but unable to produce to Kafka, and the Kafka Connect worker crashes, 
then the message is permanently lost. I've looked at disabling delete-after-read 
and enabling idempotency as a potential solution, but I'm currently using FIFO 
queues, which will not move past messages until they are deleted. And ideally I 
would want these messages deleted only after successful processing regardless.
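   For reference, a minimal sketch of the configuration in question, assuming 
the kamelet-based aws-sqs-source connector (connector class, topic, queue name, 
region, and credentials below are placeholders, not my actual setup):

   ```properties
   # Hypothetical sketch of an SQS source connector config exhibiting the issue.
   # Property names follow the aws-sqs-source kamelet; values are placeholders.
   name=sqs-source-example
   connector.class=org.apache.camel.kafkaconnector.awssqssource.CamelAwssqssourceSourceConnector
   topics=example-topic
   camel.kamelet.aws-sqs-source.queueNameOrArn=example-queue.fifo
   camel.kamelet.aws-sqs-source.region=eu-west-1
   camel.kamelet.aws-sqs-source.accessKey=...
   camel.kamelet.aws-sqs-source.secretKey=...
   # true (the default) deletes the SQS message as soon as it is consumed,
   # i.e. before the corresponding record has been produced to Kafka
   camel.kamelet.aws-sqs-source.deleteAfterRead=true
   ```

   With deleteAfterRead=true the delete happens on consume, so any failure 
between the SQS read and the Kafka produce loses the message (at-most-once); 
with it set to false, a FIFO queue redelivers the same head-of-line message 
until it is deleted, which is the blocking behavior described above.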
   
   I am currently using 4.8.0. I'm not sure whether subsequent releases have 
fixed this, but I haven't seen any release notes indicating that they have.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]