[ 
https://issues.apache.org/jira/browse/FLINK-2624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15029881#comment-15029881
 ] 

ASF GitHub Bot commented on FLINK-2624:
---------------------------------------

Github user mxm commented on the pull request:

    https://github.com/apache/flink/pull/1243#issuecomment-160141917
  
    This currently only works when checkpointing is turned on. If checkpointing 
is turned off, not only will the number of queued messages grow without bound, 
but the messages will also never be acknowledged. For RabbitMQ that means the 
messages are kept in the broker forever and are redelivered as soon as another 
consumer connects to the queue.
    
    As for exactly-once, I'm not 100% sure how we can support it. I think we 
have to default to per-message acknowledgements (instead of acknowledging all 
ids <= the last processed id). Only that way can we make sure that upon failure 
we acknowledge the right messages. The reason is that the distribution of 
messages may not be the same after a redeployment of a failed job, i.e. 
consumers may receive messages they have not seen before, so a message may be 
processed more than once.
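
    The difference between the two acknowledgement strategies can be sketched 
as follows (a simplified model, not the real client API — the two functions 
and the example ids are illustrative): after a restart, a consumer may be 
redelivered only some of the outstanding ids, so a cumulative ack of all 
ids <= the last processed id acknowledges messages this consumer never 
actually processed.

```python
# Hypothetical sketch contrasting cumulative acks with per-message acks.

def cumulative_ack(processed_ids):
    # Acknowledges every outstanding id up to the highest processed one,
    # including ids this consumer never processed.
    last = max(processed_ids)
    return set(range(1, last + 1))

def per_message_ack(processed_ids):
    # Acknowledges exactly the ids that were processed.
    return set(processed_ids)

# After a redeployment, this consumer received ids 1, 3, 5; ids 2 and 4
# went to another consumer.
processed = [1, 3, 5]
print(sorted(cumulative_ack(processed)))   # [1, 2, 3, 4, 5] — over-acks 2 and 4
print(sorted(per_message_ack(processed)))  # [1, 3, 5]
```

With cumulative acks, ids 2 and 4 would be acknowledged even though they may 
still be in flight to the other consumer; per-message acks avoid that.
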


> RabbitMQ source / sink should participate in checkpointing
> ----------------------------------------------------------
>
>                 Key: FLINK-2624
>                 URL: https://issues.apache.org/jira/browse/FLINK-2624
>             Project: Flink
>          Issue Type: Bug
>          Components: Streaming Connectors
>    Affects Versions: 0.10.0
>            Reporter: Stephan Ewen
>            Assignee: Hilmi Yildirim
>
> The RabbitMQ connector does not offer any fault tolerance guarantees right 
> now, because it does not participate in the checkpointing.
> We should integrate it in a similar way as the {{FlinkKafkaConsumer}} is 
> integrated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
