Not exactly. But we do pay an enormous amount of attention to our 
producer-Kafka-consumer subsystems. They are certainly mission-critical for us. 

Philip

> On Nov 28, 2013, at 4:12 PM, Steve Morin <steve.mo...@gmail.com> wrote:
> 
> Philip,
>  Do you do that at loggly?
> 
> Otis,
>  How was your retry code structured?  Have you open sourced it?
> 
>> On Nov 28, 2013, at 16:08, Philip O'Toole <phi...@loggly.com> wrote:
>> 
>> By FS I guess you mean file system. 
>> 
>> In that case, if one is that concerned, why not run a single Kafka broker on 
>> the same machine, and connect to it over localhost? And disable ZK mode too, 
>> perhaps. 
>> 
>> I may be missing something, but I have never fully understood why people try 
>> really hard to build a stream-to-disk backup approach, when they might be 
>> able to couple tightly to Kafka, which, well, just streams to disk. 
>> 
>> Philip
>> 
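
A minimal sketch of the co-located-broker setup suggested above: a producer
pointed at a broker on the same machine over loopback. This uses the newer
Java producer client (which postdates this thread); the port, topic name,
class name, and payload are placeholders.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.util.Properties;

    public class LocalBrokerProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker running on the same machine, reached over loopback.
            props.put("bootstrap.servers", "localhost:9092");
            // Wait for the broker to acknowledge before considering the send done.
            props.put("acks", "all");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // Topic and payload are placeholders.
                producer.send(new ProducerRecord<>("events", "some-payload"));
            }  // close() flushes any buffered sends
        }
    }
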
>>> On Nov 28, 2013, at 3:58 PM, Otis Gospodnetic <otis.gospodne...@gmail.com> 
>>> wrote:
>>> 
>>> Hi,
>>> 
>>> We've done this at Sematext, where we use Kafka in all 3 products/services
>>> you see in my signature.  When we fail to push a message into Kafka, we
>>> store it in the FS, and from there we can process it later.
>>> 
>>> Otis
>>> --
>>> Performance Monitoring * Log Analytics * Search Analytics
>>> Solr & Elasticsearch Support * http://sematext.com/
>>> 
>>> 
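
A minimal sketch of the try-Kafka-first, spill-to-disk fallback described
above, assuming the newer Java producer client; the class name, spool-file
format, timeout, and topic handling are illustrative guesses, not any
particular production implementation.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import java.util.Properties;
    import java.util.concurrent.TimeUnit;

    // Hypothetical illustration of "try Kafka first, spill to the file system on failure".
    public class SpoolingProducer {

        private final Producer<String, String> producer;
        private final Path spoolFile;   // hypothetical local spool location

        public SpoolingProducer(String bootstrapServers, Path spoolFile) {
            Properties props = new Properties();
            props.put("bootstrap.servers", bootstrapServers);
            props.put("acks", "all");
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            this.producer = new KafkaProducer<>(props);
            this.spoolFile = spoolFile;
        }

        // Try Kafka first; if the send fails, append the message to the spool file.
        public void send(String topic, String value) throws IOException {
            try {
                // Block briefly so an unreachable cluster turns into an exception we can act on.
                producer.send(new ProducerRecord<>(topic, value)).get(5, TimeUnit.SECONDS);
            } catch (Exception e) {
                Files.write(spoolFile,
                        (topic + "\t" + value + "\n").getBytes(StandardCharsets.UTF_8),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            }
        }

        // Re-drive spooled messages once the cluster is reachable again.
        public void replaySpool() throws IOException {
            if (!Files.exists(spoolFile)) return;
            for (String line : Files.readAllLines(spoolFile, StandardCharsets.UTF_8)) {
                String[] parts = line.split("\t", 2);
                producer.send(new ProducerRecord<>(parts[0], parts[1]));
            }
            producer.flush();
            Files.delete(spoolFile);  // naive: assumes the replay succeeded
        }
    }
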
>>> On Thu, Nov 28, 2013 at 9:37 AM, Demian Berjman 
>>> <dberj...@despegar.com>wrote:
>>> 
>>>> Hi.
>>>> 
>>>> Has anyone built a durable retry system for the producer in case the
>>>> Kafka cluster is down? We have certain messages that must be sent because
>>>> they come after a transaction that cannot be undone.
>>>> 
>>>> We could set the property "message.send.max.retries", but if the producer
>>>> goes down, we lose the messages.
>>>> 
>>>> Thanks,
>>>> 
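
For reference, a rough sketch of the 0.8-era producer settings around
"message.send.max.retries" mentioned above; the broker list, topic, class
name, and chosen values are placeholders. These retries live only in the
producer's memory, which is exactly why they don't survive a producer crash.

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;
    import java.util.Properties;

    public class RetryingProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "broker1:9092,broker2:9092");  // placeholder brokers
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("request.required.acks", "-1");       // wait for all in-sync replicas
            props.put("producer.type", "sync");             // surface failures on the send() call
            props.put("message.send.max.retries", "10");    // default is 3
            props.put("retry.backoff.ms", "500");           // pause between retries

            Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
            // If every retry fails (or the process dies first), the message is gone;
            // hence the question above about a durable, on-disk fallback.
            producer.send(new KeyedMessage<>("events", "some-payload"));
            producer.close();
        }
    }
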
