There are many options. For example, another simple consumer could read from 
the local broker and republish the messages to a second (main) broker. 
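
For illustration, a minimal sketch of such a mirroring consumer using the 
Java clients (topic name, group id, and broker addresses are assumptions; 
Kafka's bundled MirrorMaker tool does essentially this):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SimpleMirror {
        public static void main(String[] args) {
            Properties c = new Properties();
            c.put("bootstrap.servers", "localhost:9092");   // local broker
            c.put("group.id", "mirror");
            c.put("key.deserializer",
                  "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            c.put("value.deserializer",
                  "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            Properties p = new Properties();
            p.put("bootstrap.servers", "kafka-main:9092");  // main cluster (assumed host)
            p.put("key.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
            p.put("value.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");

            try (KafkaConsumer<byte[], byte[]> in = new KafkaConsumer<>(c);
                 KafkaProducer<byte[], byte[]> out = new KafkaProducer<>(p)) {
                in.subscribe(Collections.singletonList("events"));
                while (true) {
                    // Read a batch from the local broker and republish it.
                    ConsumerRecords<byte[], byte[]> records = in.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<byte[], byte[]> r : records) {
                        out.send(new ProducerRecord<>("events", r.key(), r.value()));
                    }
                }
            }
        }
    }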

Philip

> On Nov 28, 2013, at 4:18 PM, Steve Morin <steve.mo...@gmail.com> wrote:
> 
> Philip,
>  How do you mirror this to a main Kafka instance?
> -Steve
> 
>> On Nov 28, 2013, at 16:14, Philip O'Toole <phi...@loggly.com> wrote:
>> 
>> I should add that in our custom producers we buffer in RAM if required, so 
>> Kafka can be restarted, etc. But I would never code streaming to disk now; I 
>> would just run a Kafka instance on the same node. 
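>> 
>> A minimal sketch of that buffer-in-RAM approach (names and capacity are 
>> illustrative assumptions, not the actual Loggly producer):
>> 
>>     import java.util.Properties;
>>     import java.util.concurrent.ArrayBlockingQueue;
>>     import java.util.concurrent.BlockingQueue;
>>     import org.apache.kafka.clients.producer.KafkaProducer;
>>     import org.apache.kafka.clients.producer.ProducerRecord;
>> 
>>     public class BufferedProducer {
>>         // Bounded RAM buffer: absorbs messages while the broker restarts.
>>         private final BlockingQueue<byte[]> buffer = new ArrayBlockingQueue<>(100_000);
>>         private final KafkaProducer<byte[], byte[]> producer;
>> 
>>         // props must carry bootstrap.servers and byte-array serializers.
>>         public BufferedProducer(Properties props) {
>>             producer = new KafkaProducer<>(props);
>>             Thread drainer = new Thread(this::drain, "kafka-drainer");
>>             drainer.setDaemon(true);
>>             drainer.start();
>>         }
>> 
>>         // Called by the application; never blocks on Kafka itself.
>>         public boolean offer(byte[] msg) {
>>             return buffer.offer(msg);  // false == buffer full, caller decides
>>         }
>> 
>>         private void drain() {
>>             while (true) {
>>                 try {
>>                     byte[] msg = buffer.take();
>>                     // If Kafka is down, the client blocks/retries here while
>>                     // the RAM queue absorbs new messages.
>>                     producer.send(new ProducerRecord<>("events", msg));
>>                 } catch (InterruptedException e) {
>>                     Thread.currentThread().interrupt();
>>                     return;
>>                 }
>>             }
>>         }
>>     }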
>> 
>> Philip
>> 
>>> On Nov 28, 2013, at 4:08 PM, Philip O'Toole <phi...@loggly.com> wrote:
>>> 
>>> By FS I guess you mean file system. 
>>> 
>>> In that case, if one is that concerned, why not run a single Kafka broker 
>>> on the same machine, and connect to it over localhost? And disable ZK mode 
>>> too, perhaps. 
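>>> 
>>> With the modern Java client that is just a config change, and the client 
>>> talks only to the broker, never to ZooKeeper (broker address is an 
>>> assumption; older producers used e.g. metadata.broker.list instead):
>>> 
>>>     import java.util.Properties;
>>>     import org.apache.kafka.clients.producer.KafkaProducer;
>>> 
>>>     public class LocalProducerFactory {
>>>         // Producer pointed at a colocated broker over loopback only.
>>>         public static KafkaProducer<String, String> create() {
>>>             Properties props = new Properties();
>>>             props.put("bootstrap.servers", "localhost:9092");
>>>             props.put("key.serializer",
>>>                 "org.apache.kafka.common.serialization.StringSerializer");
>>>             props.put("value.serializer",
>>>                 "org.apache.kafka.common.serialization.StringSerializer");
>>>             return new KafkaProducer<>(props);
>>>         }
>>>     }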
>>> 
>>> I may be missing something, but I have never fully understood why people 
>>> try really hard to build a stream-to-disk backup approach when they might 
>>> be able to couple tightly to Kafka, which, well, already just streams to 
>>> disk. 
>>> 
>>> Philip
>>> 
>>>> On Nov 28, 2013, at 3:58 PM, Otis Gospodnetic <otis.gospodne...@gmail.com> 
>>>> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> We've done this at Sematext, where we use Kafka in all 3 products/services
>>>> you see in my signature.  When we fail to push a message into Kafka, we
>>>> store it on the FS and process it from there later.
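>>>> 
>>>> A hedged sketch of that send-or-spool fallback (path, topic, and the
>>>> synchronous send are illustrative assumptions, not the actual Sematext
>>>> implementation):
>>>> 
>>>>     import java.io.IOException;
>>>>     import java.nio.file.Files;
>>>>     import java.nio.file.Path;
>>>>     import java.nio.file.Paths;
>>>>     import java.nio.file.StandardOpenOption;
>>>>     import java.util.concurrent.ExecutionException;
>>>>     import org.apache.kafka.clients.producer.KafkaProducer;
>>>>     import org.apache.kafka.clients.producer.ProducerRecord;
>>>> 
>>>>     public class SpoolingSender {
>>>>         private static final Path SPOOL = Paths.get("/var/spool/kafka-failed.log");
>>>>         private final KafkaProducer<String, String> producer;
>>>> 
>>>>         public SpoolingSender(KafkaProducer<String, String> producer) {
>>>>             this.producer = producer;
>>>>         }
>>>> 
>>>>         public void send(String msg) throws IOException {
>>>>             try {
>>>>                 // Block until the broker acks, so failures surface here.
>>>>                 producer.send(new ProducerRecord<>("events", msg)).get();
>>>>             } catch (InterruptedException | ExecutionException e) {
>>>>                 // Kafka unreachable: append to the FS for later replay.
>>>>                 Files.write(SPOOL, (msg + "\n").getBytes(),
>>>>                     StandardOpenOption.CREATE, StandardOpenOption.APPEND);
>>>>             }
>>>>         }
>>>>     }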
>>>> 
>>>> Otis
>>>> --
>>>> Performance Monitoring * Log Analytics * Search Analytics
>>>> Solr & Elasticsearch Support * http://sematext.com/
>>>> 
>>>> 
>>>> On Thu, Nov 28, 2013 at 9:37 AM, Demian Berjman 
>>>> <dberj...@despegar.com> wrote:
>>>> 
>>>>> Hi.
>>>>> 
>>>>> Has anyone built a durable retry system for the producer in case the
>>>>> Kafka cluster is down? We have certain messages that must be sent because
>>>>> they come after a transaction that cannot be undone.
>>>>> 
>>>>> We could set the property "message.send.max.retries", but if the producer
>>>>> itself goes down, we lose the messages.
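>>>>> 
>>>>> For reference, a sketch of setting that property against the 0.8-era
>>>>> producer API (broker list, topic, and values are assumptions):
>>>>> 
>>>>>     import java.util.Properties;
>>>>>     import kafka.javaapi.producer.Producer;
>>>>>     import kafka.producer.KeyedMessage;
>>>>>     import kafka.producer.ProducerConfig;
>>>>> 
>>>>>     public class RetryingSend {
>>>>>         public static void main(String[] args) {
>>>>>             Properties props = new Properties();
>>>>>             props.put("metadata.broker.list", "broker1:9092,broker2:9092");
>>>>>             props.put("serializer.class", "kafka.serializer.StringEncoder");
>>>>>             props.put("request.required.acks", "1");      // wait for leader ack
>>>>>             props.put("message.send.max.retries", "10");  // default is 3
>>>>>             props.put("retry.backoff.ms", "500");
>>>>>             Producer<String, String> producer =
>>>>>                 new Producer<String, String>(new ProducerConfig(props));
>>>>>             // Retries cover transient failures only; they do not survive
>>>>>             // a crash of the producer process itself.
>>>>>             producer.send(new KeyedMessage<String, String>("events", "payload"));
>>>>>             producer.close();
>>>>>         }
>>>>>     }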
>>>>> 
>>>>> Thanks,
>>>>> 
