Hi,

Right, a broker on localhost and a localhost connection don't help if the broker is actually down. It's not only about network reachability and such. We write to the FS (yes, file system) because it's the simplest thing we can do and the place where we're least likely to hit some other issue. Yes, much like Kafka brokers themselves write to disk, but this helps when Kafka is down for some reason.
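Roughly the shape of it, as a minimal sketch against the 0.8 producer API -- the topic, the spill-file path, and the spill() helper are made up for illustration, not our actual code:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class SpillingProducer {
        private final Producer<String, String> producer;
        // Hypothetical spill location; anything durable and local works.
        private final String spillPath = "/var/spool/kafka-spill/messages.log";

        public SpillingProducer(String brokerList) {
            Properties props = new Properties();
            props.put("metadata.broker.list", brokerList);
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("producer.type", "sync");           // fail fast so we can spill
            props.put("request.required.acks", "1");
            props.put("message.send.max.retries", "3");   // in-memory retries first
            producer = new Producer<String, String>(new ProducerConfig(props));
        }

        public void sendOrSpill(String topic, String message) {
            try {
                producer.send(new KeyedMessage<String, String>(topic, message));
            } catch (RuntimeException e) {
                // Retries exhausted (e.g. broker down): fall back to the file system.
                spill(topic, message);
            }
        }

        private void spill(String topic, String message) {
            try {
                FileWriter out = new FileWriter(spillPath, true); // append
                out.write(topic + "\t" + message + "\n");          // naive format: no tabs/newlines in payload
                out.close();
            } catch (IOException ioe) {
                // Last resort: both Kafka and the local disk are unavailable.
                throw new RuntimeException("Could not spill message to disk", ioe);
            }
        }
    }
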
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/


On Thu, Nov 28, 2013 at 8:04 PM, Philip O'Toole <phi...@loggly.com> wrote:

> Sure, and the disk could go bad, the machine itself could fail.
>
> My point is that my experience of Kafka 0.72 has been that it is very
> reliable. The only time I have seen it go down is when the disk underneath
> fills up. So if one is going to write all the code to stream to disk
> *efficiently* from in-process, one should trade off that cost versus
> connecting to a process over localhost, which has been shown to do a very
> good job of just that.
>
> It's fair to ask what if the broker process goes down. But it's fair to
> ask what if there is a bug in the stream-to-disk code you write? What if
> your process goes down?
>
> I am not saying I am right. Just that engineering is about trade-offs, and
> a Kafka instance running right there on the same machine might provide the
> reliability required. But it might not. As always, YMMV.
>
> Philip
>
>
> On Nov 28, 2013, at 4:39 PM, Diego Parra <diegolpa...@gmail.com> wrote:
>
> > Philip, what about if the broker goes down?
> > I may be missing something.
> >
> > Diego.
> > On 28/11/2013 21:09, "Philip O'Toole" <phi...@loggly.com> wrote:
> >
> >> By FS I guess you mean file system.
> >>
> >> In that case, if one is that concerned, why not run a single Kafka broker
> >> on the same machine, and connect to it over localhost? And disable ZK mode
> >> too, perhaps.
> >>
> >> I may be missing something, but I never fully understand why people try
> >> really hard to build a stream-to-disk backup approach, when they might be
> >> able to couple tightly to Kafka, which, well, just streams to disk.
> >>
> >> Philip
> >>
> >> On Nov 28, 2013, at 3:58 PM, Otis Gospodnetic <otis.gospodne...@gmail.com>
> >> wrote:
> >>
> >>> Hi,
> >>>
> >>> We've done this at Sematext, where we use Kafka in all 3 products/services
> >>> you see in my signature. When we fail to push a message into Kafka we
> >>> store it in the FS and from there we can process it later.
> >>>
> >>> Otis
> >>> --
> >>> Performance Monitoring * Log Analytics * Search Analytics
> >>> Solr & Elasticsearch Support * http://sematext.com/
> >>>
> >>>
> >>> On Thu, Nov 28, 2013 at 9:37 AM, Demian Berjman <dberj...@despegar.com>
> >>> wrote:
> >>>
> >>>> Hi.
> >>>>
> >>>> Has anyone built a durable retry system for the producer, in case the
> >>>> Kafka cluster is down? We have certain messages that must be sent because
> >>>> they come after a transaction that cannot be undone.
> >>>>
> >>>> We could set the property "message.send.max.retries", but if the producer
> >>>> goes down, we lose the messages.
> >>>>
> >>>> Thanks,
> >>
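
And for the "process it later" half, a sketch of a replay pass over that spill file, under the same assumptions -- the file layout and the rename step are just illustrative, and crash recovery mid-replay isn't handled:

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;

    // Re-reads the spill file written above and pushes each line back into Kafka.
    // Run periodically (cron, scheduled executor) once the cluster looks healthy again.
    public class SpillReplayer {
        public static void replay(SpillingProducer producer, String spillPath) throws IOException {
            File spill = new File(spillPath);
            if (!spill.exists()) {
                return; // nothing to replay
            }
            // Rename first so messages that fail again get re-spilled into a fresh file,
            // not the one we are about to delete.
            File replaying = new File(spillPath + ".replaying");
            if (!spill.renameTo(replaying)) {
                return; // rename failed or another replay is in progress
            }
            BufferedReader in = new BufferedReader(new FileReader(replaying));
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split("\t", 2);     // topic \t message
                producer.sendOrSpill(parts[0], parts[1]); // re-spills if Kafka is still down
            }
            in.close();
            replaying.delete();
        }
    }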