Yeah, that is a good question. I think we started with good throughput and
scalability and are then adding in the features we can without breaking
those things or over-complicating the design. For example, in 0.8 we have
*much* better delivery guarantees than most messaging systems (imo).

The use case I gave is my motivation for being interested in this. Giving
good semantics to incremental processing is an important use case and one
that often goes along with high throughput requirements (which is why you
wouldn't just use a traditional RDBMS).

I think the question here is whether there is an implementation of
transactions that works well in our design and doesn't kill performance.

-Jay

On Fri, Oct 26, 2012 at 11:29 AM, Jason Rosenberg <j...@squareup.com> wrote:

> Correct me if I'm wrong, but I thought the intention of Kafka was not
> really to handle this use case (e.g. transactional writes or guaranteed
> delivery semantics).  Why wouldn't you use a JMS queue system (e.g.
> ActiveMQ) if you need transactional messaging, backed by a database, etc.?
>
> Jason
>
>
> On Fri, Oct 26, 2012 at 11:18 AM, Jay Kreps <jay.kr...@gmail.com> wrote:
>
> > There are a few oddities of using the batching feature to get a kind of
> > transactionality (that wasn't the original intention):
> > 1. It only actually works if you enable compression. Currently I don't
> > think we allow uncompressed recursive message batches. Without compression
> > the batching only protects against producer failure; in the case of broker
> > failure there is no guarantee (either with or without replication) that you
> > won't fail in the middle of writing the batch (see the config sketch after
> > this list). We could consider separating out the batching from the
> > compression support to make this work in a more sane way.
> > 2. It doesn't work across partitions or topics.
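> >
> > As a rough illustration of point 1 (just a sketch: the property names are
> > the 0.8-era producer config names, and the broker address is made up),
> > enabling compression alongside sync mode, so that a batch is written as a
> > single compressed message set, would look something like this:
> >
> >     // Producer properties (0.8-era names); compression is the key part here.
> >     Properties props = new Properties();
> >     props.put("metadata.broker.list", "broker1:9092");  // assumed broker address
> >     props.put("serializer.class", "kafka.serializer.StringEncoder");
> >     props.put("producer.type", "sync");        // block until the send is acked
> >     props.put("compression.codec", "gzip");    // batch is written as one compressed message set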
> >
> > -Jay
> >
> > On Thu, Oct 25, 2012 at 6:19 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:
> >
> > > The closest concept of a transaction on the publisher side that I can
> > > think of is sending a batch of messages in a single call to the
> > > synchronous producer.
> > >
> > > Specifically, you can configure a Kafka producer to use "sync" mode
> > > and batch the messages that require transactional guarantees into a
> > > single send() call. That will ensure that either all the messages in
> > > the batch are sent or none are.
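> > >
> > > A minimal sketch of that pattern, assuming the 0.8-era Java producer
> > > API (the broker address and the "events" topic are made-up examples):
> > >
> > >     import java.util.ArrayList;
> > >     import java.util.List;
> > >     import java.util.Properties;
> > >     import kafka.javaapi.producer.Producer;
> > >     import kafka.producer.KeyedMessage;
> > >     import kafka.producer.ProducerConfig;
> > >
> > >     public class BatchSendSketch {
> > >         public static void main(String[] args) {
> > >             Properties props = new Properties();
> > >             props.put("metadata.broker.list", "broker1:9092");  // assumed broker address
> > >             props.put("serializer.class", "kafka.serializer.StringEncoder");
> > >             props.put("producer.type", "sync");  // synchronous send
> > >             Producer<String, String> producer =
> > >                 new Producer<String, String>(new ProducerConfig(props));
> > >
> > >             // Group the messages that must succeed or fail together into one
> > >             // batch and hand them to a single send() call.
> > >             List<KeyedMessage<String, String>> batch =
> > >                 new ArrayList<KeyedMessage<String, String>>();
> > >             batch.add(new KeyedMessage<String, String>("events", "update-1"));
> > >             batch.add(new KeyedMessage<String, String>("events", "update-2"));
> > >             producer.send(batch);  // either the whole batch is accepted or this throws
> > >         }
> > >     }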
> > >
> > > Thanks,
> > > Neha
> > >
> > > On Thu, Oct 25, 2012 at 2:44 PM, Tom Brown <tombrow...@gmail.com> wrote:
> > > > Is there an accepted or recommended way to make writes to a Kafka
> > > > queue idempotent, or to do them within a transaction?
> > > >
> > > > I can configure my system such that each queue has exactly one
> > > > producer.
> > > >
> > > > (If there are no accepted/recommended ways, I have a few ideas I would
> > > > like to propose. I would also be willing to implement them if needed.)
> > > >
> > > > Thanks in advance!
> > > >
> > > > --Tom
> > >
> >
>
