There are a few oddities in using the batching feature to get a kind of
transactionality (which wasn't its original intention):
1. It only actually works if you enable compression. Currently I don't
think we allow uncompressed recursive message batches. Without this the
batching only protects against producer failure; in the case of broker
failure there is no guarantee (either with or without replication) that you
won't fail in the middle of writing the batch. We could
consider separating the batching from the compression support to make
this work in a more sane way. (See the sketch after this list.)
2. It doesn't work across partitions or topics.
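
For reference, here is a minimal sketch of the sync-batch approach Neha
describes, assuming the 0.7-era Java producer API (the property names,
Producer, ProducerConfig, and ProducerData classes below are from that
version and may differ in yours; treat it as illustrative, not canonical):

  import java.util.Arrays;
  import java.util.List;
  import java.util.Properties;

  import kafka.javaapi.producer.Producer;
  import kafka.javaapi.producer.ProducerData;
  import kafka.producer.ProducerConfig;

  public class BatchAsTransaction {
      public static void main(String[] args) {
          Properties props = new Properties();
          // ZooKeeper connection used for broker discovery (0.7-style config)
          props.put("zk.connect", "localhost:2181");
          props.put("serializer.class", "kafka.serializer.StringEncoder");
          // "sync" makes send() block until the broker has accepted the request
          props.put("producer.type", "sync");
          // enable compression so the batch is wrapped as a single compressed
          // message set (see point 1 above); 1 = gzip in this API
          props.put("compression.codec", "1");

          Producer<String, String> producer =
                  new Producer<String, String>(new ProducerConfig(props));

          // Everything that must succeed or fail together goes in one send()
          // call, and (per point 2) must target a single topic/partition.
          List<String> batch = Arrays.asList("event-1", "event-2", "event-3");
          producer.send(new ProducerData<String, String>("my-topic", batch));

          producer.close();
      }
  }

Even with this, the all-or-nothing behavior depends on the compressed batch
being stored as a single message on the broker, which is exactly the caveat
in point 1.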

-Jay

On Thu, Oct 25, 2012 at 6:19 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:

> The closest concept of a transaction on the publisher side that I can
> think of is sending a batch of messages in a single call to the
> synchronous producer.
>
> More precisely, you can configure a Kafka producer to use the "sync" mode
> and batch the messages that require transactional guarantees into a
> single send() call. That will ensure that either all the messages in
> the batch are sent or none are.
>
> Thanks,
> Neha
>
> On Thu, Oct 25, 2012 at 2:44 PM, Tom Brown <tombrow...@gmail.com> wrote:
> > Is there an accepted or recommended way to make writes to a Kafka
> > queue idempotent, or to wrap them in a transaction?
> >
> > I can configure my system such that each queue has exactly one producer.
> >
> > (If there are no accepted/recommended ways, I have a few ideas I would
> > like to propose. I would also be willing to implement them if needed.)
> >
> > Thanks in advance!
> >
> > --Tom
>