How do other systems deal with that? If I send "commit" to Oracle, but
my connection dies before I get the ack, is the data committed or not?

What about the other case? If I send "commit" to Oracle, but the
server dies before I get the ack, is the data committed or not?

In either case, how can I tell?

--Tom

On Fri, Oct 26, 2012 at 12:15 AM, Jun Rao <jun...@gmail.com> wrote:
> Even if you have transaction support, the same problem exists. If the
> client dies before receiving the ack, it can't be sure whether the broker
> really committed the data or not.
>
> To address this issue, the client can periodically save the offset of the
> messages it knows were committed. On restart after a crash, it first reads
> all messages after the last saved offset. It then knows whether its last
> message was committed and can decide whether that message needs to be
> resent. This probably only works for a single producer.
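
For concreteness, here's a rough sketch of how I read that recovery step. It
assumes a single producer, and readMessagesAfter()/resend() are hypothetical
placeholders for whatever consumer and producer calls the application actually
uses:

    import java.util.List;

    public class ProducerRecovery {

        // Hypothetical view of the log and of the producer's resend path.
        interface Log {
            List<String> readMessagesAfter(long offset);
            void resend(String message);
        }

        // On restart, decide whether the last message we tried to send before
        // the crash actually made it into the log, and resend it only if not.
        static void recover(Log log, long lastSavedOffset, String lastAttemptedMessage) {
            List<String> committed = log.readMessagesAfter(lastSavedOffset);
            if (!committed.contains(lastAttemptedMessage)) {
                log.resend(lastAttemptedMessage);  // safe: it never got in
            }
            // If it is already there, do nothing -- resending would duplicate it.
        }
    }

As Jun says, this only works cleanly when the partition has exactly one
producer, since the read-back has to be attributable to this producer's writes.
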
>
> Thanks,
>
> Jun
>
> On Thu, Oct 25, 2012 at 6:31 PM, Philip O'Toole <phi...@loggly.com> wrote:
>
>> On Thu, Oct 25, 2012 at 06:19:04PM -0700, Neha Narkhede wrote:
>> > The closest concept to a transaction on the publisher side that I can
>> > think of is sending a batch of messages in a single call to the
>> > synchronous producer.
>> >
>> > Specifically, you can configure a Kafka producer to use "sync" mode
>> > and batch the messages that require transactional guarantees into a
>> > single send() call. That ensures that either all the messages in
>> > the batch are sent or none are.
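
Something along these lines, I take it -- a rough sketch against the 0.8-style
Java producer API (the class and property names are an assumption and may
differ in the 0.7 client that is current as of this thread; the broker address
and the "orders" topic are made up for illustration):

    import java.util.Arrays;
    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class SyncBatchSend {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "localhost:9092");  // assumed local broker
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("producer.type", "sync");  // block until the broker responds

            Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

            // One synchronous send() carrying the whole batch; per Neha's
            // description, either all of these messages go in or none do.
            producer.send(Arrays.asList(
                new KeyedMessage<String, String>("orders", "order-created"),
                new KeyedMessage<String, String>("orders", "order-paid")));

            producer.close();
        }
    }
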
>>
>> This is an interesting feature -- something I wasn't aware of. Still, it
>> doesn't solve the problem *completely*. As many people realise, it's still
>> possible for the batch of messages to get into Kafka fine, but for the ack
>> from Kafka to be lost on its way back to the Producer. In that case the
>> Producer erroneously believes the messages didn't get in, and might re-send them.
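
To make that failure mode concrete, a rough sketch (again assuming the
0.8-style Java producer API, which is a guess about the exact client; the
broad catch is deliberate, since the point is only that the producer cannot
tell what actually happened on the broker):

    import java.util.List;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;

    public class NaiveRetry {
        // Sketch only: shows how a lost ack turns into duplicates, not how to fix it.
        static void sendWithNaiveRetry(Producer<String, String> producer,
                                       List<KeyedMessage<String, String>> batch) {
            try {
                producer.send(batch);  // sync send: blocks until the broker responds
            } catch (Exception e) {
                // The ack (or the connection) was lost. The broker may or may not
                // have written the batch -- the producer cannot tell from here.
                producer.send(batch);  // naive retry: duplicates the batch if the
                                       // first attempt actually succeeded
            }
        }
    }
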
>>
>> You guys *haven't* solved that issue, right? I believe you write about it
>> on the Kafka site.
>>
>> >
>> > Thanks,
>> > Neha
>> >
>> > On Thu, Oct 25, 2012 at 2:44 PM, Tom Brown <tombrow...@gmail.com> wrote:
>> > > Is there an accepted or recommended way to make writes to a Kafka
>> > > queue idempotent, or to wrap them in a transaction?
>> > >
>> > > I can configure my system such that each queue has exactly one
>> > > producer.
>> > >
>> > > (If there are no accepted/recommended ways, I have a few ideas I would
>> > > like to propose. I would also be willing to implement them if needed.)
>> > >
>> > > Thanks in advance!
>> > >
>> > > --Tom
>>
>> --
>> Philip O'Toole
>>
>> Senior Developer
>> Loggly, Inc.
>> San Francisco, Calif.
>> www.loggly.com
>>
>> Come join us!
>> http://loggly.com/company/careers/
>>
