Ah, that is right. We just need to make sure this ticket goes out along with
the flush() call in the next release, then.
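The metadata behavior described below can be sketched roughly as follows. This is an illustrative Python model of the bug, not the actual producer code; the class and field names are made up for the example:

```python
# Toy model of the producer-metadata behavior described in KAFKA-2042.
# Illustrative only -- names and structure are invented for this sketch.

class Metadata:
    def __init__(self):
        self.topics = set()   # topics the producer has asked to track
        self.known = {}       # topic -> partition info from the last refresh

    def refresh(self, cluster_topics):
        if not self.topics:
            # Empty topic list: the refresh returns metadata for ALL topics.
            self.known = {t: "partitions" for t in cluster_topics}
        else:
            # Non-empty list: only the listed topics are refreshed, so
            # metadata for every other topic is dropped.
            self.known = {t: "partitions" for t in cluster_topics
                          if t in self.topics}

cluster = {"orders", "clicks"}
md = Metadata()
md.refresh(cluster)           # startup: empty list -> all topics known
assert md.known.keys() == {"orders", "clicks"}

# send() to "orders" finds its metadata already available, so "orders"
# is never added to md.topics. Later a brand-new topic forces a refresh:
md.topics.add("payments")
cluster.add("payments")
md.refresh(cluster)           # refresh covers just the single new topic...
assert md.known.keys() == {"payments"}   # ...losing "orders" and "clicks"
```

With the metadata for the in-flight topics gone, records already sitting in the accumulator can never complete, which is why a blocking flush() hangs.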

On Tue, Mar 24, 2015 at 3:09 PM, Jun Rao <j...@confluent.io> wrote:

> Hi, Guozhang,
>
> The flush() was added to the new producer in trunk, not in 0.8.2, right?
>
> Thanks,
>
> Jun
>
> On Tue, Mar 24, 2015 at 2:42 PM, Guozhang Wang <wangg...@gmail.com> wrote:
>
> > Hello,
> >
> > We found a serious bug while testing flush() calls in the new producer,
> > which is summarized in KAFKA-2042.
> >
> > In general, when the producer starts up it tries to refresh metadata
> > with an empty topic list, and hence gets the metadata for all topics.
> > When a message is later sent to some topic, that topic is not added to
> > the metadata's topic list, since its metadata is already available. If,
> > while that data is still sitting in the accumulator, a new topic is
> > created, the resulting metadata refresh covers just this single topic,
> > losing the metadata for all other topics. Under usual scenarios the
> > messages simply sit in the accumulator until another send() is
> > triggered with the same topic, but since flush() is a blocking call,
> > it greatly increases the likelihood that this issue is exposed as
> > messages blocked forever inside flush().
> >
> > I am writing to ask if people think this problem is severe enough that
> > requires another bug-fix release.
> >
> > -- Guozhang
> >
>



-- 
-- Guozhang
