Hi Bhavesh,

I understand your point.
There was an old KIP with a similar idea, but in the end it was not
accepted by the community.
Maybe you can try to bring it back to the community, or propose your own
KIP for this idea:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-286%3A+producer.send%28%29+should+not+block+on+metadata+update

Thank you.
Luke

On Sat, Sep 24, 2022 at 6:36 AM Bhavesh Mistry <mistry.p.bhav...@gmail.com>
wrote:

> Hello Kafka Team,
>
> I would appreciate any insight into how to distinguish between the broker
> being down and a metadata refresh not being available due to timing issues.
>
> Thanks,
>
> Bhavesh
>
> On Mon, Sep 19, 2022 at 12:50 PM Bhavesh Mistry <
> mistry.p.bhav...@gmail.com>
> wrote:
>
> > Hello Kafka Team,
> >
> >
> >
> > We have an environment where a Kafka broker can go down for whatever
> > reason.
> >
> >
> >
> > Hence, we configured MAX_BLOCK_MS_CONFIG=0 because we want to drop
> > messages when the brokers are NOT available.
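> >
> > For reference, this is roughly how we build the producer (a simplified
> > sketch; the broker address, serializers, and topic names below are
> > placeholders):
> >
> > import java.util.Properties;
> > import org.apache.kafka.clients.producer.KafkaProducer;
> > import org.apache.kafka.clients.producer.ProducerConfig;
> >
> > Properties props = new Properties();
> > props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
> > props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
> >     "org.apache.kafka.common.serialization.StringSerializer");
> > props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
> >     "org.apache.kafka.common.serialization.StringSerializer");
> > // Fail send() immediately instead of blocking the application thread
> > // when metadata (or buffer space) is not available.
> > props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "0");
> > KafkaProducer<String, String> producer = new KafkaProducer<>(props);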
> >
> >
> >
> > Now the issue is that we get data loss because METADATA is not available,
> > and we get this exception: “*Topic <topic> not present in metadata after
> > 0 ms.*” This is because the metadata has expired and the next request to
> > send an event does not have it.
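> >
> > Concretely, the failure surfaces through the send callback (a sketch,
> > continuing from the producer above; "my-topic" and the key/value are
> > placeholders):
> >
> > import org.apache.kafka.clients.producer.ProducerRecord;
> > import org.apache.kafka.common.errors.TimeoutException;
> >
> > producer.send(new ProducerRecord<>("my-topic", "key", "value"),
> >     (metadata, exception) -> {
> >         if (exception instanceof TimeoutException) {
> >             // With max.block.ms=0 and no cached metadata we see:
> >             // "Topic my-topic not present in metadata after 0 ms."
> >             // The record is dropped whether the broker is actually
> >             // down or the cached metadata merely expired.
> >         }
> >     });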
> >
> >
> >
> > Why does Kafka have this design?  Why can’t Kafka distinguish between
> > the broker being down and a metadata refresh not being available?  Is it
> > reasonable to expect metadata to be refreshed BEFORE it expires, so that
> > it is always ready?  Is there any particular reason send() has to wait
> > for a metadata refresh, rather than a background thread automatically
> > refreshing metadata before it expires so that send() never incurs a
> > wait()?
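> >
> > As a workaround, what I have in mind is a background "metadata warmer"
> > thread, roughly like this (an untested sketch continuing from the
> > snippets above; it relies on partitionsFor() asking the sender thread
> > to refresh metadata even when the call itself times out, and the
> > 60-second period is an arbitrary choice):
> >
> > import java.util.concurrent.Executors;
> > import java.util.concurrent.ScheduledExecutorService;
> > import java.util.concurrent.TimeUnit;
> >
> > ScheduledExecutorService warmer =
> >     Executors.newSingleThreadScheduledExecutor();
> > warmer.scheduleAtFixedRate(() -> {
> >     try {
> >         // Touching the topic marks it for a metadata refresh and keeps
> >         // it from expiring out of the producer's metadata cache.
> >         producer.partitionsFor("my-topic");
> >     } catch (TimeoutException e) {
> >         // Expected while brokers are unreachable; ignore and retry.
> >     }
> > }, 0, 60, TimeUnit.SECONDS);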
> >
> >
> > Let me know what suggestions you have for preventing the application
> > thread from blocking (MAX_BLOCK_MS_CONFIG) when the Kafka brokers are
> > DOWN, while still distinguishing that case from metadata NOT being
> > available due to expiration.
> >
> >
> >
> > Let me know your suggestions and what you think about metadata refresh.
> > Should the Kafka producer proactively and intelligently refresh
> > metadata, rather than doing what it does today?
> >
> >
> >
> >
> >
> > Thanks,
> > Bhavesh
> >
>
