Hello Kafka Team,

I would appreciate any insight into how to distinguish between a broker being
down vs. metadata not being available due to a refresh/timing issue.

Thanks,

Bhavesh

On Mon, Sep 19, 2022 at 12:50 PM Bhavesh Mistry <mistry.p.bhav...@gmail.com>
wrote:

> Hello Kafka Team,
>
>
>
> We have an environment where Kafka Broker can go down for whatever reason.
>
>
>
> Hence, we had configured MAX_BLOCK_MS_CONFIG=0 because we wanted to drop
> messages when brokers were NOT available.
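>
> For reference, here is roughly how we configure the producer (a simplified
> sketch; the bootstrap address and serializers below are placeholders, not
> our exact settings):
>
>     import java.util.Properties;
>     import org.apache.kafka.clients.producer.KafkaProducer;
>     import org.apache.kafka.clients.producer.ProducerConfig;
>     import org.apache.kafka.common.serialization.StringSerializer;
>
>     Properties props = new Properties();
>     props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
>     props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
>     props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
>     // Never block the application thread waiting for metadata or buffer
>     // space; we would rather drop the record.
>     props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "0");
>     KafkaProducer<String, String> producer = new KafkaProducer<>(props);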
>
>
>
> Now the issue is that we also get data loss when METADATA is not available:
> send() fails with the exception “Topic <topic> not present in metadata after
> 0 ms.”. This happens because the cached metadata has expired and the next
> request to send an event does not have metadata available yet.
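>
> To make the ambiguity concrete, here is a simplified sketch of our send
> path (the topic name and handling are placeholders); the failure looks
> identical to us whether the broker is down or the metadata has merely
> expired:
>
>     import org.apache.kafka.clients.producer.ProducerRecord;
>     import org.apache.kafka.common.errors.TimeoutException;
>
>     ProducerRecord<String, String> record =
>         new ProducerRecord<>("my-topic", "key", "value"); // placeholder topic
>     try {
>         producer.send(record, (metadata, exception) -> {
>             if (exception instanceof TimeoutException) {
>                 // "Topic my-topic not present in metadata after 0 ms."
>                 // No way to tell here whether the broker is down or the
>                 // cached metadata simply expired; we just drop the record.
>             }
>         });
>     } catch (TimeoutException e) {
>         // Depending on the client version, the metadata timeout may also
>         // surface synchronously from send(); same ambiguity here.
>     }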
>
>
>
> Why does Kafka have this design?  Why can’t Kafka distinguish between a
> broker being down vs. a metadata refresh not being available?  Is it
> reasonable to expect the producer to refresh metadata BEFORE it expires, so
> that fresh metadata is always ready?  Is there any particular reason send()
> has to wait for a metadata refresh, rather than having a background thread
> automatically refresh metadata before it expires, so that send() never
> incurs the wait?
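>
> As a workaround we have been experimenting with (not sure whether this is
> idiomatic), a housekeeping thread periodically touches metadata for the
> topics we care about, so it is already cached when send() runs.  Sketch
> below; the topic name and interval are placeholders, and I am assuming
> that partitionsFor() still queues a metadata refresh for the background
> Sender thread even when it times out immediately under max.block.ms=0:
>
>     import java.util.concurrent.Executors;
>     import java.util.concurrent.ScheduledExecutorService;
>     import java.util.concurrent.TimeUnit;
>     import org.apache.kafka.common.errors.TimeoutException;
>
>     ScheduledExecutorService warmer = Executors.newSingleThreadScheduledExecutor();
>     warmer.scheduleAtFixedRate(() -> {
>         try {
>             // partitionsFor() asks the producer for topic metadata; with
>             // max.block.ms=0 it may throw right away, but the Sender
>             // thread should still refresh the cache (assumption).
>             producer.partitionsFor("my-topic");
>         } catch (TimeoutException e) {
>             // Ignore: we only want to keep the metadata warm.
>         }
>     }, 0, 30, TimeUnit.SECONDS);
>
> Is something like this reasonable, or is there a better supported way?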
>
>
> Let me know what suggestions you have to prevent the application thread
> from blocking (MAX_BLOCK_MS_CONFIG) when the Kafka brokers are DOWN vs.
> when metadata is NOT available due to expiration.
>
>
>
> Let me know your suggestions and what you think about metadata refresh.
> Should the Kafka producer proactively and intelligently refresh metadata,
> rather than refreshing it lazily as it does today?
>
>
>
>
>
> Thanks,
> Bhavesh
>
