Hi all,
I have a simple stream application pipeline
src.filter.aggregateByKey.mapValues.foreach
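
In code the topology looks roughly like this (a sketch against the
0.10.0.x API; the input topic, serdes and aggregation logic are
placeholders, but the store name "key-table" is what produces the
changelog topic in the error below, since Streams names it
<application.id>-<store name>-changelog):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> src =
    builder.stream(Serdes.String(), Serdes.String(), "input-topic");
KTable<String, Integer> counts = src
    .filter((key, value) -> value != null)       // drop bad records
    .aggregateByKey(
        () -> "",                                // initializer
        (key, value, agg) -> agg + value,        // aggregator
        Serdes.String(), Serdes.String(),
        "key-table")   // state store; backs test-stream-key-table-changelog
    .mapValues(String::length);
counts.foreach((key, len) -> { /* commit to external DB */ });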

From time to time I get the following exception:
Error sending record to topic test-stream-key-table-changelog
org.apache.kafka.common.errors.TimeoutException: Batch containing 2
record(s) expired due to timeout while requesting metadata from brokers for
test-stream-key-table-changelog-0

What could be causing the issue?
I investigated a bit and saw that none of the stages takes long. Even
the foreach stage, where we commit the output to an external DB, takes
under 100 ms in the worst case.

For now I have worked around it with:
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 1800000);

This raises the producer request timeout from its default of 30 seconds
to 30 minutes.
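
For context, the override sits in the same Properties used to build the
streams app (a sketch; the application id is taken from the changelog
topic name in the error, the bootstrap server is a placeholder, and
producer settings placed here are forwarded to the internal producer):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-stream");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
// workaround: raise the producer request timeout (default 30000 ms)
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 1800000);

KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();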

However, to dig deeper into the issue: where could the problem be?

Is some stage taking more than 30 seconds to execute? Or is it a
network issue, where connecting to the broker itself takes a long time?

Is there any logging I can enable on the streams side to get more
complete stack traces?
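
For example, would raising these loggers to DEBUG in log4j.properties
be the right direction (assuming the stock log4j setup)?

log4j.logger.org.apache.kafka.clients=DEBUG
log4j.logger.org.apache.kafka.streams=DEBUG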

Note that the issue occurs in bunches: everything works fine for a
while, then these exceptions arrive in a burst, then things are fine
again for some time, then another burst of exceptions, and so on.

Note that my version is kafka_2.10-0.10.0.1.

Thanks
Sachin
