Hello,
I sometimes see large lag on my MM2 clusters after restarting a
cluster (they run on Kubernetes).
I have 3 MM2 clusters; each one reads from one source cluster and writes to
the same destination cluster.
I am seeing these errors on one of my clusters right now.
WorkerSourceTask{id=MirrorSourceConnector-33} Failed to flush, timed out
while waiting for producer to flush outstanding 1948 messages
WorkerSourceTask{id=MirrorSourceConnector-33} Failed to commit offsets
The cluster showing these errors is also lagging badly. Its lag is only
slowly going down, whereas the other clusters recovered from their lag
quickly after the update.
I saw some discussion on Stack Overflow about similar issues, but it was not
specific to MM2. The suggestions were:
- either increase the offset.flush.timeout.ms configuration parameter in
your Kafka Connect worker configs,
- or reduce the amount of data being buffered by decreasing
producer.buffer.memory in your Kafka Connect worker configs. This turns
out to be the best option when you have fairly large messages.
How do I apply these configs in my MM2 configuration, if that's even
possible or relevant? Has anyone faced similar behaviour?
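Based on those answers, I'm guessing something like the following in
mm2.properties for the dedicated MM2 driver, where the cluster aliases
(`source`, `dest`), bootstrap servers, and the specific values are just
placeholders, and I'm not sure these are the right places to set them:

```properties
# mm2.properties (dedicated MirrorMaker 2 driver) -- sketch, unverified
clusters = source, dest
source.bootstrap.servers = source-kafka:9092
dest.bootstrap.servers = dest-kafka:9092

source->dest.enabled = true

# Worker-level setting: allow more time for the producer to flush
# outstanding records before the offset commit is abandoned.
offset.flush.timeout.ms = 60000

# Per-cluster client override (cluster-alias prefix): shrink the
# producer buffer so fewer records are outstanding at flush time.
dest.producer.buffer.memory = 16777216
```

If worker-level Connect settings and alias-prefixed producer overrides
aren't honoured this way in mm2.properties, a correction would be
appreciated.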
Thanks,
Iftach