Hi all,

I’m experimenting with the new Queues for Kafka (KIP-932), since we're
considering replacing our current message broker (Google Pub/Sub), and I
ran into the maximum limit for `group.share.max.record.lock.duration.ms`,
which is currently capped at one hour (3,600,000 ms).
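For reference, here is my understanding of the relevant broker-side
properties from KIP-932 (the values are my reading of the defaults, so
please correct me if I'm off):

```properties
# Per-record acquisition lock duration for share groups (broker default)
group.share.record.lock.duration.ms=30000
# Bounds that any group-level override must stay within
group.share.min.record.lock.duration.ms=15000
group.share.max.record.lock.duration.ms=3600000
```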

My use case involves records whose processing is long-running and can
exceed the one-hour threshold. The work cannot be handed off to another
consumer mid-flight, so when the lock expires and the record is
redelivered, the original consumer's acknowledgement fails.

On Pub/Sub, we work around this by extending the ack deadline while
processing is still in progress, but we haven't found an equivalent
mechanism for Kafka share groups.

I understand the cap is enforced broker-side via ShareGroupConfig, but I
wanted to ask:

- Is there any discussion or JIRA issue around raising this maximum limit?
- Has anyone else worked around this constraint in a production setting?

Any guidance or references would be greatly appreciated.

Thanks,
Iago David
