[ https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16534469#comment-16534469 ]
Richard Yu commented on KAFKA-6127:
-----------------------------------
Well, with the current setup we have for KafkaConsumer, it appears that a
user-specified timeout was favored over a configuration-defined one. (An entire
argument was waged over this topic, go figure.) If we were to continue this
trend, a user-specified timeout would likely be what gets implemented here too.
(This is Streams after all; some more flexibility in how long one would block
would probably be better.)
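To make the two options concrete, here is a minimal sketch (not from this ticket; the broker address, group id, topic name, and timeout values are made-up placeholders) of how KIP-266 exposes both styles on the plain consumer: a configuration-defined bound via {{default.api.timeout.ms}} and a user-specified bound via the {{Duration}} overloads:

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TimeoutException;

public class TimeoutStyles {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // placeholder broker
        props.put("group.id", "example-group");               // placeholder group id
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // Configuration-defined bound (KIP-266): applies to blocking calls
        // that are made without an explicit timeout argument.
        props.put("default.api.timeout.ms", "30000");

        TopicPartition tp = new TopicPartition("input-topic", 0); // placeholder topic

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(tp));
            try {
                consumer.position(tp);                         // bounded by default.api.timeout.ms
                consumer.position(tp, Duration.ofSeconds(5));  // user-specified bound for this call
                consumer.committed(tp, Duration.ofSeconds(5)); // same pattern for committed()
                consumer.commitSync(Duration.ofSeconds(5));    // ...and for commitSync()
            } catch (TimeoutException e) {
                // The call gives up instead of blocking the thread forever.
            }
        }
    }
}
{code}

Whichever style Streams ends up with, the point of the ticket holds either way: there is a bound, and the resulting {{TimeoutException}} gets caught rather than left to stall the {{StreamThread}}.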
> Streams should never block infinitely
> -------------------------------------
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
> Issue Type: Bug
> Components: streams
> Affects Versions: 1.0.0
> Reporter: Matthias J. Sax
> Assignee: Richard Yu
> Priority: Major
> Labels: exactly-once
>
> Streams uses three consumer APIs that can block infinitely: {{commitSync()}},
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block.
> If EOS is enabled, {{KafkaProducer#initTransactions()}} also used to block
> (fixed in KAFKA-6446), and we should double-check the code to make sure we
> handle this case correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the
> instance does not make any progress, becomes unresponsive (for example,
> {{KafkaStreams#close()}} suffers), and we might also drop out of the consumer
> group.
> Thanks to
> [KIP-266|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886],
> the Consumer now has non-blocking variants that we can use, but the same is
> not true of the Producer. We can add non-blocking variants to the Producer as
> well, or set the appropriate config options to cap the maximum blocking time.
> Of course, we'd also need to be sure to catch the appropriate timeout
> exceptions.
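On the Producer side, pending such non-blocking variants, here is a minimal sketch (again not from the ticket; broker address, topic, and timeout are made-up placeholders) of the config-based route: capping how long {{send()}} may block via {{max.block.ms}} and catching the resulting {{TimeoutException}}:

{code:java}
import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.TimeoutException;

public class BoundedProducerSend {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // placeholder broker
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        // max.block.ms caps how long send() may block, e.g. while waiting
        // for topic metadata or for space in the record buffer.
        props.put("max.block.ms", "10000");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            try {
                Future<RecordMetadata> result = producer.send(
                    new ProducerRecord<byte[], byte[]>("output-topic",  // placeholder topic
                        "value".getBytes()));
                result.get(); // delivery errors surface here as ExecutionException
            } catch (TimeoutException e) {
                // send() gave up after max.block.ms; handle it instead of
                // letting the StreamThread stall indefinitely.
            }
        }
    }
}
{code}

The alternative the ticket mentions, adding explicit timeout overloads to the Producer API, would make the bound per-call rather than per-client, mirroring the KIP-266 Consumer overloads.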
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)