[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2020-09-11 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17194590#comment-17194590
 ] 

Guozhang Wang commented on KAFKA-6127:
--

I think we can close this ticket now, since:

* All blocking client calls should be covered by KIP-572 now (see the config sketch below).
* Exceptional cases that would prevent the producer / consumer from making progress 
are covered by other, separately filed tickets now.
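
For reference, a minimal sketch of the KIP-572 knob from an application's point of view. It assumes the {{task.timeout.ms}} config name introduced by that KIP; the application id and bootstrap servers are placeholders, not a recommended setup:

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class Kip572ConfigSketch {
    // Builds Streams properties that bound internal retries via KIP-572.
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");        // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        // KIP-572: how long a task may make no progress due to client
        // TimeoutExceptions before Streams re-throws the last timeout.
        props.put("task.timeout.ms", 300_000L); // 5 minutes
        return props;
    }
}
{code}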

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> Thanks to 
> [KIP-266|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886],
> the Consumer now has non-blocking variants that we can use, but the same is 
> not true of the Producer. We can add non-blocking variants to the Producer as 
> well, or set the appropriate config options to bound the maximum blocking time.
> Of course, we'd also need to be sure to catch the appropriate timeout 
> exceptions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2020-09-11 Thread Sophie Blee-Goldman (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17194565#comment-17194565
 ] 

Sophie Blee-Goldman commented on KAFKA-6127:


[~guozhang] any thoughts on this? You probably touched on a lot of these calls 
back in "The Refactor" – I remember seeing that we started to catch 
TimeoutException in the Consumer#position call, for example.

I know there are some related issues with infinite blocking in exceptional 
cases, for example when a topic is deleted out from under a Producer, but we 
already have a separate ticket for that. If that sort of thing is the only issue 
remaining, we should maybe close this as a duplicate instead.

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> Thanks to 
> [KIP-266|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886],
> the Consumer now has non-blocking variants that we can use, but the same is 
> not true of the Producer. We can add non-blocking variants to the Producer as 
> well, or set the appropriate config options to bound the maximum blocking time.
> Of course, we'd also need to be sure to catch the appropriate timeout 
> exceptions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2020-09-11 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17194548#comment-17194548
 ] 

Matthias J. Sax commented on KAFKA-6127:


Not 100% sure either :) – this ticket is almost 3 years old. If you have time, 
feel free to double-check the usage of both consumer and producer calls. My gut 
feeling is that this ticket might be void now (also considering the work on 
KIP-572), but we should double-check before we close it.

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> Thanks to 
> [KIP-266|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886],
> the Consumer now has non-blocking variants that we can use, but the same is 
> not true of the Producer. We can add non-blocking variants to the Producer as 
> well, or set the appropriate config options to bound the maximum blocking time.
> Of course, we'd also need to be sure to catch the appropriate timeout 
> exceptions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2020-09-11 Thread Sophie Blee-Goldman (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17194543#comment-17194543
 ] 

Sophie Blee-Goldman commented on KAFKA-6127:


[~mjsax] is this ticket still relevant? I didn't think any of the Consumer APIs 
we use block indefinitely anymore, since the default.api.timeout.ms config takes 
effect. I'm less sure about the Producer APIs.
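
For context, a minimal sketch of the client-level timeouts in play here; the config names are the standard consumer/producer configs, and the values shown are just their documented defaults, not a recommendation:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ClientTimeoutConfigSketch {
    public static Properties consumerProps() {
        Properties props = new Properties();
        // Upper bound for consumer calls without an explicit timeout parameter,
        // e.g. position(), committed(), commitSync().
        props.put(ConsumerConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, 60_000);
        return props;
    }

    public static Properties producerProps() {
        Properties props = new Properties();
        // Upper bound for how long send()/partitionsFor() may block, e.g. while
        // fetching metadata or waiting for buffer space.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60_000);
        // Upper bound on the total time to report success or failure for a record.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000);
        return props;
    }
}
{code}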

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> Thanks to 
> [KIP-266|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886],
> the Consumer now has non-blocking variants that we can use, but the same is 
> not true of the Producer. We can add non-blocking variants to the Producer as 
> well, or set the appropriate config options to bound the maximum blocking time.
> Of course, we'd also need to be sure to catch the appropriate timeout 
> exceptions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2019-11-24 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16981229#comment-16981229
 ] 

Matthias J. Sax commented on KAFKA-6127:


Unassigned this ticket due to inactivity. Feel free to resume it at any point.

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> Thanks to 
> [KIP-266|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886],
> the Consumer now has non-blocking variants that we can use, but the same is 
> not true of the Producer. We can add non-blocking variants to the Producer as 
> well, or set the appropriate config options to bound the maximum blocking time.
> Of course, we'd also need to be sure to catch the appropriate timeout 
> exceptions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2019-01-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16731746#comment-16731746
 ] 

ASF GitHub Bot commented on KAFKA-6127:
---

ConcurrencyPractitioner commented on pull request #5333: [KAFKA-6127] Streams 
should never block infinitely
URL: https://github.com/apache/kafka/pull/5333
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Richard Yu
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> Thanks to 
> [KIP-266|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886],
> the Consumer now has non-blocking variants that we can use, but the same is 
> not true of the Producer. We can add non-blocking variants to the Producer as 
> well, or set the appropriate config options to bound the maximum blocking time.
> Of course, we'd also need to be sure to catch the appropriate timeout 
> exceptions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2018-07-06 Thread Richard Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534469#comment-16534469
 ] 

Richard Yu commented on KAFKA-6127:
---

Well, with the current setup that we have for KafkaConsumer, it appears that a 
user-specified timeout was favored over a configuration-defined one. (An entire 
argument was waged over this topic, go figure.) If we were to continue this 
trend, it might be that a user-specified timeout would be implemented here as 
well. (This is Streams after all; some more flexibility on the amount of time 
one would block would probably be better.)

 

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Richard Yu
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> Thanks to 
> [KIP-266|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886],
> the Consumer now has non-blocking variants that we can use, but the same is 
> not true of the Producer. We can add non-blocking variants to the Producer as 
> well, or set the appropriate config options to bound the maximum blocking time.
> Of course, we'd also need to be sure to catch the appropriate timeout 
> exceptions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2018-07-04 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532997#comment-16532997
 ] 

Guozhang Wang commented on KAFKA-6127:
--

That is a good question. I think we should consider whether it is worth adding 
more configs in Streams post KIP-266. Our current situation is:

1. We do have the {{RETRIES_CONFIG}} and {{RETRY_BACKOFF_MS_CONFIG}} configs in 
StreamsConfig, but today they are only used for the global consumer's 
{{globalConsumer.endOffsets(topicPartitions)}} and 
{{globalConsumer.partitionsFor(sourceTopic)}} calls (the retry pattern is 
sketched below), because we always try to complete the restoration of global 
stores before starting any stream threads.

2. We do not have anything like a {{MAX_BLOCK_MS}} config; we hard-code 
different values today for some of the callers, and for some other calls we do 
not provide a timeout at all and hence rely on the consumer's request timeout 
as the default, which is {{40 * 1000}} ms.

The question for 2) is whether it's better to define a global config and use it 
across all blocking calls to the consumer, or whether we should instead pass in 
a specific timeout per caller rather than just relying on the request timeout.

The question for 1) is whether we can just use a very large timeout value and 
then get rid of the retries.
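
To illustrate point 1, a rough sketch of bounded retries with backoff around a blocking metadata call; this is not the actual Streams code, and the parameter names ({{retries}}, {{retryBackoffMs}}, {{sourceTopic}}) just mirror the configs and variables mentioned above:

{code:java}
import java.util.List;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.errors.TimeoutException;

public class GlobalConsumerRetrySketch {
    // Retries a blocking metadata lookup a bounded number of times,
    // sleeping retryBackoffMs between attempts, instead of blocking forever.
    public static List<PartitionInfo> partitionsWithRetries(Consumer<?, ?> globalConsumer,
                                                            String sourceTopic,
                                                            int retries,
                                                            long retryBackoffMs)
            throws InterruptedException {
        TimeoutException lastTimeout = null;
        for (int attempt = 0; attempt <= retries; attempt++) {
            try {
                return globalConsumer.partitionsFor(sourceTopic);
            } catch (TimeoutException e) {
                lastTimeout = e;               // remember the failure
                Thread.sleep(retryBackoffMs);  // back off before the next attempt
            }
        }
        throw lastTimeout; // all attempts timed out
    }
}
{code}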

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Richard Yu
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> Thanks to 
> [KIP-266|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886],
> the Consumer now has non-blocking variants that we can use, but the same is 
> not true of the Producer. We can add non-blocking variants to the Producer as 
> well, or set the appropriate config options to bound the maximum blocking time.
> Of course, we'd also need to be sure to catch the appropriate timeout 
> exceptions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2018-07-04 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532315#comment-16532315
 ] 

ASF GitHub Bot commented on KAFKA-6127:
---

ConcurrencyPractitioner opened a new pull request #5333: [KAFKA-6127] Streams 
should never block infinitely
URL: https://github.com/apache/kafka/pull/5333
 
 
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Richard Yu
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> Thanks to 
> [KIP-266|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886],
> the Consumer now has non-blocking variants that we can use, but the same is 
> not true of the Producer. We can add non-blocking variants to the Producer as 
> well, or set the appropriate config options to bound the maximum blocking time.
> Of course, we'd also need to be sure to catch the appropriate timeout 
> exceptions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2018-07-04 Thread Richard Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532301#comment-16532301
 ] 

Richard Yu commented on KAFKA-6127:
---

Hi [~guozhang], now that we have created new methods (for {{KafkaConsumer}}) 
with bounded times, don't we also require some new configuration (determined by 
the user?) for how long we block? This change might require a KIP too. With the 
new {{Consumer}} API, we should migrate {{KafkaStreams}} to it as well.
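
For reference, a minimal sketch of the bounded-time overloads KIP-266 added, with a caller-chosen timeout; the 30-second bound and the class/method names here are placeholders:

{code:java}
import java.time.Duration;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TimeoutException;

public class BoundedConsumerCallSketch {
    // Uses the KIP-266 overloads so neither call can block longer than the bound.
    public static void fetchPositionAndCommit(Consumer<?, ?> consumer, TopicPartition tp) {
        Duration bound = Duration.ofSeconds(30); // caller-chosen, placeholder value
        try {
            long offset = consumer.position(tp, bound); // bounded position lookup
            System.out.println("position of " + tp + " is " + offset);
            consumer.commitSync(bound);                 // bounded commit
        } catch (TimeoutException e) {
            // the bound elapsed: decide whether to retry, skip, or fail the thread
        }
    }
}
{code}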

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Richard Yu
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> Thanks to 
> [KIP-266|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886],
> the Consumer now has non-blocking variants that we can use, but the same is 
> not true of the Producer. We can add non-blocking variants to the Producer as 
> well, or set the appropriate config options to bound the maximum blocking time.
> Of course, we'd also need to be sure to catch the appropriate timeout 
> exceptions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2018-07-03 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531763#comment-16531763
 ] 

John Roesler commented on KAFKA-6127:
-

Good call, [~Yohan123], I'll update the description.

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> We might consider using {{wakeup()}} calls to unblock those operations to 
> keep the {{StreamThread}} in a responsive state.
> Note: there are discussions about adding timeouts to those calls, and thus we 
> could get {{TimeoutExceptions}}. This would be easier to handle than using 
> {{wakeup()}}, so we should keep an eye on those discussions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2018-07-03 Thread Richard Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16530897#comment-16530897
 ] 

Richard Yu commented on KAFKA-6127:
---

This issue might be easier to fix now since KIP-266's PRs have been merged. I 
think we should revive this issue and get it resolved :).

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: exactly-once
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}. {{KafkaProducer#send()}} can also block. 
> If EOS is enabled, {{KafkaProducer#initTransactions()}} used to block as well 
> (fixed in KAFKA-6446), and we should double-check that we handle this case 
> correctly.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> We might consider using {{wakeup()}} calls to unblock those operations to 
> keep the {{StreamThread}} in a responsive state.
> Note: there are discussions about adding timeouts to those calls, and thus we 
> could get {{TimeoutExceptions}}. This would be easier to handle than using 
> {{wakeup()}}, so we should keep an eye on those discussions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2018-03-07 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16390108#comment-16390108
 ] 

Matthias J. Sax commented on KAFKA-6127:


It might be bad to block in poll() as well – if somebody wants to shut down the 
application, we should have a mechanism to interrupt a blocking call.
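
A minimal sketch of such an interrupt mechanism using {{Consumer#wakeup()}}: another thread (e.g. the close/shutdown path) calls {{wakeup()}}, which makes a blocked {{poll()}} throw {{WakeupException}}. The class and method names are illustrative only, not the actual Streams shutdown path:

{code:java}
import java.time.Duration;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.errors.WakeupException;

public class InterruptiblePollLoopSketch {
    private final AtomicBoolean running = new AtomicBoolean(true);

    // Called from the polling thread.
    public void runLoop(Consumer<byte[], byte[]> consumer) {
        try {
            while (running.get()) {
                consumer.poll(Duration.ofMillis(100)); // may block; wakeup() unblocks it
                // ... process records ...
            }
        } catch (WakeupException e) {
            // expected during shutdown: shutdown() called consumer.wakeup()
        } finally {
            consumer.close();
        }
    }

    // Called from another thread, e.g. the close() path.
    public void shutdown(Consumer<byte[], byte[]> consumer) {
        running.set(false);
        consumer.wakeup(); // interrupts a poll() that is currently blocking
    }
}
{code}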

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> We might consider using {{wakeup()}} calls to unblock those operations to 
> keep the {{StreamThread}} in a responsive state.
> Note: there are discussions about adding timeouts to those calls, and thus we 
> could get {{TimeoutExceptions}}. This would be easier to handle than using 
> {{wakeup()}}, so we should keep an eye on those discussions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2018-03-07 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16390105#comment-16390105
 ] 

Guozhang Wang commented on KAFKA-6127:
--

We do call poll() with a timeout, but yes, admittedly today that timeout is not 
strictly respected, since coordinator re-discovery / re-joining the group etc. 
is not fully covered by that timeout. So I think the title could be changed to 
"Streams should not block unexpectedly" :P

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> We might consider using {{wakeup()}} calls to unblock those operations to 
> keep the {{StreamThread}} in a responsive state.
> Note: there are discussions about adding timeouts to those calls, and thus we 
> could get {{TimeoutExceptions}}. This would be easier to handle than using 
> {{wakeup()}}, so we should keep an eye on those discussions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2018-03-06 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389095#comment-16389095
 ] 

Ewen Cheslack-Postava commented on KAFKA-6127:
--

Isn't a basic `poll()` also an issue since it blocks on group membership?

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> We might consider using {{wakeup()}} calls to unblock those operations to 
> keep the {{StreamThread}} in a responsive state.
> Note: there are discussions about adding timeouts to those calls, and thus we 
> could get {{TimeoutExceptions}}. This would be easier to handle than using 
> {{wakeup()}}, so we should keep an eye on those discussions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2018-03-04 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16385636#comment-16385636
 ] 

Guozhang Wang commented on KAFKA-6127:
--

I think KAFKA-6608 is being addressed as part of KAFKA-4879 and KIP-226. But 
for Streams, I agree that we should check all of its callers and handle the 
thrown timeout exceptions accordingly (and maybe differently).

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> We might consider using {{wakeup()}} calls to unblock those operations to 
> keep the {{StreamThread}} in a responsive state.
> Note: there are discussions about adding timeouts to those calls, and thus we 
> could get {{TimeoutExceptions}}. This would be easier to handle than using 
> {{wakeup()}}, so we should keep an eye on those discussions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6127) Streams should never block infinitely

2018-03-04 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16385472#comment-16385472
 ] 

Matthias J. Sax commented on KAFKA-6127:


If KAFKA-6608 is resolved, we need to consider how to handle a timeout exception 
from {{KafkaConsumer#position()}} within Streams.
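
A rough sketch of one way to handle that, tolerating the timeout and letting the caller retry on its next iteration instead of failing the whole {{StreamThread}}; the class/method names and the "return null to retry later" convention are hypothetical, not the actual Streams code:

{code:java}
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TimeoutException;

public class PositionTimeoutHandlingSketch {
    // Returns the consumer's position, or null if the lookup timed out,
    // so the caller can retry later rather than dying on the exception.
    public static Long tryGetPosition(Consumer<?, ?> consumer, TopicPartition tp) {
        try {
            return consumer.position(tp);
        } catch (TimeoutException e) {
            return null; // signal "not available yet; retry on the next pass"
        }
    }
}
{code}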

> Streams should never block infinitely
> -
>
> Key: KAFKA-6127
> URL: https://issues.apache.org/jira/browse/KAFKA-6127
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Priority: Major
>
> Streams uses three consumer APIs that can block indefinitely: {{commitSync()}}, 
> {{committed()}}, and {{position()}}.
> If we block within one operation, the whole {{StreamThread}} blocks, the 
> instance does not make any progress, becomes unresponsive (for example, 
> {{KafkaStreams#close()}} suffers), and we also might drop out of the consumer 
> group.
> We might consider using {{wakeup()}} calls to unblock those operations to 
> keep the {{StreamThread}} in a responsive state.
> Note: there are discussions about adding timeouts to those calls, and thus we 
> could get {{TimeoutExceptions}}. This would be easier to handle than using 
> {{wakeup()}}, so we should keep an eye on those discussions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)